Re: (Open) ZFS is pretty damned good already
You beat me to it, I was just about to say "Holy crap, they reinvented ZFS!"
"You can tell the Mac KVM Link software which border of your display to use to switch over, and simply dragging your mouse over that border switches any peripherals connected to the USB hub to the other device (your keyboard and mouse need to be connected to the hub, not your system)."
Some of us have been doing this using Synergy between Windows, and various *nix machines for a very long time. And before that there was x2vnc. And all without any additional hardware required.
Security has layers. Sandboxing means that the attacker has to have two exploits available to them - one to breach the application itself, and another to escape the sandbox. While this is not guaranteed to thwart the attack, it makes it more difficult and less likely to succeed.
It is past time that the standard security model on all operating systems was redesigned along lines similar to Android's, where every application runs inside its own sandbox.
User logs in as "username", and for each application on the system, for each user, an account is created, for example "username_firefox". Firefox then runs as it would if you execute it with "sudo -u username_firefox firefox", and if it is compromised, the only files available to the attacker are the ones available to account "username_firefox", not the parent account "username".
I switched to this model a while back when the Steam bug surfaced that deleted all files on the system the invoking user had permissions to delete.
This isn't even all that inconvenient, since you can simply make the "username_firefox" home directory setgid, group-owned by group "username" with group write permission, so the parent user can still go and access all the downloaded files and suchlike under the sandbox account.
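For anyone who wants to try this, here is a minimal sketch of the setup described above. The account name "username_firefox" is illustrative, not a convention, and the `xhost` grant is an assumption needed for the sandbox account to reach the parent user's X session:

```shell
# Create a dedicated, unprivileged account for the browser.
sudo useradd --create-home username_firefox

# Let the parent user's group into the sandbox home, and make the
# directory setgid so files created there inherit that group.
sudo chgrp username /home/username_firefox
sudo chmod g+rwxs /home/username_firefox

# Allow the sandbox account to talk to the parent user's X server,
# then launch the browser under the sandbox identity.
xhost +SI:localuser:username_firefox
sudo -u username_firefox firefox
```

If Firefox is then compromised, the attacker can only touch files readable by "username_firefox", not the parent account's data.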
What I keep wondering is why something like this hasn't been made to be default on any Linux distributions (other than Android, if you want to consider that a Linux distribution) already.
Their open source efforts are indeed great, but since the latest generation of cards is always only supported by the binary drivers, there is no incentive to buy the new generation of GPUs where AMD make their money. For open source usage you have to stick with buying cards from the current line-up that are re-badged GPUs from last year's line-up. The benefit for the consumer and the problem for AMD is that those GPUs are cheap new, and even cheaper 2nd hand on eBay.
I haven't done an exhaustive analysis of Nvidia vs. AMD driver bugs, but I have tried various generations of AMD/ATI GPUs, including the HD4870, HD6450, HD7970 and R9 290X, and struggled to get my IBM T221 monitors working with all of them.
HD4870: Randomly switched left and right side of the monitor between reboots for no obvious reason. Had a very annoying rendering bug with transparency/water in several games including Supreme Commander where the shallows were always opaque. I kept persevering for about 6 months before I caved in and got an 8800GT which manifested no obvious problems.
HD6450: Passively cooled, worked great on Linux with the open source driver. It wasn't possible to configure custom modes / refresh rates in Windows. Only AMD GPU I still have.
HD7970: Only one of the DVI ports was dual-link, the other was single link. I needed either two dual link or two single link ports to run my monitors, so I couldn't get this working at all on either Windows or Linux. So I got rid of it and got a GTX680.
R9 290X: No XP drivers even though XP was still supported at the time; I don't recall off the top of my head what the binary Linux driver issues I had were, but the open source driver didn't support it. Traded that in for a 780Ti.
Now, you cannot say that this is for lack of giving AMD's solution plenty of chances, but I always ended up with an Nvidia card in the end when I capitulated and needed something that "just works".
The only workable AMD based solutions are, in my experience, on Linux and only the ones that are a generation out of date and fully supported by the radeon open source driver.
Something that "just works" is far, far more important and valuable than chasing scores at the top end, especially since relatively few gamers do in fact buy top of the line cards because they are outrageously expensive.
The only reason why AMD had a good 2014 was because a lot of people were buying their cards for scrypt mining.
They don't have to be faster and cheaper, they just have to work perfectly regardless of the performance bracket they are in. AMD drivers are outrageously buggy and fall apart very quickly in anything resembling an unusual setup (e.g. try running a dual-input monitor like IBM T221 off an ATI card).
It is NOT all about performance. Intel's built-in GPUs are very popular for lower end gaming, especially on Linux where the drivers are completely open source. AMD cards also work great with the open source drivers if you limit yourself to a chipset that is at least a generation behind, but the profits are paper thin in the £50 GPU price bracket, unlike the £500 price range.
IMO AMD would do well to stop competing on performance and start improving the quality, stability and feature set of their drivers. There is no point in trying to compete at the top end where they are pre-emptively disadvantaged: no matter how good their hardware is, they will still fall short due to their software.
"Chief security officer Brad Arkin last year told the Australian Information Security Association that its focus on increasing the cost of exploiting Flash and Reader rather than just patching individual vulnerabilities..."
I completely removed it from all of my machines after the Hacking Team fiasco (had it set to "ask to run", and used FlashBlock until then) and can happily report that I have observed no obvious loss of functionality. Uninstalling it makes it _really_ expensive to exploit.
Mozilla has gone through cycles like this more than once before. Back in the day, v1 was a debloated fork of Netscape.
It then went on a massive diet with v4, where it actually managed to maintain a smaller memory footprint than contemporary Chrome.
It sounds like it is time to lean out the code base and cut out various useless crud. Big deal. It will no doubt happen again some time, but the fact that it is happening periodically is a good sign that there are developers ready to take positive action when things start to get bad.
FF will still have the advantage over Chrome when it comes to packaging due to much more sane and sensible treatment of the shared libraries it requires to build against (Chrome has to bring specific versions of most if not all of its 3rd party dependencies with it because it won't build against anything else, needlessly enlarging the memory footprint and reducing performance).
Setting up a tax free country in Antarctica might be cheaper than the Moon. And with the kind of investment they are capable of funding, a comfortable city under all that ice is not an infeasible project.
Something like Rapture from Bioshock.
Seems you beat me to making this point.
Ultimately, all this will achieve is ensure that nobody domiciled in Australia is employed in the process. What'll happen is that an Australian phone redirection service forwards a call to an office in Singapore, and the customer deals with somebody there.
Actually, a multi-tenanted WP setup is very easy to achieve - it is designed for it. I'm sure you can google it.
Features over stability and security is the blight of 21st century "agile" software development, caused by people incapable of handling the concept that you cannot implement before designing, and you cannot design before analysing requirements.
The biggest problem I have with WP is that it is rather difficult to reconcile its write permission wishes with basic security concepts without extensive per-file hand-crafting of permissions using either ACLs or SELinux or equivalents. The sanest solution I've been able to come up with is to simply make the entire directory subtree it is installed in readable but not writable by Apache. This, unfortunately, breaks features such as auto-updates and user content uploads.
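As a rough sketch of that read-only approach - the install path `/var/www/wordpress` and the `apache` user are assumptions that vary by distribution (Debian uses `www-data`):

```shell
# Make the WordPress tree owned by root, with the web server's group
# granted read-only access - Apache can serve it but never modify it.
chown -R root:apache /var/www/wordpress
find /var/www/wordpress -type d -exec chmod 750 {} +
find /var/www/wordpress -type f -exec chmod 640 {} +

# If user content uploads must keep working, selectively re-open only
# the uploads directory - this is exactly the trade-off noted above.
chown -R apache:apache /var/www/wordpress/wp-content/uploads
find /var/www/wordpress/wp-content/uploads -type d -exec chmod 770 {} +
```

Auto-updates then have to be done out-of-band (e.g. via wp-cli run as root), which is the price of not letting the web server write to its own code.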
Keeping the number of 3rd party plugins used to an absolute minimum also helps to reduce the attack surface, since most (but by no means all) exploits in WP are in 3rd party plugins rather than the core.
... to not having your WordPress folder writable by the apache user...
"So you should start by assuming HDD faults of around 5% per year and do the maths from that, not from claimed BER figures."
5% is near the AFR (Annual Failure Rate) ballpark. That's total failure of the disk, not the BER.
Here is a link to the most recent analysis by Backblaze:
Bit/sector errors are going to be considerably higher (unless you count a 1TB disk completely failing as constituting 8 terabits of errors).
It is also worth noting that AFR and BER relate to two very distinctly different failure modes. Traditional RAID protects you from complete failures (as measured by AFR), but is massively more wobbly in case of sectors duffing out. There is also a failure mode that is a subclass of the duff sectors and that is latent bit errors, which basically means the disk will feed back duff data rather than throw an error saying the sector was unreadable. This could happen for a number of reasons, including firmware bugs, phantom writes to the wrong sector, head misalignment causing the wrong sector to be read, etc. - and it happens more often than you might think. Here is a link to a very good paper on the subject:
Against these sorts of errors (by far the most dangerous kind), the _only_ available solution is a fully checksumming file system like ZFS, GPFS, or BTRFS (make sure your expectations are suitably low when trying the latter).
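For illustration, this is roughly what catching silent corruption looks like with ZFS (pool and device names are made up):

```shell
# Create a mirrored pool - every block is checksummed on write.
zpool create tank mirror /dev/sda /dev/sdb

# A scrub re-reads every block and verifies it against its checksum;
# blocks that fail verification are rewritten from the healthy mirror side.
zpool scrub tank

# The CKSUM column counts blocks that read back "successfully" but with
# wrong contents - exactly the latent-error failure mode plain RAID misses.
zpool status tank
```

A plain RAID mirror, by contrast, has no way to tell which side of the mirror is right when the two copies disagree.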
"If (i was using RAID and) recovery did go wrong, I'd expect it to recover everything it could, and apologise profusely for the odd file which was lost. If instead it wigs out and fails then you're better off not having it in the first place."
Except that we are talking about block level corruption, which sits underneath the FS level. Unless you are running a full-stack solution like ZFS it won't be trivial to even find out which file's block was occupying the block that failed to scrub out. With any block level corruption, there is a possibility that the entire FS might end up hosed, even if your RAID implementation is clever enough (and many aren't) to give up on the errant block and continue rebuilding the rest of the data.
The problem is compounded by the fact that most disks today come with no Time Limited Error Recovery (TLER), and those that do don't have it enabled by default. So when an unreadable block gets encountered, the disk will repeatedly try to read it while ignoring all other commands. Eventually the higher layers will time out the commands, and kick the disk out. At which point you will have lost the whole second disk from the array, and thus will quite likely need to restore the whole lot from a backup. With TLER, the disk will time out the command much sooner, before it gets kicked out of the array or off the controller.
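On drives that do support it but ship with it disabled, TLER/ERC can be inspected and enabled via SCT commands with `smartctl` (the device name is illustrative, and on many drives the setting does not survive a power cycle, so it needs re-applying at boot):

```shell
# Query SCT Error Recovery Control (TLER/ERC) support and current state.
smartctl -l scterc /dev/sda

# Cap read and write error recovery at 7 seconds (units of 100 ms), so an
# unreadable sector returns an error to the RAID layer promptly instead of
# the disk grinding away until the controller kicks it out of the array.
smartctl -l scterc,70,70 /dev/sda
```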
"1E14 is an awful large number of bits, and no consumer operation gets even close to that per day (1E9 is more likely!)"
10^14 bits is about 11.4 TiB. If you are using the new 6TB drives in a 2+1 RAID5 configuration, you are statistically very likely to hit an unrecoverable sector during rebuild. Depending on how good your RAID implementation is, you might lose just that data block, or it could be far worse - software RAID is much saner than hardware RAID in this case; there is a decent chance the latter will just crash and burn and lose the whole array.
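A back-of-envelope check of that claim, assuming the quoted 1-in-10^14 BER and independent errors (a simple Poisson model):

```shell
# Probability of at least one unrecoverable read error (URE) while
# rebuilding a 2+1 RAID5 of 6 TB drives: both surviving disks must be
# read end to end, i.e. 2 * 6e12 bytes * 8 bits.
awk 'BEGIN {
    bits_read = 2 * 6e12 * 8          # bits read during the rebuild
    ber       = 1e-14                 # unrecoverable errors per bit read
    printf "P(URE during rebuild) = %.2f\n", 1 - exp(-bits_read * ber)
}'
```

This works out to roughly a 60% chance of hitting at least one unreadable sector before the rebuild completes.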
Nah, ZFS can get a _LOT_ more mileage out of current spinning rust in terms of data durability - to the point where n+2 and n+3 RAID are still quite feasible if you keep the number of disks per array within reasonable limits (e.g. 4+2 or 8+3). While there is no replacement for having good backups, reducing the frequency of having to restore a few TB of data over the internet connection helps.
IMO a much better solution would be to use a fitter for purpose replacement for traditional RAID, such as what ZFS brings to the table.
FIT, I need ya, buddy.
"BigCorp Australia is a separate legal entity, which pays fees for the use of the name and rights to sell BigCorp products. So tax the fees when the money is remitted out of the country."
Err... You want to tax _outgoings_ rather than profits?
"Say you then declare that the amount of tax owed is (wholesale price *15%) or whatever the tax rate is. This then instantly makes the BigCorp product 15% more expensive in your territory, as it is the customers who will pay."
This is called VAT, and it is already charged almost everywhere in the world.
The price per GB of SSD is already down to around 5:1 mark if you consider like for like (1TB 2.5" 7200rpm disk vs. 1TB 2.5" SSD).
Flash also tends to be much more reliable than spinning rust. Sure, there's the write endurance limit on flash, but this is a complete non-issue in just about every sane use-case:
Consider that most of the tested SSDs survived 1PB (yes, that's _peta_) of writes, with some surviving as much as 2PB. Writing that much data took between 1 and 2 years of continuous writes. Writing that much data to a mechanical disk in random order would take many, many times longer - sufficiently longer that you would similarly be looking at about a 100% failure rate over the same write volume. Then consider that the write:read ratio is typically less than 5%, and that a mechanical disk suffers wear regardless of whether an operation is a read or a write (bearings and actuators will only survive so many seeks and revolutions), and an SSD that only suffers wear on writes will on average vastly outlive a mechanical drive.
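To put those endurance numbers in perspective, a rough calculation - the 20GB/day write volume is my assumption, on the generous side for a desktop workload:

```shell
# How long would it take a desktop workload to exhaust a 1 PB
# write endurance rating?
awk 'BEGIN {
    endurance = 1e15                  # 1 PB of rated writes, in bytes
    per_day   = 20e9                  # assumed 20 GB/day of writes
    printf "Years to wear out: %.0f\n", endurance / (per_day * 365.25)
}'
```

At that rate the rated endurance would take well over a century to exhaust, which is why write endurance is a non-issue for sane use-cases.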
Spinning rust is increasingly struggling to maintain its relevance in most environments.
I just bought some 1TB SSDs for £225 each. That is a large multiple out from the price quoted for flash.
Does this one get less hot? Or can we still expect the very uncomfortable 47C+ on the aluminium surfaces as measured on the original under heavy load? It's the temperatures under load (e.g. L4D2) that made me give up the original one in favour of something with better engineered ergonomics.
Having had a cursory look at the CVEs, they were published and patched upstream years ago. It is somewhat surprising that nobody noticed these problems in the Seagate NAS-es long before now.
"This is making use of an x86 hardware feature."
It's not an x86 specific feature per se. The testing code uses an x86 assembly instruction that bypasses CPU caches for reads. It is quite likely that similar equivalents exist on many if not most other CPU architectures.
While this is an exploit, it shows that modern hardware is actually unstable out of the box even without overclocking or other tuning that reduces the margins for error. Anything that causes memory corruption on hardware level is, IMO, a hardware fault, and therefore grounds for returning the hardware to the retailer as unfit for purpose.
Given the descriptions of the methods, this is also mostly a RAM fabrication issue, rather than being largely related to the rest of the machine, as the leakage happens directly within the RAM chips. So using better RAM from a different manufacturer would almost certainly reduce the exposure to this bug, much more so than using the same RAM in a different laptop.
But in any case, ECC is the way forward - if only it was more commonly available in laptop and desktop grade chipsets.
"In fact, outside of Xeon CPU's there's almost nothing for the desktop. (There's a few for lappies and embedded but not for desktop)."
FYI, most AMD chipsets still support ECC, whether it is officially listed on the motherboard spec or not.
TRIM support never was that important in the first place with decently designed drives and firmware, and since most flash controller manufacturers started to implement transparent compression and deduplication in firmware, it is even less relevant. Filling the empty space on the drive with 0s periodically will do the exact same thing that TRIM does.
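A sketch of the zero-fill approach (the mount point is illustrative; on a controller without compression this transiently fills the drive, so leave yourself headroom and do it during quiet periods):

```shell
# Write zeroes into all free space, flush, then delete the file.
# On a controller that compresses/dedupes, all-zero blocks cost almost
# nothing to store, so the underlying flash backing the old data is
# released - approximating what TRIM would have told the drive.
dd if=/dev/zero of=/mnt/ssd/zerofill bs=1M || true
sync
rm /mnt/ssd/zerofill
```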
There is indeed thermal throttling on Nvidia chips that stops the GPU core from exceeding 95C. It will progressively slow down the clocks to whatever it takes to keep it under 95C.
This is not new. CPUs and GPUs from their respective duopolies have had such features for the past 10-15 years.
I used to like Asus kit for a long time - right up to the point where I found out the hard way that their warranty related customer service is by far the worst in the industry and in some cases outright in breach of consumer protection laws.
Lenovo permanently lost me as a customer about 2 minutes after I unboxed my Lenovo Y50-70 with the supposed 4K screen - when I discovered that it is a shitty pentile pseudo-4K screen that only has HALF of the number of subpixels that it should, making everything in "4K" look like it was printed on an '80s era dot matrix printer.
The problem is increasingly that all manufacturers (except maybe Apple) are rapidly racing to the bottom and finding a piece of kit that is genuinely good is becoming increasingly difficult.
Translation: "Our manufacturing process is crap, and following high failure rates a number of OEMs have asked us to prevent OC-ing because the warranty claims are killing us."
Frankly, I'm amazed it took this long. On my last laptop I went through 3 GPUs (GeForce GTX260M, followed by two underclocked (yes, I modified the BIOS to underclock and undervolt to keep temperatures reasonably sane) Quadro FX3700Ms).
The simple fact is that Nvidia don't actually make laptop GPUs - they rebadge desktop ones and ship them with a different power profile (lower clock speeds and voltages) programmed into the BIOS.
Either way, this will only be an issue for people who feel the need to be on the drivers' bleeding edge for some reason. The previous version of the driver already supports all the GPUs available today, so this will only really become an even remotely serious issue when the next generation of GPUs comes out (which isn't expected to happen any time soon).
"PLEASE! They're using consumer drives in enterprise gear, says firm"
Except all the other drives in the comparison were also desktop grade drives. It is a like for like comparison. Seagate has yet again been shown to be really crap on reliability.
Their 4TB drives look good - after 1 year in service. But it remains to be seen how they fare after 2-3 years in service, compared to other brands.
Death Star is a reference to the Deskstar drives - made by IBM, back when it happened.
Every manufacturer has had a bad model at some point. For IBM -> Hitachi -> HGST it was the 120GB Deskstar series. I had the next model after that, 125GB Deskstar IDE drives, and all 8 of the ones I had survived 10 years of 24/7 use without any failures.
But with Seagate, the failure rates aren't specific to just one model - we are talking about many models over many generations of product. With Seagate, a reliable model is an exception.
I have a Mk1 and I think the screen size is just fine. The main problems with it, IMO, are the terrible touch pad (casing flexes and causes it to click when you don't want it to) and the low res screen.
It is not entirely clear how this compares to the Mk2 Google Chromebook which has 8 CPU cores and also has a 1080p screen.
Prices drop as and when new generations of products come out. We have seen this happen with every manufacturing process change. The prices don't drop in a smooth line - it is a step function, and as process shrinkage becomes increasingly impractical, the manufacturing process improvements aren't happening as steadily as they used to.
Even so, the SSD prices HAVE been dropping quite obviously over the past few years. A 1TB SATA SSD can be had for around £290, which is approximately 10% less than it cost 6 months ago. Back then the Crucial M550 was the cheapest. That changed with the introduction of the Samsung 840 EVO. So manufacturers are competing and undercutting each other.
You should also consider that prices of specific models rarely change (other than due to currency exchange rates). It is when new models come out that the price per TB reduces. This still holds true.
I don't see any evidence in the prices that might indicate anything like conspiracy going on.
@fnj: I'm in the same boat. HGST and Toshiba drives are the only ones I consider. I am dreading the day when WD is allowed to swallow HGST.
HGST: Excellent reliability, honest SMART, no firmware based crippling
Toshiba: Very good reliability, honest SMART, no firmware based crippling
Samsung: Livable with reliability, lying SMART, no TLER
WD: Livable with reliability, lying SMART, TLER removed from non-NAS drives' firmware
Seagate: Atrocious reliability, honest-ish SMART (well, more honest than WD and Samsung), TLER removed from non-NAS drives' firmware; most drives do have the Write-Read-Verify feature, though.
And if anyone is in doubt about the reliability, read the Backblaze study on this subject.
It is plausible that something may sneak past your defences, but the good news is that 2K3 server patches do work on XP. Or at least they do on XP x64, I don't have 32-bit XP so haven't tried it.
Using 2K3 patches will only keep you going until July next year, but at least that's another 8 months you don't have to worry about.
I will be retiring my XP x64 Steam bootloader soon, since Steam and all the games I play (Left4Dead 2, Borderlands series, and Planetary Annihilation) all have native Linux ports.
So yes, people are abandoning XP, but I suspect a significant fraction are abandoning it for non-Windows OS-es. For gamers, Linux is now a viable option, and for non-gaming users I have observed a dramatic shift to MacBooks over the past 3 years. Whereas before there was a sprawl of cheap HP and Dell desktops running Windows in offices, it is increasingly common to see iMacs and MacBooks in their place.
Sadly, there is no Steam for ARM at the moment, but it is only a matter of time before that happens. Now that they have most of the important stuff ported to Linux it will be trivial to rebuild it for ARM.
"when using the low res crap some companies expect us contractors to use"
As a contractor you should be able (and in some cases expected) to use your own equipment.
ThinkPad T60 was available with a 2048x1536 screen - I have one.
I agree. I only just ordered a Lenovo Y50 because this is the first time since 2006 that laptops have been available with a resolution significantly higher than that of my ThinkPad T60 from back then, which has a 2048x1536 screen.
Then again, we have had the same problem with desktop monitors. I have a pair of IBM T221s on my desk - 3840x2400. Apple's new Mac and Dell's new 5K monitor due out next month are the first time the resolution bar has been pushed forward since 2001, when the T220 came out.
Lenovo Y50 really should have made the line-up there.
Quad Core i7
16GB of RAM
I was planning to get the Aorus X3, but when I got around to ordering last week I spotted the Y50 and for me the higher res screen of the Y50 outweighed the slightly faster GPU in the Aorus.
Not to mention that the Lenovo also happens to be significantly cheaper than all of the reviewed models at £1099, and on top of the list price being cheapest, Lenovo are doing 20% cash-back at the moment for the next week or so.
"spending a lot of time fixing an old OS takes resources away from developing the new."
There are two important points on this subject:
1) As the earlier post points out, 2K3 patches work just fine on XP, which obliterates your argument that extra effort is required to support XP in addition to 2K3.
2) Those new features happen to be features that nobody actually wants. We have had the bloated Vista, Windows 7 which was Vista "lite" but still introduced annoyances such as having to press Alt to bring up menus in the Explorer shell, and Windows 8, which has quite justifiably turned out to be a bigger commercial failure than Vista, if adoption rates are anything to go by.
If it wasn't for newer games requiring DirectX 10 or 11, I suspect the only penetration of versions of Windows more recent than XP would be on machines that shipped with it pre-installed to people who don't know and don't care about the OS as long as the basic functionality is there. Except that most of those people have moved on to using Macs.
The majority of my video collection is on DVDs encoded at 25fps, but the odd few are 24fps and some are 29.97fps, and I have never seen an issue with 24fps, 25fps, 29.97fps playback on any of my Chromecasts.
I completely agree it's a non-issue. People really need to stop whining about imaginary problems with a $30 device.
Perhaps, but those who are legally incapable for the sort of reasons that make them Facebook users in the first place are ideal advertising audiences. As they say, fools and their money are easily parted. It makes perfect logical sense.
"Austrian law says someone is legally incapable if he or she is underage or declared insane."
Seems legit. That covers the entirety of Facebook's membership.
Normally I would agree, but if you run big name CMS you are automatically exposed to all the exploits in it as and when they are discovered, and you will be probed for those along with every other site running that CMS.
If you have a site that is based on a home brewed CMS only used by you, it will most likely not bear the signatures of another commonly used CMS and the scanning bots will simply move on after a cursory glance. The only people who will bother to find obscure holes in your custom CMS are the people who are specifically after you, and if you have someone that determined to get you specifically, they will eventually succeed, but possibly still not as easily as by waiting with a finger on the trigger for another big name CMS exploit to be discovered.
The quote that comes to mind is:
Darth Vader: Perhaps you think you're being treated unfairly?
... that Google Glass hasn't gained huge popularity is the price tag. Yes, it's a cool gadget, but not cool enough to jump through beta program hoops and then spend $1000 on.