13 posts • joined 23 Sep 2009
Yup, kills SAS drives
Saw this myself last year: a customer ran some kind of fire alarm test and fried a huge number of SAS drives. From our systems around 50% of the SAS drives were completely toast, unrecoverable. The SATA drives were fine, and I understand it was a similar story with other vendors' systems too.
"Confusion among real people", well yes but if that's the only standard that matters we might as well give up now. I've seen real people fail to put floppy disks in drives, jam half a dozen CD-ROMs into a single drive, and even call their monitor a 'computer' and get confused when their work is still there after turning it off...
Totally ridiculous ruling, for once I'm actually rooting for Microsoft.
Isn't this just babyfood?
Would be interesting for a taste comparison to some of the more popular babymilk brands out there.
... or for the yet more cynical, buy some wholesale farming milk substitute that they feed to cattle, and see how similar this stuff is.
I doubt this guy has put huge amounts of work into testing and verifying the contents. It seems more likely he's just found a cunning way to make a quick buck selling milk substitutes to the gullible.
If my understanding of the introduction is correct, in a nutshell, it's adding extra parity protection. The way I *think* it works is this:
Imagine this is some data on your raid array, the vertical columns represent individual disks, the horizontal rows are stripes of data:
1 0 1 0 1
1 1 0 1 1
0 1 1 1 0
In RAID-5 or RAID-6, we protect against a disk failure by adding parity to the *horizontal* rows. This is good, it means we can protect against one or two total disk failures using these schemes, and for RAID-6, we can correct individual read errors even if we have one disk failed.
What the author is saying is that these schemes still can't cope with multiple read errors at the same addresses. So if we have a disk failure and two more read errors, we could lose data. Using our same block of data, this is what we've lost:
1 0 1 E 1
E E 0 E 1
0 1 1 E 0
Disk 4 has failed (the vertical line of errors), and we've also had read errors from two other disks. Due to some really bad luck, both of those errors are within the same horizontal stripe.
Now this is pretty rare, and we're talking lightning bolt strike while winning the lottery kind of rare here, but the paper seems to be saying that by using both vertical *and* horizontal parity, you can increase the level of protection.
From the above example, you can use vertical parity to correct the individual bit errors:
1 0 1 E 1
1 1 0 E 1
0 1 1 E 0
And now it's a straightforward job of using the 'normal' parity to complete the rebuild. Nice idea, not sure how much benefit it adds vs simply adding more traditional parity, but it's interesting at least.
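The two-step recovery described above can be sketched in a few lines of Python. This is purely my own toy illustration of the row-plus-column parity idea (bits only, XOR parity, the same 3-stripe x 5-disk layout as the example), not the actual scheme from the paper:

```python
from functools import reduce

# 3 stripes (rows) x 5 disks (columns), as in the example above.
data = [
    [1, 0, 1, 0, 1],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 1, 0],
]

def xor(bits):
    return reduce(lambda a, b: a ^ b, bits)

# Horizontal parity: one bit per stripe (the RAID-5 style parity disk).
row_parity = [xor(stripe) for stripe in data]
# Vertical parity: one bit per disk, XOR of its own column.
col_parity = [xor(col) for col in zip(*data)]

# The failure scenario from the post: disk 3 dead, plus read errors on
# disks 0 and 1 within the same stripe (None marks an unreadable bit).
damaged = [
    [1,    0,    1, None, 1],
    [None, None, 0, None, 1],
    [0,    1,    1, None, 0],
]

# Step 1: vertical parity repairs any column with exactly one missing bit.
# The fully failed disk has every bit missing, so it gets skipped here.
for disk in range(5):
    col = [damaged[s][disk] for s in range(3)]
    if col.count(None) == 1:
        s = col.index(None)
        damaged[s][disk] = xor([b for b in col if b is not None] + [col_parity[disk]])

# Step 2: horizontal parity rebuilds the failed disk, stripe by stripe.
for s in range(3):
    if None in damaged[s]:
        d = damaged[s].index(None)
        damaged[s][d] = xor([b for b in damaged[s] if b is not None] + [row_parity[s]])

assert damaged == data  # everything recovered
```

With horizontal parity alone, stripe 1 here would be unrecoverable (two missing bits in one stripe); the vertical pass reduces it to a single missing bit per stripe first, which is exactly the point of the scheme.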
At least google only throw out features, MS threw out my accounts!
I'd never go back to using any MS online account. We created a Live account some time back for work, as it was required to use their online tools to track licenses. Imagine our pleasure when we tried to log in the next year to renew the licenses, only to find they had removed the account due to inactivity.
Yes, we didn't use it, but they were enforcing this as the tool to use for license management, and that's not exactly something you need to do regularly.
There was no notification of their plans, and no way to recover the licensing details. We quickly scrapped their "recommended" approach and reverted to our existing manual system.
Deliberate breaking of DR-DOS a myth?
A myth? Groklaw says different. This is just one excerpt of many from its publishing of the court cases involved:
"FTC investigators also concluded that in order to sabotage DR DOS, Microsoft had carefully written and hidden a batch of code into tens of thousands of beta copies of Windows 3.1 that were sent to expert computer users in December 1991. When someone tried to run one of these Windows 3.1 beta copies on a PC using DR DOS (or any other non-MS-DOS operating system), the screen would display the following message: "Nonfatal error detected: error 4D53 (Please contact Windows 3.1 beta support.) Press C to continue."
To expert beta-testers using DR DOS with Windows, this message would convey that they could continue using the program, but it might cause problems. The effect would be to deter some from using DR DOS further; others would call Microsoft for an explanation of the supposed risks of using DR DOS."
Have they fixed IPMI yet?
On previous Supermicro chassis it was unworkable:
- The client application supports up to Java v1.6.19 ONLY; later versions of Java will cause it to crash if you attempt such craziness as mounting an ISO.
- Curiously for a remote access solution, rebooting the box disconnects you... so good luck getting to the BIOS.
- If the host OS decides to disable the NIC, that also takes your IPMI port down. Another useful and well designed feature.
- Early versions completely disabled the dedicated IPMI port if it didn't detect a network connection on startup. So after a power outage if your switch takes longer to boot than the 0.5s the IPMI NIC requires, you will have no remote access until you re-power the server. I believe this one may be fixed now but I've pretty much given up on Supermicro IPMI by this point so haven't tested this myself.
Killed Kenai? Thank god
That thing was bloody awful. Why on earth do all these tech companies think it's a good idea to recreate forum software from scratch every time?
Sun's main forums are pretty awful (but somewhat understandable since they're trying to maintain compatibility with the original mailing lists); Kenai was just a nightmare. I had to register on it for the now defunct Sun xVM Server project, and actively avoided using the site unless I had to.
There are dozens of really good, tried and tested forum programs out there. Creating a new one just diverts developer time from more useful features.
Google's fault, or Microsoft's?
While Google will obviously need to patch this bug, you have to wonder about the security of a browser where a bug in any add-in can cause "high risk" vulnerabilities in the browser (to use Microsoft's own description).
If you are going to allow plugins, they need to be isolated and sandboxed, to guard against both bugs like this and malicious plugins.
As far as I'm concerned, the louder Microsoft shout about problems with the Google plugin, the more they emphasize the problems with their browser.
Bye bye data...
Windows Home Server
now why the fuck
would I want that?
$1.25 *BILLION* for something they didn't do?
Pull the other one Intel, it's got bells on.
'most apps are web delivered'
WTF? We've got over 100 apps in use here, do you know how many of those are web delivered: 1.
Citrix have a huge market. I mean, even if we forget Office, Outlook, etc, you still have CAD software, image viewing, specialist apps, accounting software, none of which are web based. You seem to be confusing what's theoretically possible with what actually exists in the real world in the majority of companies.
PCI-e SSDs anyone?
It's not the 1 million IOPS I find interesting here, it's the fact that it mentions the drives are connected via a PCI-e expander.
That implies that Intel are developing a high end PCI-e based SSD, something to compete with Fusion-IO perhaps?