* Posts by myxiplx2

25 posts • joined 23 Sep 2009

UK.gov plans £2,500 fines for kids flying toy drones within 3 MILES of airports


Knee-jerk reaction

Typical knee-jerk reaction, making lawbreakers of kids and hobbyists while doing absolutely nothing to address the problem.

The issue isn't the law; what the drones over Heathrow did is already illegal. The issue is that neither law enforcement nor the airports have any effective way to enforce it.

If the police and airports even had drone operators on the payroll, they could at least follow these drones as they returned home, get video footage of the owners, and track them while ground police are directed to the location. And an industrial drone with a net has been shown to be an effective way to catch and disable a drone.

Focus needs to be on equipping the airports and law enforcement with the appropriate tools, not on knee-jerk and ineffective policy changes.

A few reasons why cops haven't immediately shot down London Gatwick airport drone menace


Airports need drones

To be honest, the airport probably needs its own drone operators to counter this. Another drone to follow it back to base and capture video of anybody there would be a good start.

And the most effective way to take it down is another drone trailing a lightweight net underneath it. Tangle the props and it'll be down in seconds.

Fake prudes: Catholic uni AI bot taught to daub bikinis on naked chicks



Looking at the comments I see a lot of "wasted education", and "why do this", and "boo, censorship" type remarks.

Which to me rather misses the utterly ingenious nature of this project.

A bunch of lads in their late teens and early twenties at a *Catholic* uni have found not only an officially approved reason for downloading over 2,000 nudes, but they actually got it to count towards their grades too!

Genius, bloody genius. Those lads will go far!

Noise from blast of gas destroys Digiplex data depot disk drives


Yup, seen this before

About 7 years ago I was working as a support engineer for a storage company when we had a call from a client who'd run into this exact issue.

About half of the 15k drives in their datacentre had utterly failed after a fire suppression test, affecting a wide variety of manufacturers' systems. There was too much data loss for any attempt at recovery, and it was the IBM engineering team who first identified the root cause.

7k and 10k rpm drives were unaffected; it seemed the particular frequency of the nozzles at this site managed to hit the resonant frequency of the 15k drives, causing the heads to physically impact the platters and destroying the drives and data beyond any reasonable hope of recovery.

Cue one DR program invitation, and a very sizeable insurance claim!

Real talk: Why are you hanging on to that non-performant disk?


So many things wrong with this. Sure, we'd all love flash storage everywhere, but when you have a limited budget or huge capacity requirements, it just isn't necessary.

I've worked on storage refresh projects for schools where the peak requirement is 500 IOPS. They don't need to spend £50k+ on flash storage that delivers 100,000 IOPS. Sure, disk is slower, but for them it's still fit for purpose.

Other clients have needed multiple petabytes of storage for archival projects. This data will be read once in a blue moon, so the performance requirements are minimal, and the software managing the archive doesn't care if it has to wait for the data. Heck, most of the time it's pulling data from disks that are spun down (30s latency) or tape libraries (tens of minutes of latency). An all-flash solution for these customers would be eye-wateringly expensive: quite literally tens of millions more to purchase, and with far higher maintenance and running costs too.

The comment about deduplication was also terrible. Whichever VMware bod suggested running both was basically clueless. The benefit of running two deduplication engines in series like that is generally negligible, but the performance overhead, memory hit, and latency penalty will be very apparent.

If you have centralised storage with dedupe and compression, use that: it will have controllers designed for the purpose. That also lets you turn off these features in your hypervisors, improving compute efficiency and allowing you to run more VMs per host, with better performance, better density, lower license costs and lower running costs.

Running dedupe twice is utterly ridiculous.

Spun-out Nexsan now prowling the market for growth and acquisitions


Good to see Nexsan still going

I may be a little biased, but to this day their Boy/Beast and E-Series lines are some of the most bomb-proof storage systems I've worked on. Rock solid, and able to recover from outages that would floor many other products.

There's some sound engineering behind their systems, and it's good to see them still soldiering on!

Google makes it to third base with Home digital assistant


Same wake word for all devices, really?

So Google are using the phrase "ok Google" for this too. Now when I try to use Home, it's going to wake up my phone and tablet as well?

Somehow I don't think they've properly thought this through.

'Jet blast' noise KOs ING bank's spinning rust servers


Known problem, but not well documented

I've had a customer affected by this in the past. If I recall correctly, it's down to the frequencies emitted by one specific design of nozzle, and there are advisories out there for the fire suppression systems advising against its use in data centres.

The outage we saw affected 15k rpm drives, across several different SAN equipment manufacturers. 10k and 7k drives escaped largely unscathed.

However, the damage was severe enough to take around 50% of those 15k rpm drives completely offline, to the point that they wouldn't even come back after a power cycle.

'Limitless enterprise storage'. Really? Digging deeper into Symbolic IO


Full system de-dupe?

There's a lot of bullshit in the patent, but isn't that always the case? Waffle about a whole bunch of magical things that it might be able to do?

It reads to me as though it's de-dupe, but applied all the way through the system, from CPU cache through RAM and NVRAM, all the way down. And it seems they're implying that this can give enormous improvements in data storage and throughput, but with the caveat that this will depend on the particular data in question (which is par for the course with de-dupe).

In-memory de-dupe would of course be interesting to suppliers of expensive NVRAM hardware, so that could explain the links there.

Disaggregated hyperconvergence thinks storage outside the box


Gridstore already does this

Gridstore can already do this. The overhead is pretty low in the first place (well under 3% in an intensive test), so it's not often needed, but you can also install the client on external systems and allow them to access the storage of the hyper-converged system.

It's a standard feature of the architecture, and you still get all the benefits you would with a normal hyper-converged system, including simplified management, scalability, and end-to-end QoS.

Microsoft's Lync becomes 'Skype for Business'



Do they seriously expect businesses to be happy with people's grandmas calling in via Skype during corporate meetings?

They're going to need to be very careful how they handle this if they want to avoid pissing off businesses and confusing users, with a very clear distinction between Skype for Business, and Skype.

But given Microsoft's track record, this will be driven by marketing, and they'll attempt to ignore these problems and wind up making a total hash of both products in the process.

Win a year’s supply of chocolate (no tech knowledge required)


Easy IT angle: previous research has shown that scientists eat more chocolate than the general population. The obvious conclusion is that clever folks eat more chocolate, and anybody who's ever worked a helpdesk knows that IT folks are brighter than the general population.

Therefore IT folks love chocolate. Simples!

Thundering gas destroys disks during data centre incident


Yup, kills SAS drives

Saw this myself last year: we had a customer run a fire alarm test of some kind, and it fried a huge number of SAS drives. On our systems around 50% of the SAS drives were completely toast, unrecoverable. The SATA drives were fine, and I understand it was a similar story with other vendors' systems too.

Judge nixes Microsoft SkyDrive name in BSkyB court ruling



"Confusion among real people", well yes but if that's the only standard that matters we might as well give up now. I've seen real people fail to put floppy disks in drives, jam half a dozen CD- ROMs into a single drive, and even call their monitor a 'computer' and get confused when their work is still there after turning it off...

Totally ridiculous ruling, for once I'm actually rooting for Microsoft.

Reg hack prepares to live off wondergloop Soylent


Isn't this just baby food?

It would be interesting to do a taste comparison against some of the more popular baby-milk brands out there.

... or for the yet more cynical, buy some wholesale farming milk substitute that they feed to cattle, and see how similar this stuff is.

I doubt this guy has put huge amounts of work into testing and verifying the contents. It seems more likely he's just found a cunning way to make a quick buck selling milk substitutes to the gullible.

Reg boffins: Help us answer this Big Blue RAID data recovery poser


Simple explanation

If my understanding of the introduction is correct, then in a nutshell it's adding extra parity protection. The way I *think* it works is this:

Imagine this is some data on your raid array, the vertical columns represent individual disks, the horizontal rows are stripes of data:

1 0 1 0 1

1 1 0 1 1

0 1 1 1 0

In RAID-5 or RAID-6, we protect against a disk failure by adding parity to the *horizontal* rows. This is good, it means we can protect against one or two total disk failures using these schemes, and for RAID-6, we can correct individual read errors even if we have one disk failed.

What the author is saying is that these schemes still can't cope with multiple read errors at the same addresses. So if we have a disk failure and two more read errors, we could lose data. Using our same block of data, this is what we've lost:

1 0 1 E 1

E E 0 E 1

0 1 1 E 0

Disk 4 has failed (the vertical line of errors), and we've also had read errors from two other disks. Due to some really bad luck, both of those errors are within the same horizontal stripe.

Now this is pretty rare, and we're talking lightning-strike-while-winning-the-lottery kind of rare here, but the paper seems to be saying that by using both vertical *and* horizontal parity, you can increase the level of protection.

From the above example, you can use vertical parity to correct the individual bit errors:

1 0 1 E 1

1 1 0 E 1

0 1 1 E 0

And now it's a straightforward job of using the 'normal' parity to complete the rebuild. Nice idea; not sure how much benefit it adds versus simply adding more traditional parity, but it's interesting at least.
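The two-step recovery above can be sketched with simple XOR parity, one parity bit per row (stripe) and per column (disk). This is a toy model of the idea, not the scheme from the paper, which will use a proper erasure code; the point is just the order of operations: vertical parity first fixes columns with a single scattered error, then horizontal parity rebuilds the failed disk as usual.

```python
# Toy 2D XOR parity sketch (illustrative, not the paper's actual code).
def xor(bits):
    out = 0
    for b in bits:
        out ^= b
    return out

# The example data: 3 stripes (rows) across 5 disks (columns).
data = [[1, 0, 1, 0, 1],
        [1, 1, 0, 1, 1],
        [0, 1, 1, 1, 0]]

row_parity = [xor(row) for row in data]        # horizontal (per-stripe) parity
col_parity = [xor(col) for col in zip(*data)]  # vertical (per-disk) parity

# Simulate the failure scenario: disk 4 (index 3) dies entirely, plus two
# extra read errors in the middle stripe -- the "really bad luck" case.
E = None  # marks an erased/unreadable bit
damaged = [row[:] for row in data]
for r in range(3):
    damaged[r][3] = E
damaged[1][0] = E
damaged[1][1] = E

# Step 1: vertical parity corrects any column with exactly one error.
for c in range(5):
    col = [damaged[r][c] for r in range(3)]
    if col.count(E) == 1:
        r = col.index(E)
        damaged[r][c] = xor(b for b in col if b is not E) ^ col_parity[c]

# Step 2: horizontal parity then rebuilds the failed disk, one stripe at
# a time, exactly as a normal RAID rebuild would.
for r in range(3):
    row = damaged[r]
    if row.count(E) == 1:
        c = row.index(E)
        damaged[r][c] = xor(b for b in row if b is not E) ^ row_parity[r]

assert damaged == data  # full recovery despite a dead disk + 2 read errors
```

Note that with horizontal parity alone, stripe 2 (two scattered errors plus the dead disk) would have been unrecoverable, which is exactly the gap the vertical parity closes.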

Microsoft 'surprised' by Google Gmail 'winter cleaning'


At least google only throw out features, MS threw out my accounts!

I'd never go back to using any MS online account. We created a Live account some time back for work, as it was required to use their online tools to track licenses. Imagine our pleasure when we tried to log in the next year to renew the licenses and found they had removed the account due to inactivity.

Yes, we didn't use it, but they were enforcing this as the tool to use for license management, and that's not exactly something you need to do regularly.

There was no notification of their plans, and no way to recover the licensing details. We quickly scrapped their "recommended" approach and reverted to our existing manual system.

Where were the bullet holes on OS/2's corpse? Its head ... or foot?


Deliberate breaking of DR-DOS a myth?

A myth? Groklaw says different. This is just one excerpt of many from its publication of the court cases involved:

Source: http://www.groklaw.net/articlebasic.php?story=20120711170909394

"FTC investigators also concluded that in order to sabotage DR DOS, Microsoft had carefully written and hidden a batch of code into tens of thousands of beta copies of Windows 3.1 that were sent to expert computer users in December 1991. When someone tried to run one of these Windows 3.1 beta copies on a PC using DR DOS (or any other non-MS-DOS operating system), the screen would display the following message: "Nonfatal error detected: error 4D53 (Please contact Windows 3.1 beta support.) Press C to continue."

To expert beta-testers using DR DOS with Windows, this message would convey that they could continue using the program, but it might cause problems. The effect would be to deter some from using DR DOS further; others would call Microsoft for an explanation of the supposed risks of using DR DOS."

Keep your Playboy mansion, Supermicro is my nerd vice palace


Have they fixed IPMI yet?

On previous Supermicro chassis it was unworkable:

- The client application supports Java up to v1.6.19 ONLY; later versions of Java will cause it to crash if you attempt such craziness as mounting an ISO.

- Curiously for a remote access solution, rebooting the box disconnects you... so good luck getting to the BIOS.

- If the host OS decides to disable the NIC, that also takes your IPMI port down. Another useful and well designed feature.

- Early versions completely disabled the dedicated IPMI port if they didn't detect a network connection on startup. So after a power outage, if your switch takes longer to boot than the 0.5s the IPMI NIC allows, you have no remote access until you re-power the server. I believe this one may be fixed now, but I'd pretty much given up on Supermicro IPMI by that point so haven't tested it myself.

Oracle hands out love and handcuffs to Sunware


Killed Kenai? Thank god

That thing was bloody awful. Why on earth do all these tech companies think it's a good idea to recreate forum software from scratch every time?

Sun's main forums are pretty awful (but somewhat understandable, since they're trying to maintain compatibility with the original mailing lists); Kenai was just a nightmare. I had to register on it for the now-defunct Sun xVM Server project, and actively avoided using the site unless I had to.

There are dozens of really good, tried and tested forum programs out there. Creating a new one just diverts developer time from more useful features.

MS discovers flaw in Google plug-in for IE


Google's fault, or Microsoft's?

While Google will obviously need to patch this bug, you have to wonder about the security of a browser where a bug in any add-in can cause "high risk" vulnerabilities in the browser (to use Microsoft's own description).

If you are going to allow plugins, they need to be isolated and sandboxed, to guard against both bugs like this and malicious plugins.

As far as I'm concerned, the louder Microsoft shout about problems with the Google plugin, the more they emphasize the problems with their browser.

Write haiku, win home server


Bye bye data...

Windows Home Server

now why the fuck

would I want that?

Intel takes out $1.25bn insurance policy


$1.25 *BILLION* for something they didn't do?

Pull the other one Intel, it's got bells on.

Citrix delivers Swiss Army Knife desktop virtualization



'most apps are web delivered'

WTF? We've got over 100 apps in use here. Do you know how many of those are web delivered? One.

Citrix have a huge market. I mean, even if we forget Office, Outlook, etc., you still have CAD software, image viewing, specialist apps and accounting software, none of which are web based. You seem to be confusing what's theoretically possible with what actually exists in the real world at the majority of companies.

Intel squeezes one million IOPS from desktop


PCI-e SSDs, anyone?

It's not the 1 million IOPS I find interesting here; it's the fact that it mentions the drives are connected via a PCI-e expander.

That implies that Intel are developing a high end PCI-e based SSD, something to compete with Fusion-IO perhaps?

Biting the hand that feeds IT © 1998–2019