Diamond Point International has sold two Violin Memory arrays, totalling 80TB of capacity, to a mystery European rail service provider. This customer – no one will say who – is moving from Dell EqualLogic iSCSI and HDS Fibre Channel disk arrays to all-flash arrays. A person familiar with the deal said: "[I] think it is the first …
So that's where our record-breaking fare rises are going
Awesome though it might be to have train departure indicators that work reliably, or a timetable and ticketing service that could advise you of the best available deal without you needing to search all the permutations, the money would be better spent (in my humble ...) on my beer and fags.
Nah, it'll be Deutsche Bahn I expect. Our network is still using punch tape, surely?
Clay tablets, surely?
Naa.... I remember working on BR's TOPS system. That was 96-column punch cards. Tape went out ages ago....
no title required
Ceci n'est pas un title ...
Chances are the 'mystery customer' is DB Schenker/EuroCargoRail/EWS/WhateverThey'reCallingThemselvesThisWeek - I imagine that tracking freight consists and the like across pretty much all of Western Europe is a bit storage intensive.
Mine's the anorak, ta.
what an utter waste of money.
Still we make apples play like pears ...
Why do we insist on making perfectly efficient NAND memory pretend to have cylinders, sectors and speak SCSI?
Not convinced a train company is the best early adopter for this technology.
SCSI does not speak cylinders & tracks
SCSI and its descendants have always been linear, sector-based. It's that mess of IDE/ATA/PC BIOS addressing you're thinking of.
definitely not Belgian railways either...
... they're apparently still figuring out how to make their trains run on time-ish, or more likely how to use a time-telling piece accurately
80TB - Is that all?
If 80TB is all the server storage that a company has, then this is not one of the Titans of European transport, especially when that must be covering disaster recovery, volume testing, data warehousing and the like.
you've never talked to the guys over at Violin, have you?
An 80T Violin system is pretty massively expensive. Their stuff is in the category of "we don't list prices, because if you have to ask, you can't afford it." Generally you don't put all your data on it, just your active data set (what you need to be fast).
You DEFINITELY wouldn't put silliness like disaster recovery, and volume testing on it. (that would be kept on a more-conventional array)
Read the article...
The article specifically states that the DR environment is on the Violin hardware. Indeed, if you need SSD performance in production, then you'll need it on the DR site too, as you will for any full-scale volume and performance testing. The article also states that the company is now 100% SSD, and 40TB to cover all production uses (i.e. just the prod site) is modest.
power and space-related cost-savings
"...in a lot less physical space, meaning power and space-related cost-savings."
Yeah, so if it really does last for 1000 years, then the cost savings might even out :)
Doesn't flash memory still have a write life that is orders of magnitude less than traditional hard drives?
Not to mention the other advantage of Hard Drive storage - if it breaks, swap it out. Don't need to replace the whole 80TB in one go!
This is the same problem as with these people who think boxes that perform many functions are a great idea... a TV with Freeview built in and a DVD player is a very BAD idea - if one unit breaks, when you send it for repair you have lost your DVD, Freeview and TV all in one go. If they were separate, then when the DVD player goes at least you still have a TV and Freeview while it is getting repaired.
A little research goes a long way...
Are you going for some ill-informed comment of the year award? Apart from the little issue of this being two 40TB arrays, not a single 80TB one, what on earth makes you think that a failure requires the replacement of the entire flash array? A tiny little bit of research would show you that these arrays, like those for HDDs, are redundant and support hot-swap, so that the failure of any one storage module will require just that one to be replaced.
As for write endurance, it depends on what your write duty cycle will be. However, HDDs don't last for ever either. Indeed they suffer wear-and-tear just spinning and seeking, whether writing or reading. However, that's what companies take out maintenance contracts for - replacing failed components as required. Flash-based storage is no different.
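To put rough numbers on the duty-cycle point - a back-of-envelope sketch where every figure (endurance, write amplification, daily load) is an illustrative assumption of mine, not anything from the article or Violin's spec sheets:

```python
# Back-of-envelope flash lifetime from write duty cycle.
# Every figure here is an illustrative assumption, not a vendor spec.

ARRAY_TB = 40             # usable capacity of one array
PE_CYCLES = 10_000        # assumed programme/erase endurance per cell
WRITE_AMP = 3             # assumed write amplification inside the array
HOST_WRITES_TB_DAY = 5    # assumed sustained host write load

total_host_writes_tb = ARRAY_TB * PE_CYCLES / WRITE_AMP
lifetime_years = total_host_writes_tb / HOST_WRITES_TB_DAY / 365
print(f"~{lifetime_years:.0f} years before wear-out")
```

Even with these fairly pessimistic assumptions the array outlives any maintenance contract; halve the endurance or triple the load and it still lasts decades.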
Repair or bin ?
Why pay for 3 boxes, each with their own power supplies, leads etc, when one combination can be bought for far less than the separate items.
Do you know what it costs to repair a small electrical item? £50 minimum at PC World, just to look at it. If the repair requires a part that's over 3 years old - forget it.
Moving from Dell/EQ and HDS to this SSD array? That makes no sense.
You don't just suddenly decide that your departmental grade iSCSI cheap-o box isn't cutting it and buy a decent chunk of SSD. Equally you don't take a look at your rock-solid HDS frame and chuck it in favour of a relative newcomer.
What is the NEW use-case that has presented itself?
Write Life - the Elephant in the Room... Questions left unanswered...
The write cycles of flash are indeed limited.
The question is: how easy is it to identify the chip & card, yank out a live board, replace the chip, and plug the board back in - while the system is running...
If the write cycles are about the same across all the flash, I would suspect it would run GREAT for a period of time, and then start experiencing simultaneous failures across the chassis.
Another question might be: how many boards can one pull simultaneously (live?) to replace bad chips, when they all start to fail at the same time...
Does Violin have a solution for these basic sixth-grade grammar-school questions, or is this just a government purchase decision where the answer is "no"?
Flash is awesome technology, when used for medium-term storage of high-cycle write loads. I wish the author of this article had addressed these basic concerns - this read more like a press release from a vendor than a piece of tech news with a discerning writer.
If you had experience with enterprise hardware
You'd know that they have little LEDs on them to identify which one has failed.
Wear levelling algorithms are a teensy bit more clever than to make everything approach failure equally.
Aaaand, if the entire thing does fail, lo and behold! There's another one in a different location!
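On the wear-levelling point - the core idea really is that small. A toy sketch (nothing like any vendor's actual algorithm, which also juggles static data, bad blocks and garbage collection): new writes simply go to the least-worn erase block, so no block races ahead to failure.

```python
# Toy wear-levelling: always write to the least-erased free block,
# so erase counts stay roughly even across the device.
erase_counts = [0] * 8   # eight erase blocks, all fresh

def pick_block():
    # choose the block with the lowest erase count
    return min(range(len(erase_counts)), key=lambda i: erase_counts[i])

for _ in range(80):      # 80 writes spread across 8 blocks
    blk = pick_block()
    erase_counts[blk] += 1

print(erase_counts)      # every block erased 10 times - even wear
```

Real controllers track per-block erase counts in exactly this spirit, just with millions of blocks and a lot more bookkeeping.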
Flash write life
The flash trolls are getting predictable, and boring.
We are not talking about a single flash chip in your embedded system; these arrays have thousands of dies and millions of blocks, and are not bothered in the slightest by a few of them failing. No one rewrites a 40TB array once an hour, which is the sort of load you would need to be talking about to even approach actually wearing out a flash array.
If you actually wanted to know the answers to your questions you would be out reading interesting conference papers instead of posting here. So why don't you take your FUD and move along.
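For what it's worth, the arithmetic behind that "once an hour" figure, assuming a 10,000 P/E-cycle endurance (an illustrative number of mine - enterprise parts and array-level overheads vary):

```python
# What it would actually take to wear out a 40 TB flash array.
# The endurance figure is an illustrative assumption, not Violin's spec.

PE_CYCLES = 10_000          # assumed programme/erase cycles per cell
FULL_REWRITES_PER_HOUR = 1  # hypothetical: rewrite the whole array hourly

hours = PE_CYCLES / FULL_REWRITES_PER_HOUR
print(f"{hours / 24 / 365:.1f} years of 40 TB/hour writes to wear it out")
```

In other words, even a workload that rewrites the entire array every single hour takes over a year to chew through the cells' endurance; real workloads are orders of magnitude lighter.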
So you only wrote your backup once?
Seems to me that box in the other room is going to get about 100% of the writes of the box you think it's supposed to back up.
Or am I incorrect?
letters and/or digits
The array itself is plenty redundant/unlikely to corrupt. Buying two arrays is just for show - "What if our Northbrook station blows up", etc. Or power outages.
"better than a disk-based SAN in terms of reliability" - but is it better when you DO have to replace a "disc" - what are the chances of losing two at once, how long does it take to get hold of a replacement unit, and/or how many units does it take to break any RAID-style protection of the entire device?
Also, how are they synchronising 40TB of data to a remote data-store?
Anyone know what the MTTF distribution looks like across SSDs? If it's fairly tight, then once they start to fail you're going to have a pretty busy IT team as they rip and replace a cascade of dying boards. As well as a significantly less busy back office waiting for the volumes to rebuild.
I'd say the main issue is firmware problems. Even rock-steady Intels have firmware issues. Get them close to capacity and some bog down faster than modern SUVs on a field.
If you needed to know, you'd know...
"Also, how are they synchronising 40TB of data to a remote data-store?"
As the article says, they are using FalconStor to front-end the arrays. FalconStor have a range of SAN virtualisation and VTL products that hook up to a wide variety of storage technologies; once set up they can be managed through a single console/GUI a lot more easily than knowing several complex/obscure command sets for different arrays.
The FS products can take care of things like snapshotting/rollbacks and replication to remote/different arrays over synchronous (i.e. dark fibre) and asynchronous (i.e. WAN) links. Frequency of replication can be tuned to the needs of the application and the available bandwidth on a LUN-by-LUN basis.
FS products are also white-labelled by many storage vendors (including EMC if I recall rightly), who use them to perform minimal-downtime migrations when replacing SANs for upgrades or vendor swap-outs (like this one).
(Satisfied customer speaks).
Have any of you nay-sayers ever used a RAID array?
RAID can be set up with multiple mirrors as well as redundancy, so if you have a failed array or even two (or how much cash do you want to spend?) your mirrors continue service.
I need help
I had this image of hobos and bluesmen from the 20s and 30s riding the blinds and using the train's wifi ...
Is it beer:30 yet?
80 TB Flash
Well you know the Swiss would never buy something that has holes in it ...
Really sounds like the brits to me.
"interesting conference papers" - when was that??????????
I heard that Network Rail has a Fondle Slab solution!
They are buying up every TouchPad they can find and are going to create a Beowulf cluster of them.
Hmm, railway systems and new technologies
Love to see the safety case on this one. I suspect no safety case advisor would accept such new technology in a safety-critical system. So is this ticketing and back office only, which would require no intrinsic safety systems?