Social networking site MySpace has replaced traditional server/direct-attached disk combos with flash-memory-cached servers to save space, energy, cooling and cost. MySpace originally used multiple racks of 2U rackmount servers, with ten to twelve directly attached 15,000rpm hard drives. These have been ripped out and replaced with fewer …
Now, where they're getting the revenue to pay for this is a completely different matter. Welcome to Web 2.0 - the return of the jolly dot.bomb
"they'll no longer suffer HDD failures"
No, they'll suffer flash drive failures instead.
Do they really think the new drives are failure-proof? If so, I'm looking forward to seeing the fallout when they do fail!!! Popcorn, anyone?
One thing that irks me is when people say there are no moving parts to fail... but electrons do actually move (if only very slightly), and besides....
Just remember that you're standing on a planet that's evolving,
And revolving at 900 miles an hour,
That's orbiting at 19 miles a second, so it's reckoned
A sun that is the source of all our power
It's not like they're relevant anymore - they're soon to go the way of Tripod, Angelfire, GeoCities et al (with Facebook & Twitter to follow soon after)
No more HDD failures....
But some SSD failures instead. I think the current quoted MTBF is 2 million hours - that works out to roughly a 0.4% probability of failure per annum. And the high MTBF figures quoted by spinning-disk manufacturers are not supported by the failure rates actually observed by mass users in the data centre (e.g. Google).
Nice move though, interesting to see a large user go in that direction. But beware of anyone suggesting "these never fail".
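The arithmetic behind that annual figure is worth making explicit. A minimal sketch, assuming the usual constant-hazard (exponential) failure model that MTBF figures imply:

```python
import math

def annual_failure_rate(mtbf_hours: float) -> float:
    """Annualised failure probability under an exponential
    (constant-hazard) model: AFR = 1 - exp(-hours_per_year / MTBF)."""
    hours_per_year = 24 * 365
    return 1 - math.exp(-hours_per_year / mtbf_hours)

# A 2-million-hour MTBF gives roughly a 0.44% chance of failure per year;
# a 1-million-hour MTBF gives roughly 0.87%.
print(f"{annual_failure_rate(2_000_000):.2%}")
print(f"{annual_failure_rate(1_000_000):.2%}")
```

Note the model assumes the drive stays inside its rated service life; real drives wear out, so the observed rate climbs with age, which is one reason field studies like Google's see higher numbers than the datasheet implies.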
First of many...
This is going to be the first of many such moves. At the moment many large-scale applications have to introduce complex data partitioning and replication schemes to allow for scaling. This brings lots of application and data management complexity, duplication of instances, synchronisation and time management issues plus, of course, related environmental costs.
The improvement in I/O performance of these devices (approaching two orders of magnitude in latency reduction and IOPS access density) is going to pay back many times over for volatile and transactional data - not only in environmental requirements, but in simplifying applications and reducing support costs. And that's before the improvements in response times that should also follow from the much lower latency.
Welcome to the world of SSDs - that Crucial 256GB flash drive in my PC might be expensive, but it has revolutionised its usability.
Kinda depends how you use the drives.
If you stick them in a huge array in a SAN, we found they tend to get eaten and fail very quickly; in testing we literally couldn't swap the drives fast enough.
In a traditional server with 1 or 2 drives in a rack they should last a long time, and give good performance - in theory. Also will reduce cooling costs.
How do they measure an MTBF longer than the product has been in existence?
While it might be a statement of a company's confidence - which is more accurately measured by the warranty offered anyway - I can't see that it's really much more than a guess.
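For what it's worth, the standard answer is that manufacturers don't run one drive for 2 million hours; they run a large fleet for a short time and divide accumulated device-hours by observed failures. A minimal sketch (the fleet size, test duration and failure count below are hypothetical numbers, not anyone's published test data):

```python
def fleet_mtbf(units: int, test_hours: float, failures: int) -> float:
    """Point estimate of MTBF from a fleet test:
    total accumulated device-hours divided by observed failures."""
    if failures == 0:
        raise ValueError("no failures observed; quote a confidence bound instead")
    return units * test_hours / failures

# Hypothetical: 1,000 drives run for 2,000 hours with one failure
# yields a 2-million-hour MTBF, though no drive ran anywhere near that long.
print(fleet_mtbf(1_000, 2_000, 1))
```

Which is also why the figure says nothing about wear-out late in a drive's life - the test only samples the early, flat part of the failure curve.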