It's a boxless, super-flash, hyper-converged world: But what'll we do for BULK STORAGE?

Bulk secondary and tertiary data is going to the cloud. Not right now, and not for everybody, but the local ownership, operation and management of secondary data storage is costly and complicated and – slowly but surely – cloud storage services are getting cheap enough and reliable enough to take over. CIOs will likely …

  1. Nate Amsden Silver badge

    all depends on access requirements

    Sending data to an external cloud provider may look cheap, but if latency to the remote provider means you can only push and pull data at a few megabytes per second, a lot of folks will likely keep stuff on site for performance, and perhaps encrypt-and-ship only the data with really low access rates to the external provider.

    An example I give people: the main data center for my org is in Atlanta, with a 1 gigabit uplink to a tier 1 ISP. Transfer rates to S3's east region for a single-stream connection top out at about 5 megabytes/second (the current test is capped at 3MB/sec). It seems S3's providers are filtering way upstream, as my traceroute to them dies after a few hops and less than 1 millisecond. I want to say before this filtering it was about 15-20ms away.
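    The few-megabytes-per-second figure is consistent with the TCP bandwidth-delay product: a single stream can't move more than one window of data per round trip. A minimal sketch, where the ~15ms RTT comes from the comment above and the 64 KiB window is an assumption (a common effective size when window scaling isn't helping):

    ```python
    # Single-stream TCP throughput is roughly capped by the bandwidth-delay
    # product: throughput <= window_size / round_trip_time.

    def max_single_stream_mbytes_per_sec(window_bytes: int, rtt_seconds: float) -> float:
        """Upper bound on one TCP stream's throughput, in megabytes per second."""
        return window_bytes / rtt_seconds / 1e6

    # Assumed 64 KiB window over the quoted ~15 ms RTT:
    print(max_single_stream_mbytes_per_sec(64 * 1024, 0.015))  # ~4.4 MB/s
    ```

    Note the 1 Gbit/s uplink (~125 MB/s) never enters the calculation: at that RTT, latency, not line rate, is the bottleneck for one stream, which is why parallel streams or on-site storage win for bulk transfers.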

  2. Scoular

    Oh so secure

    And give all your data to a government agency which has better encryption skills than your company and a way bigger budget. It also seems to have a really pressing need to know everything about everyone in the world. Not everyone trusts that agency.

  3. Jack of Shadows Silver badge


    Same old, same old. The tiers will be physically farther apart yet timing-wise just about the same. Otherwise you end up having to adjust it as Nate states. Oh, and TCO will invariably go up. It never seems to drop no matter how many whitepapers aver otherwise.

  4. Chris Mellor 1

    Bay configs wrong maybe

    Sent to me and posted here. I'm trying to find out from StorSimple what the actual HW configs are:

    I think those disk capacities are wrong.

    The small unit has 10 bays and therefore 5 disk mirrors (assuming your analysis holds). 15TB would mean 3TB disks, then. If it's 300GB drives, that's only 1.5TB total. Either way, a correction is needed, I suspect.

    The larger unit potentially holds 20 x 4TB drives (10 mirrors @ 40TB) plus SSD.
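    The capacity figures above follow from simple two-way-mirror arithmetic (half the bays hold copies); a quick sketch to check them:

    ```python
    # Check the mirror arithmetic from the comment above: usable capacity of a
    # shelf of two-way mirrored drives is (bays / 2) * drive size.

    def usable_tb(bays: int, drive_tb: float) -> float:
        """Usable capacity in TB for two-way mirrored drives."""
        mirrors = bays // 2
        return mirrors * drive_tb

    print(usable_tb(10, 3.0))  # small unit, 3TB drives: 5 mirrors -> 15.0 TB
    print(usable_tb(10, 0.3))  # small unit, 300GB drives: -> 1.5 TB
    print(usable_tb(20, 4.0))  # larger unit, 4TB drives: 10 mirrors -> 40.0 TB
    ```

    So the quoted 15TB is only consistent with 3TB drives, which is the correction the comment is suggesting.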


  5. M Debelic

    Spot on analysis.

    Just think how many enterprise (mainly NAS) arrays are still being used in large corporations just for archiving documents.

  6. Andy Howarth

    Can't see this happening in proper corporate environments any time soon for two reasons:

    1. How many large organisations actually run their own infrastructure rather than outsourcing to HP, CSC, TSys etc? Most of these have no incentive to move data to the cloud

    2. How many large organisations know enough about their data to feel safe moving it to the cloud? With so little known about what is actually sat on all the disk arrays, the safest option is to leave it where it is – particularly in industries where there are regulatory or information protection considerations.

    1. J. Cook Silver badge

      I work in one of those industries with a large amount of regulatory oversight/overhead.

      We have to run our own infrastructure, largely because of the regulatory oversight; the auditors and agencies like it when we can point at a particular server/storage shelf/device and say "yes sir/madam, the data for [app] resides right there." We do at least get the budget to run it, which makes a hardware geek such as myself happy to play with new shiny devices every couple of years. :) However, we've invested heavily in what VMware likes to call a private cloud. We rarely add physical servers for an app; it's always another cluster node for the virtualized server space we have. Same with storage, although we are only just starting to get into multiple tiering, if only because we didn't really see a good reason to keep 5+ TB of user documents on super-fast SAS drives. :) (Hell, if I could figure out a way to shoehorn in near-line storage or automated archiving of unused files to a tape or MO jukebox that our filer could talk to on demand, I'd do it in a heartbeat.)

      As far as keeping data in an off-site cloud that's run by someone else? That's pretty far forward-looking for our company. We did finally get approval to keep our off-site backups for [heavily regulated application] on media other than tape, so that gives you an idea of just how slow it is to get things approved. (We've been using disk-to-disk and off-site replication for the non-regulated stuff for the better part of two years, and it's made life so very much nicer in that respect.)

  7. Dave 13

    Jumpin Jack Flash

    We've gone all-flash on the tier-one storage (HP 7450) and back up to the cloud. We looked at cloud storage overall, but response times were a bad joke. The speed of light will dictate what's useful and what isn't, not trendy CIO swarms flocking to the latest thing.

Biting the hand that feeds IT © 1998–2019