Forrester says it's time to give up on physical storage arrays

The storage industry knows that the market for physical arrays is in decline. Cloud storage and virtual arrays have emerged as cheaper and often just-as-useful alternatives, making it harder to justify the cost of a dedicated array for many applications. Forrester knows this, too: one of its analysts, Henry Baltazar, …

  1. Nate Amsden

    cheap is relative

    What counts as a cheap x86 server? My fairly loaded HP DL380 Gen8s (24 core/48 thread, 192GB, 4x10G, 2x8G FC, w/4 hour onsite support) and vSphere Enterprise Plus are around $30k each. The cost hasn't changed much since we bought our DL385 G7s 3 years ago (other than the systems being about 2x faster than our 3 year old boxes in the same power envelope).

    Maybe to some that is cheap, maybe to others it is not. To me it's reasonable with the ability to get good consolidation ratios out of them, and they have been *extremely* reliable.

    Though HP 3PAR has done pretty amazingly well at being a low-cost leader in all-flash systems recently (relative to their main competition, anyway).

    1. Anonymous Coward

      Re: cheap is relative

      Your server price hasn't changed because your HP reseller knows that you're going to stick with HP servers and they are not offering you a competitive price.

      Try looking at a Dell or Supermicro option of the same thing and watch that price come down considerably.

    2. Anonymous Coward

      Re: cheap is relative

      It's cheap in comparison to Unix servers. Similar performance in the HP-UX or AIX category would run you into six figures.

  2. Anonymous Coward

    Buy like Google

    To get cheap servers, you have to do a Google and buy from Chinese makers through low-markup middlemen. Increasingly, the middlemen are getting cut out too, and we are starting to see big ODMs with a US presence. If you aren't too adventurous, try SuperMicro. They have a good business and support team in place selling rice-boxes already.

  3. Jim O'Reilly
    Holmes

    All-flash arrays need Plan B

    This is clearly a major trend, and unstoppable. It doesn't work for all-flash arrays, since the flash modules are proprietary to save cost and space, and to increase performance.

    Note that most servers have just a few drive slots. This doesn't work well for today's HDDs, since the idea is to have a lot (60 drives???) of bulk SATA as the secondary storage behind a flash primary tier.

    More detail is needed or this whole idea will rebound!

  4. Tom Chiverton 1

    Where else can I put my shit that's controlled by me then ?

  5. Ole Juul

    The new physics

    The time has therefore come to recognize that arrays are expensive and inflexible, Baltazar says, and make the jump to virtual arrays for future storage purchases.

    Fancy words for outsource and off site.

    Storage-watchers will know that Baltazar's post doesn't really say anything startlingly new. But the fact he's saying it at all, and saying it so bluntly, is surely notable.

    The very fact that he thinks that the internet is reliable enough is what's notable.

    1. justincormack

      surely?

      You could read the article; he says nothing about outsourcing or offsite. Software. You know, that stuff you run on your servers. He is suggesting you get servers with storage and run software on them, rather than buy hardware and software packaged up together.

    2. Pascal Monett Silver badge

      Re: The new physics

      The Internet is reliable.

      It's the connection to it that is not, either on your side, or on the side of your service provider (not your ISP, I mean the server providing the service you want to use).

  6. Henry Wertz 1 Gold badge

    Well..

    Well, when I looked at the specs for Sanbolic, for example, it could use EMC and a few other storage arrays; it supports "cloud storage" (no, I would not use this either...), and it supports flash and whatever disks you throw into the systems. They describe this setup as "virtual RAID"; whether it's disk-level like RAID, file or block level, or uses its own distributed file system, I don't know. It does look like these setups all push using a pool of local disks for storage.

    I have noticed more modern servers no longer have the space to stick a good 5 or 6 disks into them, but as far as I know storage chassis are still on the market, so you can hook plenty of disks up to each server if you want. Of course, if your usage is extremely storage-heavy (compared to the number of servers) you really won't want to do this. It's definitely workload-dependent.

  7. Anonymous Coward

    Data needs to be local

    How can you move 1000TB of data around? The storage needs to be local to where it's being used. Increasingly, the data is coming in from the cloud. What happens in the cloud stays in the cloud(R).
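    The commenter's point about data needing to be local can be made concrete with a back-of-the-envelope calculation. A minimal sketch, assuming decimal terabytes and a link that runs perfectly saturated with zero protocol overhead (both assumptions, chosen purely for illustration):

```python
def transfer_days(terabytes: float, link_gbit_s: float) -> float:
    """Days needed to move `terabytes` of data over a `link_gbit_s` link,
    assuming the link is fully saturated with no protocol overhead."""
    bits = terabytes * 1e12 * 8           # decimal TB -> bits
    seconds = bits / (link_gbit_s * 1e9)  # Gbit/s -> bit/s
    return seconds / 86400

# Moving 1000 TB over a saturated 10 Gbit/s link:
print(round(transfer_days(1000, 10), 1))  # about 9.3 days, best case
```

    Even in this idealised case it takes over a week; real-world overhead and contention only make it worse, which is the data-gravity argument in a nutshell.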

  8. Anonymous Coward

    As the world turns...

    First it's proprietary, then it's "whitebox", and swaps back at the next iteration when someone comes up with one, or more, proprietary solutions. Reminds me of the dialectic, which isn't necessarily a bad model.

  9. Will Godfrey Silver badge
    Thumb Down

    Asimov's hyperspace?

    Unless someone has found a way to utilise this, there will always be a need for physical storage somewhere. Personally I'd rather have it where I can reach it... quickly.

    1. Tom 13

      Re: I'd rather have it where I can reach it

      Yet even now there are some bright lights out there who don't understand that.

      I used to help with a 25,000 (35,000 these days, I am told) person convention on the east coast of the US. When I handled their registration system, everything was onsite. We'd paid someone to write custom software to handle registration, retail sales, and an art auction.

      They decided they wanted to move to something they'd written. They also opted to consolidate our outsourced pre-registration system into it. Now, while the pre-reg system did live in the cloud while people were sending money, when online registration closed the contract said they would create a backup for us to restore to a server on site, which we did. Then the cloud copy became our backup in case anything bad happened to the server at con.

      While consolidating wasn't necessarily a bad choice, they also opted to move everything to the cloud. Yeah. So registration day arrived this year and the facility was having trouble with their internet connection (the convention rented its own T1 line when I was running things). So almost none of the 8,000-10,000 people who were standing in line the first day (Thursday) got registered.

  10. Dapprman

    Until workmen outside cut through your comms cable ......

    It can and does happen (Power cable for one company I worked for, water mains for another).

    We hear all about the resilience built up at the other end to near-guarantee your data, but there are always single points of failure much closer to home.

    1. The_Idiot

      Re: Until workmen outside cut through your comms cable ......

      <

      We hear all about the resilience built up at the other end to near-guarantee your data,

      >

      Hear about them - yes. See them in real operation, on too many occasions (for me at least, but I'm an Idiot) - definitely no. Otherwise we wouldn't hear of any service-provider-side cloud access failures. Wasn't that part of the logic?

    2. Tom 13

      Re: Until workmen outside cut through your comms cable ......

      Yep. I still remember the day somebody running a backhoe down by the police station took out the power grid to the industrial complex in which I worked. Somehow or another the guy running the backhoe managed to get out unharmed. Funniest part was when one of the secretaries decided that since the computers were down, she'd put her old secretarial skills to use on a typewriter. When she sat down at the IBM Selectric she suddenly realized it also had a power button.

  11. Anonymous Coward

    All About The Workflows

    It did make me wonder who has contracts with Forrester.

    I recently heard a senior VP of a big player say "if you opt for the virtual version you don't have to think about infrastructure". Wrong, wrong, wrong! Just because it's virtualised doesn't mean the infrastructure needs of the combined workflows disappear; forgetting that just leads to a world of pain.

    Virtual storage, like virtual everything else, is progress, particularly at small scale and hyper scale, and to enable hybrid deployments, but for a significant part of the stuff in the middle (and some hyper scale), storage arrays will be with us for some time (> 5 years).

  12. moonraker

    How about RedHat/Inktank Ceph

    Why does the research not mention Ceph? It is a rock-solid solution based on a shared-nothing architecture (RAIN). It is a concrete example of a distributed block + object storage system that really does provide an enterprise-level solution.

  13. RangerRick

    Increasingly, it's about whether to own any hardware at all in the cloud

    The storage array vs. commodity server/virtual array shift has been underway for some time and is certainly accelerating. However, the conversation is increasingly not just about legacy storage arrays vs. commodity servers with storage software - it's hardware ownership vs. IaaS rental in the cloud.

    At SoftNAS, we are increasingly seeing companies choose to move away from the legacy storage array and the on-premises or colo data center into a pure cloud configuration, where a virtual storage appliance in the cloud is the answer for mission-critical storage. When one examines the cost of just renewing a 3 to 5 year maintenance contract on these expensive arrays, it's easy to see a path to moving entirely to the cloud rental model instead of paying maintenance and ongoing data center operational expenses.

    So there are multiple paths forward - not just which server to buy, but whether to buy one at all, and whether to continue owning the hardware or let someone else own those costs and headaches and just rent the IT infrastructure instead.
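    The renewal-versus-rental comparison described above can be sketched as simple arithmetic. Every figure below is hypothetical, invented only to show the shape of the calculation, not vendor pricing:

```python
# All figures are hypothetical illustrations, not real quotes.
array_maintenance_per_yr = 60_000   # assumed annual renewal on an aging array
datacenter_opex_per_yr   = 25_000   # assumed power, space, remote hands
capacity_tb              = 100      # data actually stored
cloud_usd_per_tb_month   = 50       # assumed blended cloud $/TB-month

on_prem_annual = array_maintenance_per_yr + datacenter_opex_per_yr
cloud_annual   = capacity_tb * cloud_usd_per_tb_month * 12

print(on_prem_annual, cloud_annual)  # 85000 60000
```

    The point is only that once maintenance and data-centre opex are summed honestly, the rental model can come out ahead; with different assumed figures the comparison can just as easily flip the other way.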

    1. Pascal Monett Silver badge

      Sure, these days it's sexy to go to the cloud, we got the memo.

      When enough companies have been burnt by connections failing at the worst possible moment, or providers on the fritz for days on end, or simply disappearing from one day to the next, you'll see a move backwards and enterprise storage on premise will have a resurgence.

      Nothing new under the sun. We started IT with mainframes and dumb terminals, then we got PCs and distributed computing, then we returned to central servers (but not mainframes). Now, the Internet is driving us back to dumb terminals, and the cloud will supplement that with virtual storage.

      Looks like we're going back to mainframe days, but these days it's called cloud.

      Someday, we'll backpedal on that too. It's inevitable.

  14. Night Owl

    What rubbish. I assume Forrester has a fat research contract from one or more "virtual" array companies to promote the idea of virtual arrays.

    What current-architecture array is NOT an entirely software-defined storage product today? The only one I can think of is 3PAR, with its ball-and-chain custom ASIC architecture (where the next version of the product is guaranteed to make yours obsolete, because you won't have the new ASIC in it). They are almost all running on one of the same 2-3 ODM storage platforms. What are you paying for? The software. The development and testing of the features in that software cost money, and that's what you are really paying for.

    If anything, software-defined storage and virtual arrays are going to cost more, because of the much greater testing and integration work with multiple hardware platforms that must occur for them to be "enterprise" grade.

    And do you want to deal with all the finger-pointing when something goes wrong? It's the same reason I don't run open source on anything mission-critical - when I am having problems, I want someone on the other end of the phone whose job is dependent on this product working. Not someone who is going to tell me that "it should be working" and I should "talk to my server vendor".

    1. hoola Silver badge

      True, much of it is a con.....

      Almost all of these high-performance virtualised arrays, regardless of whether they have the fairy dust of "FLASH" stuck in them, have one major problem:

      They are all totally rubbish at high rates of sequential transactions. All of the designers have ultimately ended up with similar solutions, and it does not matter how much x86 hardware you put behind them, they do not work.

      The "FLASH" element makes it worse, as these are sold with the promise that SSDs will make it go faster. They do, up to a point, but it is still the same SAS channels and interfaces behind them. What a virtualised array does is maximise profit for the manufacturer by allowing standard components to be shipped with a bit of software.
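      The "same SAS channels" bottleneck can be illustrated with rough numbers; the drive speeds and lane counts below are generic assumptions for a sketch, not measurements of any particular array:

```python
ssd_count    = 24    # SSDs in one shelf (assumed)
ssd_mb_s     = 500   # sequential throughput per SATA SSD (typical ballpark)
lane_mb_s    = 1200  # one 12 Gbit/s SAS lane after 8b/10b encoding
uplink_lanes = 4     # x4 wide port from shelf to controller (assumed)

drive_bw  = ssd_count * ssd_mb_s      # 12000 MB/s the drives could deliver
uplink_bw = uplink_lanes * lane_mb_s  # 4800 MB/s the shelf uplink can carry

print(drive_bw / uplink_bw)           # 2.5x oversubscribed
```

      Under these assumptions the drives can source 2.5x more sequential bandwidth than the shelf uplink can carry, which is why adding more flash behind the same interconnect stops helping at some point.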

    2. Anonymous Coward

      Pardon? Nobody runs open source software on mission critical equipment?

      Have you been living under a rock, or do the words "Oracle Linux" or "Red Hat Linux" mean different things in your parallel universe? Open source software that you can pay for paid support on? Or are you just blinkered, desperately hoping to keep the tide of progress at bay?

      Ironically, most of the traditional SANs use some sort of *BSD or Linux under the hood. You might only have seen the product GUIs with the branding on, but I've had my fingers in the innards tinkering around.
