Easy rip'n'replace storage using cheap kit? Nooooo, wail vendors

Does your storage sales rep have a haunted look yet? If they work for one of the traditional vendors, they should be concerned about their long-term prospects. If I were working in storage sales, I’d certainly be wondering what my future holds. Of course, most salesmen look no further than the next quarter’s target, but …

COMMENTS

This topic is closed for new posts.

Adoption will be driven by customer need

I agree with almost everything you have stated here, Martin. There is a definite case for SDS, from traditional NAS/SAN through to scale-out object storage (I think there may be a case for flash/hybrid arrays, where performance guarantees require dedicated platforms), but the speed of adoption will depend on a number of parameters.

1) End user/Integrator skill set

2) Knowledge of the market, options

3) Use cases/references

4) Time

I believe the complete abstraction, i.e. software on ANY platform, is a little way off yet. It'll require more general confidence in the software technology first. I see no issue with compatibility lists or validated commodity hardware delivered as part of the solution, as things stand. Look at the general move towards prevalidated technical stacks such as Vblock, FlexPod, VSPEX etc.; there is huge customer demand for that type of solution, as it removes much of the complexity and operational cost when rolling out services. I think we need to get to this stage first in the storage market (for all the same reasons), before we start talking about storage software on ANY platform.

Anonymous Coward

Makes sense

Considering most storage systems have good, but not that special, hardware, there is no reason why you can't use software to make commodity hardware do 99.9% of what vendors now offer.

Some things, like having dual-ported HDD for higher availability and so on, are not a problem if you are running double parity protection with more cheap/standard HDD.
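That trade-off can be sketched numerically. A rough, hypothetical model (the drive count and failure probability below are invented for illustration, and it assumes independent drive failures within one exposure window, which real arrays only approximate):

```python
from math import comb

def p_data_loss(n_drives: int, p_fail: float, parity: int = 2) -> float:
    """Probability that more than `parity` drives in a group fail,
    i.e. that a single-/double-parity scheme loses data.
    Crude model: independent failures over one exposure window."""
    return sum(
        comb(n_drives, k) * p_fail**k * (1 - p_fail) ** (n_drives - k)
        for k in range(parity + 1, n_drives + 1)
    )

# 10 cheap drives, each with a 2% chance of failing during the window:
single = p_data_loss(10, 0.02, parity=1)  # RAID-5-style, one parity drive
double = p_data_loss(10, 0.02, parity=2)  # RAID-6-style, two parity drives
```

Under these toy numbers the second parity drive cuts the loss probability by well over an order of magnitude, which is the commenter's point: redundancy done in software with extra cheap drives can stand in for premium dual-ported hardware.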

Yes, there will be issues of crap hardware / crap firmware / crap drivers, but today you can pay serious money for a storage system that gets bugger-all support where it matters. Recently our Oracle storage system needed a new motherboard, of the standard x86 sort, and they *downgraded* the BIOS/ILOM firmware to an older buggier one because they have not certified their own software on their own current hardware!

If the system was stable and reliable then that might just make sense, but it is not and other than hardware swaps, the support they provide for the software/firmware has been practically non-existent.

So, are other established storage vendors the same in dealing with known faults sluggishly?

Would a software-only vendor be any worse in terms of fixing their product?


Blame us customers.

"..... I am hearing instead of hardware compatibility lists for software and limited support....." Yeah, and there is a reason - us customers want guarantees that the product will work reliably, and we expect our software vendors - storage or otherwise - to find and fix the cause of any issues. We will push vendors for five-nines accreditation, at which point dodgy hardware from Joe Bloggs Computers becomes a real hindrance. Yes, I can build a server out of components from a dozen different manufacturers, then run software cobbled together from a dozen FOSS and COTS sources, but then I'm largely on my own when it starts misbehaving, or maybe just not performing as well as advertised. If I use a vendor-approved hardware stack, then I can shift the onus of responsibility for finding the problem onto the software vendor. For the storage software vendor, limiting software to approved stacks is not just a hardware sales survival mechanism; it's a means of actually being able to support the product to the level we require. For the customer, it means that when it goes tits up at 2am on a Sunday, I can actually get immediate and effective support, because the vendor's support people know about the hardware I am using.

I'm all for open-source solutions, and the vendors that provide the most feature-rich storage software that can perform on as many varied hardware stacks as possible will be the ones that triumph in the market. But blindly shrieking at the hardware vendors for not mindlessly supporting their software on any bit of junk out there? Come on, take a step back from the pulpit and think beyond the "corporate greed" spiel, please.


Re: Blame us customers.

's funny how it is 2am on Sunday when it goes south, innit. One of my chaps lost most of his Sat evening when a dodgy SAN attached to a *IX device went ultra wobbly.

I do agree with all (most?) of what you say, but as I have not been an end-user customer for a while, I take a slightly tainted view.

I manage (thus institutionalised dumb) a big group of people who look after large infrastructure spread across multiple datacentres - and we are always looking to reduce our costs. As we guarantee the SLA, the tech behind our guarantee is ours to select - and I can see us going down this route. But I do have the ability to send multiple people to go get good at this.

So maybe my point is - if you pass your enterprise stuff over to third parties maybe this will creep in anyhow

I do not know, I'll report back, maybe


Re: Blame us customers.

Exactly. We are in the same boat: we supply a product which runs under Linux. But we only support it 24/7 if they use the Linux package versions we specify and the hardware we specify, because it is tested with those versions and that configuration of hardware, so we know it works and, if there are problems, we know where to look.

With virtual machines, the hardware is becoming less critical, but not irrelevant.

If the customer rings up at 2am on a Sunday (or 3am on a Monday) morning, you then have to work out what hardware they are using and hope that the vendor also has a 24/7 helpline and will respond in a timely manner.

When you control the hardware and the software stack, if that call comes in at 2am, you can deal with it, and hopefully, have the customer up and running again in a matter of minutes. If you have to spend a couple of hours in different telephone queueing systems, that ain't going to happen!


Re: Blame us customers.

The problem with commodity HW is that it does break, even if the HW is on a compatibility list. Once diagnosed, there is remediation. Who is going to pull the chassis out of the rack to replace a DIMM module? Which one do you replace? If the controller went down, is it a FRU, i.e. can it be replaced while the system is hot? Or will the chassis need to be opened? Who does this? Who trained them? And if this is a LOM facility, is access granted through change management during business hours, and will this service not threaten the uptime of any other devices?

We used to test vendor storage systems by yanking components during a test load and making sure failover of components worked. I don't see these cheap, commodity systems getting the same vetting process before being placed into production.

I have seen a large financial company, an early adopter of this approach, get burned. They certainly don't want to admit to this - yet - but what was billed as a low-cost storage initiative got nicknamed cheap storage with a very high maintenance price tag - with experienced enterprise storage admins relegated to A+ screwdriver jockeys.

Be very careful here. I haven't even addressed evaluating the maturity and size of the open-source community for the software of choice. Who is going to fix the panic that brought the system down? Is the author of the code still around? When will this expert be around to escalate your problem?


Re: Blame us customers.

Surely it would be cheaper to have someone in-house build a DIY commodity system that they understand, and get them out of bed at 2am once in a blue moon?

What you are describing is fear.

Personally speaking I reckon I could put it together and look after it and I'll do it for £40k a year. And when I am not doing that (which won't be very often because I can set it up right) I will be good for at least £80k worth of value added on other things.

It's a false economy and a lock-in based on, well, FUD.


Re: Blame us customers.

Yup! Though some upstart vendors are at least beginning to tackle this. Nexenta are a pretty good example - although they are the software people, they have a list of people who sell fully 'certified' solutions where you have one number to ring for support, and a very detailed 26-page list of supported hardware components if you want to do it yourself.

Even with that though there is still a way to go generally in the market before most people would be willing to actually buy one. Would you trust your storage to one of these?


Re: Blame us customers.

Parts are another problem, @tentimes. If you built it yourself today and a component fails in a year, can you get the same component to replace it? Just look at Intel Atom mainboards: they are replaced every 6 months or so. That means buying a bucketload of extra components, in case something fails. You aren't going to get a replacement PSU at 2 in the morning; with an emergency response service from a proper tier 1 supplier, they will probably have a replacement on site in 4 hours at the outside. Try doing that if you are sourcing parts from Amazon and co.

For non-mission-critical devices, this isn't so much of a problem. For mission-critical ones, it isn't the price of the device, it is how long it will take to get replacement parts on site and fitted. If it takes you 6-8 hours (i.e. waiting for the nearest shop to open, driving there, driving back, then fitting the power supply), you have probably just cost your company the equivalent of a couple of dozen Tier 1-supplied servers in lost productivity and waste on a production line.

For some of our customers, downtime in excess of 15 to 30 minutes costs them at least £50K in raw product they have to throw away - and they have to pay bio-contamination disposal on top of that; that isn't including lost employee time.
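A back-of-the-envelope model of that trade-off (all figures below are illustrative assumptions, not quoted from the thread):

```python
def outage_cost(mttr_hours: float, failures_per_year: float,
                cost_per_hour: float) -> float:
    """Expected annual downtime cost, given mean time to repair (MTTR)."""
    return mttr_hours * failures_per_year * cost_per_hour

# Hypothetical numbers: one failure a year, 100K/hour of lost production.
diy   = outage_cost(mttr_hours=7, failures_per_year=1, cost_per_hour=100_000)
sla_4 = outage_cost(mttr_hours=4, failures_per_year=1, cost_per_hour=100_000)
# DIY sourcing (6-8h to get and fit the part) vs a 4-hour vendor SLA.
```

With these invented figures, the three extra hours of repair time cost 300K per incident, which is why the premium on a Tier 1 support contract can pay for itself in a single outage on a production line.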


Re: Blame us customers.

Modularise and failover. I'm building one at the moment, very cheaply, redundant and self-healing. I don't see the issue. A part fails, replace it. Have spares all the time. No problem.

Honestly, the customer needs to be educated in this case. Soon you will have disruptive start-ups doing it at a fraction of the current price. The only thing stopping them right now is FUD.


Re: Blame us customers.

I hardly know where to start; yet again, people assume commodity means some random crap that we pull out of the cupboard and install random crap on. Some people might; most corporates won't. We might start looking at ODMs if we are large enough, or we might take COTS from Dell, HP or whoever our preferred server vendor is.

Storage vendors often take advantage of the commoditisation of components themselves, hard disks being one of the prime examples. NetApp in the past have had many slides about the overhead of making drives from different vendors look the same size.

But businesses already run on FOSS; more than 50% of the storage vendors out there run their storage software on FOSS. Some of them get clobbered by the same bugs.

To be quite honest, the permutations of hardware actually out there, when you start looking at utilising white-box storage servers, aren't that large. You have to go some to go really off-piste. Most storage arrays, with a few exceptions, are pretty close to being off-the-shelf servers with the vendor's badge on them.

I'm not actually expecting many people to build their own servers but there are many who want to use the same servers across their estates; be it for compute or storage nodes.


Re: Blame us customers.

Having worked at a big giant world-domination computer company, I can tell you for a fact they are using, wait for it, commodity hardware, and have been for a very long time.

Their only VAR contribution lies in testing before shipping and corporate level SLAs they will actually honor.

I sure as hell don't like waking up at 3am.

Anonymous Coward

Value Add has a space

Products like IBM's SVC may have a place - you could be right, and plain and simple storage will go the way you indicate within a decade.


I agree, but....

I pretty much agree with everything said above regarding the management of major systems and the reasoning behind the current model. However, as an end-user and IT manager, I have found that these large system structures are just too expensive for smaller businesses, given the benefits they deliver.

A case study-

We used to have a very expensive managed system on contract from an HP-based vendor. It never let us down and the support was excellent. However, the system only gave us around 16TB of storage and used a LOT of power. The monthly costs were easily pushing a thousand pounds.

We now use two mirrored Synology systems at two sites, each with 130TB of storage.

What we used to spend in one year was enough to set up the first system, with AWS as the temporary online backup while the second system was being set up.

Synology servers (and others) are now commodity items in themselves. Studies have shown (Carnegie Mellon, et al.) that there is no significant benefit in terms of reliability to buying enterprise HDDs, and this further reduces costs.

Supermassive storage centres may yet continue for a long time, but the smaller companies using open-source OSes are bringing about a storage revolution that is going to impinge more and more on traditional storage systems.

A long time ago we reached the position where a removable HDD has greater storage capacity, is much cheaper, and is more reliable than tape storage systems.

That is true commoditisation at the root level - the user's level.

And speaking as an end-user, I can now buy my storage systems practically off the shelf and have them installed and running the same day, with no huge contracts or massive overheads.

This commoditisation is working for us and I cannot wait to see what the future brings.

If I missed the point, please excuse me, this is my first post.

Susi xx


And who do you cal...

if the whole thing goes belly-up at 3 in the morning and every minute of downtime is costing your business tens of thousands of pounds?

There are places for these systems, I won't deny it, but there is also a place for standardised, "professional" solutions. Mission-critical systems, upon which the business relies for its production, or other areas where any unplanned downtime can quickly run into hundreds of thousands or millions of pounds lost, are not the place to save a couple of quid with commodity systems that have no 24/7 support.

Some departments can live with a couple of hours of unplanned downtime and not lose the company bucketfuls of money; here, having redundant commodity systems is a perfectly valid option, I won't deny.


Re: And who do you cal...

Who do you usually call?

I believe that if you build the system yourself, at least you will know what's potting; with your Tier 1 brand, chances are you won't. If your vendor then tells you it's going to cost, or it can't be fixed, you nod and say yes. Roll your own: if you're like me and you firmly believe you can do it better than the dude in the (select storage vendor name and insert here) t-shirt, you'll see that with a bit of planning and some courage you can do it better, more stable, cheaper, faster and more secure. But you have to roll up your sleeves, not just issue a purchase order...

1
1
Silver badge

Re: And who do you cal...

If you are buying Tier 1, you are probably buying 4 hour support with it, which means that an engineer will be on-site with replacement parts within 4 hours.

As I said above, if you are rolling your own, that means doubling the cost, because you need to keep at least one of every component on-site "just in case" - if it is mission critical.


Re: And who do you cal...

I agree, to a point. Doubling commodity hardware cost is likely going to end up cheaper than paying for tier one storage. Add to that the possibility of running your second (spare) system for archive or near-line storage requirements that does not require 24x7 availability and you will be smiling all the way to the bank, and to your performance appraisal. Plan, and get rid of that box!


Re: Jacques Kruger Re: And who do you cal...

".....I believe that if you build the system yourself at least you will know what's potting...." OK, if we just ignore the obvious doubts about you supplying a 365x24 support service (sleep much?), let alone you matching the quality of a proper support team or vendor-built hardware, did you actually stop to wonder why people still buy x86 kit from Dell, HP, IBM or Fujitsu when we could all buy bits and build our own? Apart from the convenience of just issuing a purchase order, that is. Then, as an even better example, stop to consider why we all have Windows servers in our companies (well, most of us) when good and stable Linux server offerings have been available on the Web for over a decade. It's not just FUD, thanks, it's actually real knowledge and experience. Same goes for storage, which is why I don't expect to see any EMC execs crying with anything other than tears of laughter at the article.


Re: Jacques Kruger And who do you cal...

Why do they still buy from brand names? Because the truth is, most people still can't build a simple home PC.

And some of them have certs.


Re: I agree, but....

"Synology servers (and others) are now commodity items in themselves. Studies have shown (Carnegie Melon, et al) that there is no significant benefit in terms if reliability to buying Enterprise HDDs and this further reduces costs"

We have looked at studies similar to your Carnegie Mellon one - and agree with your conclusion.


Re: ecofeco Re: Jacques Kruger And who do you cal...

"....most people still can't build a simple home PC....." Just stop and think about what you're saying. I can build PCs, Linux servers and even grid storage servers. I've even built servers out of odds and ends and got them to boot commercial UNIX bundles. But I still order COTS storage. The simple reason is I can order a vendor-built storage device and I know exactly what I'm getting, how much it will cost and when I'm going to get it.

If I was going to decide to build a grid storage farm instead, well, how do I quantify any of those for my boss? First I have to go hunt down prices for all the bits, then I have to actually assemble the hardware and make sure it will actually run the software stack as I expect. That's assuming I get all identical hardware, because every now and again you get two parts in identical boxes which are actually different, have different drivers and behave differently, and one will work fine with your FOSS bundle but the other mysteriously chokes because the OEM never certified it with your build.

Then how do I manage it? Most vendor storage devices come with nice GUIs that make configuring, provisioning and monitoring a relative breeze. I've built twenty-node Beowulf clusters before; it's not the same experience, and that's without the fun of having to provision storage. Yes, if I wanted to wear a hair shirt, I could rip 99% of the COTS hardware and software out of our corporation and replace it all with hand-built and FOSS-running kit, but it would take me about three years and the business would die in the meantime.


Redundancy

I don't need a backup server with 99.9% uptime.

What I need is 2 systems with 99% uptime. Way cheaper, more reliable, and I won't be called at 2am. If one fails, it gets repaired/sorted out while we use the other one.
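That preference falls straight out of availability arithmetic; a minimal sketch, assuming the two systems fail independently (separate power, network and firmware):

```python
def parallel_availability(a: float, n: int = 2) -> float:
    """Availability of n independent redundant systems,
    where the service is up as long as any one of them is."""
    return 1 - (1 - a) ** n

one_good_box = 0.999                        # three nines: ~8.8 hours down/year
two_cheap    = parallel_availability(0.99)  # 0.9999: ~53 minutes down/year
```

Two 99% boxes together give four nines, an order of magnitude less expected downtime than the single 99.9% box, and the 1% outage on one box is exactly the "repair it while we use the other one" window.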


commodity has already taken over

Where do you think some of those margins come from?

Most of those leading enterprise arrays run Linux (or perhaps a BSD variant); a lot of them run on Intel CPUs, use PCI buses, and use somewhat standard PCI expansion cards (often you can get the same kinds to put in your own server if you wanted). You'll even find industry-standard DIMMs in most of those storage systems.

Most of the margin, of course, comes from the software, something the commodity space really doesn't compete in. Some folks like to think it does (hi Nexenta/Coraid), but it's really quite different. I'm sure at some point commodity will be "good enough" for most folks, but I believe that is still probably a decade away (or longer) at this point.

That doesn't stop folks from trying though. Some may get lucky and have it work (just like some may get unlucky and have their enterprise storage systems shit themselves).

I'll take the risk of an enterprise storage system shitting itself (as it is rare when it does happen) over the daily headache of commodity storage software any day of the week (there is always risk - I'd argue a higher risk of the commodity storage software shitting itself too).

You can often recoup your costs (though not always) by driving utilisation up on your enterprise storage system with things like oversubscribing, thin provisioning, proper snapshots and automation (dynamic changes etc.). Though some storage architectures are better suited than others to these things.

Commodity storage these days really only makes sense for true shoestring operations or massive-scale deployments where the company deploying it is writing all its own software anyway (Amazon, MS, Google etc.).

Otherwise the amount of waste is excessive: you often end up with many storage pools with low utilisation, and it is difficult to move workloads between them.
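The utilisation argument above can be made concrete with a toy model (the pool size, volume counts and fill fraction below are invented for illustration):

```python
def pool_stats(physical_tb: float, volumes: int,
               volume_size_tb: float, written_fraction: float) -> dict:
    """Oversubscription ratio and physical utilisation of a
    thin-provisioned storage pool."""
    provisioned = volumes * volume_size_tb          # logical capacity handed out
    written = provisioned * written_fraction        # blocks actually consumed
    return {
        "oversubscription": provisioned / physical_tb,
        "utilisation": written / physical_tb,
    }

# 100 TB of disk, 40 thin volumes of 5 TB each, 30% of each actually written:
stats = pool_stats(100, 40, 5.0, 0.3)
# roughly 2x oversubscribed, ~60% of the physical disk in use
```

Thick-provisioning the same 40 volumes would need 200 TB of disk sitting at 30% utilisation; thin provisioning serves them from 100 TB at 60%. That is where the "recoup your costs" claim comes from, at the price of monitoring the pool so oversubscribed writes never exceed physical capacity.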


Desktop PCs as servers?

For small businesses (up to, say, 20 PC users), the most cost-effective route may be to use the same PCs for the users and the servers, and keep one or two spare systems ready to replace any broken PC (whether it is a desktop or a server), with system backups to USB3 (or eSATA) external drives, which are cheap items. Even a fairly unskilled (and cheap) user can disconnect one PC, replace it with a spare from a shelf, plug in a recovery external drive and boot from it to restore a system - far cheaper than a 24/7 (or even 8/5) maintenance contract. Larger organisations may need different PCs for the servers and desktops, but even there, having spare servers may well be cheaper than using an expensive server with an expensive maintenance contract.

One thing that IT vendors need to remember: there are far, far more small companies than large ones, and for most of the small companies, their IT system being out for a few hours once a year (or less frequently) is far cheaper than the cost of the expensive equipment that would eliminate the outages.

