Someone got out of bed the wrong side today...
I take it another waste-of-time press release email landed in your inbox before the third coffee?? ;-)
So you’ve founded a new storage business. You’ve got a great idea and you want to disrupt the market. Good for you. But you want to maintain the same old margins as the old crew? Where do we start... So you build your startup around commodity hardware, using the same gear I can buy off the shelf from PC World or order from my …
(Not that I'm on the other side, but...)
I build a storage product that's software-based, easy to use and has some features which will save you significant money over a period of a few years, with ongoing savings.
You want it to have every feature that your existing product has, regardless of whether you use it or not. You expect me to have a...
You love the idea of it being software-based, but want a certified hardware setup on which to run it. More specifically, you want me to certify it on your own particular hardware, at my own cost, to prove that it works. You expect me to keep up to date with the changes that you make to your own hardware in order to stay certified. You won't listen when I tell you that purchasing desktop-grade drives and expecting them to run 24x7 with good performance and low levels of failure is unrealistic, because I "should stick to the software".
Given that I have certified the hardware, any problem which occurs on it is now my responsibility to find and fix even if it comes down to issues with the hardware configuration, build or implementation. And that is before you put your own customised version of an operating system on it and expect my software to run on it without issues. Or change your hardware spec without telling me, or put my software on an underpowered server "because it was all we had available", or...
So now I am a software company with lots of hardware to support your business: multiple versions of hardware, operating system versions and software to test every time I make a change; staff who are knowledgeable in hardware as well as software, just so that they can find your hardware issues when they occur; and a lot of overheads.
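A quick sketch of why that testing burden blows up: the configurations to re-test on every change are the product of the hardware, OS and software variants under support. The counts below are assumed, illustrative figures, not from any real certification programme.

```python
from itertools import product

# Illustrative, assumed counts -- not from any real certification programme.
hardware_revisions = ["hw-A", "hw-B", "hw-C"]         # 3 certified chassis variants
os_versions = ["os-6.5", "os-7.0", "os-7.1", "os-8"]  # 4 customised OS builds
software_releases = ["v1.0", "v1.1", "v2.0"]          # 3 releases under support

# Every software change must be re-tested on every combination.
matrix = list(product(hardware_revisions, os_versions, software_releases))
print(len(matrix))  # 3 * 4 * 3 = 36 test configurations per change
```

Add one more certified chassis or OS build and the matrix grows multiplicatively, which is exactly the overhead being complained about.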
You want 24x7 "enterprise-grade support", even though in my company the people who build the product are second-line support, and not only understand the software but know that fixing the issues is required for the company to survive. With your existing solution, unless you have spent over $100MM, you don't get anything other than people who follow "support process flowcharts" and are bombarded with customer satisfaction surveys after every call, and you passionately hate the support service they provide.
And because it's "just software", you don't expect to pay any significant amount of money for my product, forgetting the time and money that went into building the software in the first place, and the fact that in addition to the engineers I need to provide support staff, offices, labs, salespeople, and all the rest. Figures that never made it into your BOM calculation, that's for sure.
Speaking of which, what the hell were you doing building your own BOM model in the first place? Why do you care so much about the nuts and bolts of the solution rather than the final cost to you and what it can give you? Why are you so upset that we might both be able to benefit from the solution that I am presenting?
And at the end of it all it turns out that you don't want to be disruptive. You just want to be cheaper. But you're so concerned about the unknowns that you go back to your existing big vendor, get a couple of points off their current price, and carry on as you did before.
No... I just asked you to support x86 hardware and let me decide on the specs, not give me these excuses. Neither Microsoft nor VMware gives this excuse when supporting generic hardware.
Yes, you can have a level of support for tested, certified hardware, but trying to be Steve Jobs and saying it's all or nothing leaves you forever a start-up, never the market leader.
Yes, I will put desktop-class drives in it and let them fail, because replacing one a day is cheaper than one enterprise disk a month. I expect your software to deal with this!
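A back-of-envelope sketch of the economics behind that stance: buy the whole fleet cheap and absorb frequent replacements, versus paying the enterprise premium up front for rarer failures. The fleet size, drive prices and failure rates below are all assumptions for illustration, not measured figures.

```python
# Back-of-envelope TCO sketch for the "cheap drives, let them fail" argument.
# Fleet size, prices and failure rates are assumptions for illustration only.
FLEET = 1000                    # drives in the array
DESKTOP, ENTERPRISE = 100, 400  # assumed $ per desktop / enterprise drive
YEARS = 3

# Desktop fleet: cheap to buy, "replace one a day".
desktop_tco = FLEET * DESKTOP + 365 * YEARS * DESKTOP
# Enterprise fleet: 4x the purchase price, "one enterprise disk a month".
enterprise_tco = FLEET * ENTERPRISE + 12 * YEARS * ENTERPRISE

print(desktop_tco, enterprise_tco)  # 209500 414400
```

Under these assumed numbers the desktop fleet costs roughly half as much over three years, even at a failure every day; the whole argument, of course, hinges on the software tolerating those failures.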
No, you will never have too little CPU; show me one server sold today in the x86 space with a lack of CPU resources.
Stop being lazy, expecting the hardware to cover your shortcomings; be proactive, expect any number of odd failures, and ensure your software can cope. Stop looking at disks as failure domains and start looking at hosts or groups that way: evict hosts that have fallen below a certain level of availability and wait for repairs. Move data and carry on.
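A minimal sketch of that host-level failure-domain idea: track per-host disk health, evict hosts whose availability drops below a floor, and re-protect their data elsewhere. The function name, data shape and threshold here are all hypothetical, chosen only to illustrate the policy described above.

```python
# Hypothetical sketch of host-level failure handling: treat the host,
# not the disk, as the failure domain. Names and threshold are assumptions.
AVAILABILITY_FLOOR = 0.7  # evict when under 70% of a host's disks are healthy

def evict_unhealthy_hosts(hosts):
    """Split {host: {"disks": n, "healthy": m}} into (in_service, evicted)."""
    in_service, evicted = {}, {}
    for name, state in hosts.items():
        availability = state["healthy"] / state["disks"]
        (in_service if availability >= AVAILABILITY_FLOOR else evicted)[name] = state
    return in_service, evicted

cluster = {
    "host-a": {"disks": 12, "healthy": 12},
    "host-b": {"disks": 12, "healthy": 11},  # one dead disk: stays in service
    "host-c": {"disks": 12, "healthy": 6},   # half gone: evict, re-replicate
}
in_service, evicted = evict_unhealthy_hosts(cluster)
print(sorted(evicted))  # ['host-c']
```

A single failed disk leaves the host serving I/O; only when a host degrades past the floor does the system move its data and carry on, which is exactly the "evict and wait for repairs" behaviour being demanded.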
And if all this is too hard then be replaced by someone else.
JUST WHATEVER YOU DO, STOP TRYING TO SELL ME STORAGE BASED ON VMs IN A SINGLE NAMESPACE. It will never be fast; it needs to be at the kernel level for virtual infrastructure.
Your post above is exactly why people are falling out of love with VMware.
Oh well. It seemed like it was going to save the world a couple of years ago. Just glad a lot of the clever folk seem to have left to go to Omni. Still, ZFS under Linux seems a little more mature; not sure how much mileage there is in these Solaris x64 clones. Isn't everything commodity these days, including software?
You raise some great points, but this comment scares me:
How, er, great. I’ve not seen a single vendor come up with a feature that is so awesome and so unique that no-one manages to copy it.
The big vendors are NOT innovating anywhere near as fast as they need to, because they cannot.
Their staff are not hungry enough and their marketing teams are too scared to disrupt their existing business. If no one supports the innovators the market will stagnate and innovation will slow dramatically.
So you have a choice: design a unique piece of hardware, build it, test it, fix the bugs, build another one, test it, and repeat until you have something that works.
ASICs are expensive to design and manufacture, so you need to be sure there is going to be a big return on investment. The storage industry is littered with companies that have gone this route, and many of them have failed.
Why? Because in the time it takes to get these things out of the door and into a product, someone has usually developed a software product that can compete at a tenth of the cost. How? They develop in software on commodity hardware, riding the research that Intel pumps in every single year. There comes a point where the gain in Intel power is good enough to meet the needs of the market: if you can deliver enough IOPS out of Intel hardware, why spend $10m designing an ASIC?
More importantly, the same code that you have written in software gets faster with each generation of Intel hardware. If there is a bug in the software, you can fix it with a software update, and you can add additional features over time; you cannot do that with an ASIC.
Convergence is happening at every layer of the Data Centre stack, and it is the use of software and the lower cost of commodity hardware that is driving it. The true value is the flexibility to make a difference to the bottom line of an organisation and to let it compete; this is why people are adopting as-a-service technologies that run as software on commodity hardware.
So if we only do things in hardware and at the kernel level, will we be able to deliver everything that an organisation needs?
The killers for storage are IOPS and latency: it does not matter if storage is accessed at the VMware kernel level if the underlying hardware cannot deliver the IOPS at low latency. So a global namespace, if done right, may actually outperform that old spinning-rust array that you keep buying because it is hardware based on ASICs from 10 years ago.
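The coupling between IOPS and latency invoked here is just Little's Law: sustained throughput is concurrency divided by average latency. A quick sketch, where the queue depth and per-I/O latencies are assumed example figures rather than measurements of any particular array:

```python
# Little's Law: sustained throughput = concurrency / average latency.
# Queue depth and latency figures below are assumed examples.
def max_iops(queue_depth, latency_us):
    """Upper bound on IOPS for a given queue depth and per-I/O latency (us)."""
    return queue_depth * 1_000_000 / latency_us

# Same queue depth, two storage tiers:
spinning_rust = max_iops(32, 5000)  # 5 ms per I/O   -> 6400 IOPS
flash = max_iops(32, 100)           # 100 us per I/O -> 320000 IOPS
print(int(spinning_rust), int(flash))  # 6400 320000
```

At the same queue depth, cutting latency by 50x buys 50x the IOPS, which is why where the media latency lands matters far more than which layer the namespace lives in.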
So if you can take commodity components and FPGAs and use them together to accelerate the software, then you have the best of both worlds. Having a global namespace may just allow you to be more flexible within your architecture, so that the management overheads are lower and therefore the cost to the business is lower. If we keep increasing the amount of storage at an exponential rate, along with the management overhead that goes with it, then surely that is the true race to the bottom.
Flash, dedupe, software and commodity hardware are going to be around for a while, so you had better get used to that global namespace. This is the Meccano and Lego for the next generation of Data Centres.
But take a look at Bluecoat for software running on underpowered hardware, apparently because normal server hardware would destroy their market segmentation. Yes, you can run it as a VM; it's just license-crippled so you'll buy the hardware instead.
Or Check Point for core-crippled software and a complete lack of QA on their own "appliance" hardware. If they actually did some testing on their own hardware to make sure it doesn't segfault, I'd be slightly happier. The hardware is just an excuse to force another license sale with an end-of-life, and to increase the support fees, for hardware we didn't want in the first place. I understand that you don't want to support every Linux kernel version, so tell me which kernel version and which drivers you want to support on what hardware and I'll take it from there.
I suppose the point is that hardware these days is too capable. It's hard to milk the enterprise customer when they can run up a 24-core server with multiple 40Gb/s links to networks and flash arrays.
How long will it be until someone puts IP over Thunderbolt, turning those oh-so-expensive $30 cables people whine about into rather cheap 10Gb/s links between nodes in a cluster?
If you want to use an appliance and something like dtrace so you can see exactly what is going on, that's great. If you make an appliance with a diagnostic tool which can't be run on production machines under load, you can go away until you do something which warrants that premium price-tag.
I've come to the point where I'm depressed about commercial software. Yes, it's often better (for some metric of "better") than FLOSS, but the cost and license awkwardness is just mad. I think I'd get some genuinely cool stuff like F5 and use it to compensate for some less robust systems.
People consolidate to make use of expensive kit... and they have to buy more expensive kit because it supports so many systems they can't afford to lose. How much are you really saving by consolidating to the point of needing multiple 10Gb/s links to your servers? It seems some housekeeping would be a far better investment.
Biting the hand that feeds IT © 1998–2019