What do you call a large and integrated piece of hardware and software that runs hundreds of virtual machines running Linux? That's easy, it's an IBM mainframe. But hang on, it could be a blade server system running VMware. What's the difference? Interestingly, both systems look like big racks with lots of shelves. The mainframe …
Hitachi as a bladed all-in-one system candidate
Hitachi supplies blade servers and Hitachi Data Systems supplies disk storage. Add in networking from somewhere and there is the potential for a Hitachi integrated server + storage + networking system too.
I can see only Sun being able to do that...
- Promising new storage appliances
- Scalable servers (blade and traditional systems)
- A hypervisor to run Windows, Linux and Solaris
- A scalable, reliable, multi-purpose operating system
- Activities in networking projects @ opensolaris.org (Open Network?)
- Management tools coming out (xVM Ops Center 2.0 and QLogic)
There is no other company that can deliver all from one source.
I agree that the trend is becoming obvious.
A big benefit will also be that having one vendor removes the tiresome inter-supplier arguments when something goes wrong: 'It's not our box, it must be the other guy's kit...' etc.
Also prompts some new thinking about where the apps and data sit - on the client or on the server? (i.e. 'local' cloud).
Thin client anyone?
Back to big iron and VDUs - full circle, then!
What you seem to have missed here
Is that the component mash-ups of compute, storage and the various network glues are an attempt to resist the complete commoditisation of the hardware in the data centre by the players who still want there to be big margins in tin.
In a properly virtualised or grid data centre, nobody running or managing apps cares whose blades are running the hypervisor or middleware, because they can't tell: the virtualisation or grid layer completely abstracts this. Equally, storage and networking are being 'virtualised'. As customers realise that big-iron storage is a waste of money, and the storage tin gets hidden behind smart file systems and brokers that are more tolerant of storage network performance, the tin becomes increasingly irrelevant.
Roll in the fact that the IT equipment life cycle is shortening, and the tin itself needs to be completely invisible; it is therefore completely commoditised, with no value differentiation beyond performance per watt or performance per $. This is really bad news for the big tin vendors, hence the attempts to convince their customers that it is not so.
HP surely already does this in its blade systems. The c7000 enclosure is racked and stacked with EVA or MSA storage options, and HP has its own ProCurve switches and management agents (Insight Manager) as well as Virtual Connect technology.
It sells VMware, Citrix and Microsoft virtualization solutions, and so surely is already DOING what this article "predicts". Admittedly, I don't think the ProCurve switches have been aggressively marketed, but they're still there if you want them.
Strange article, and it seems about a year too late.
You make some interesting points that I found quite valid, save for one:
"the IT equipment life cycle is shortening"
Now, I don't have any data on the subject, but intuitively I'd think that, in these times of crisis, companies would have a tendency to keep their existing kit instead of buying even more new stuff.
After all, the failure of Vista in the corporate market, where its uptake was far less than Microsoft anticipated, is a telling sign.
Have any figures you'd care to share?
We're forgetting some things here...
Firstly, what does the customer want?
The customer will have perceptions around what is best for them and what is not. For example, although HP can offer blades with ProCurve networking, they may not like HP's approach, or they may insist on remaining a Cisco shop. Another example is where the customer likes the look of a blade vendor but chooses to maintain a different storage platform.
Secondly, who's going to push the iron?
VARs will sell customers what they want, but they are most likely to lead with products that give them the best chance of success AND the highest margins. Second-tier storage companies (Pillar, 3PAR, etc.) get to play because they pay the channel better than EMC and NetApp, and they give the VARs the ability to differentiate themselves from being just box-shifters.
Thirdly, why blades?
A lot of companies have moved away from blades because they could not support the power density in a rack. Blades have some distinct functionality and manageability advantages, but I see a lot of companies looking at blades that should instead be looking at regular rack-dense servers.
Fourthly, why VMware?
Yes, it's the leader in virtualisation software *today*, but how about Hyper-V? Xen? Virtual Iron? Oracle virtualization? How about the other dozen virtualization technologies that are coming?
Fifthly, the world is full of standards.
What's the inherent value proposition for a customer to acquire server/storage/networking from a single supplier? This article hasn't made the case. There has to be more to this than just a single telephone number to call.
Storage standards (FC, iSCSI) and networking standards (10GbE, GbE, etc.) mean that companies CAN choose products from different vendors and not have to worry about compatibility.
Sixth, EMC selling servers?
Of course, nothing is impossible, and if EMC execs can see profit in doing this, they will.
They are a niche server player at best, and any company that placed its bets on a partnership with them could be severely limiting its market reach. And why did you add Verari but leave out Rackable?
@I can see....
Now you've done it - you'll have woken Matt "PHUX" Bryant.
Don't feed the troll!