14 posts • joined 27 Sep 2010
........... the multi-billion dollar company has repeatedly failed to deliver what the market wants ............ all they have done is hold the technology industry back with a legacy monolithic architecture that isn't fit for purpose for the modern workloads these people run, build a monopoly that allows them to charge extortionate prices for no real reason, and stop innovating for years. More importantly, they won't allow anyone else to innovate either, because they won't let other people's kit talk to theirs (proprietary protocol, anyone!?)
Was that the answer ................ do I win a prize?
Re: At the end of the day..
That argument has been used every time a closed technology silo is under threat from open technologies and approaches, and every time it has proven false and nothing more than FUD from the under-threat incumbents (and their legions of "certified" professionals).
You could equally argue that just as many outages are caused by proprietary software that isn't robust in the first place, as you are entirely reliant on one vendor to code it, support it and test many thousands of configurations with every variant of other people's technologies (let alone their own). That's before they have to add features, design upgrade cycles etc.
What usually happens is exactly what has happened in networking: it stagnates. They play it safe and don't introduce anything revolutionary because it's too much work. We still have a networking architecture designed 20 years ago .......... and it shows! It wasn't even any good back then!!
No single company can afford to build and maintain a stable platform AND drive the required innovation into it. The only way is to build an open ecosystem and divide the work up, if you architect the platform correctly in the first place then a lot of the FUD is simply irrelevant.
If you want proof of that in action, just look at the x86 ecosystem, Linux or even OpenStack. All are successful and all operate on the open principle.
Re: It's good to see
It's actually been standard in pretty much any kit from an enterprise vendor for a long time, x86 based or not (HP, IBM, Dell etc). Even on their low-end gear, typically SMB stuff.
What is new is that the white box manufacturers have finally caught up, so those who are used to building servers themselves can now get the features at a price point that is right for them.
Re: Badly Designed Server = Server running Windows
Oh Eadon just shut the f*ck up will you!!
I'm getting sick and tired of your endless droning on about Linux. We get it, you don't like Windows!
No, the topic wasn't specifically servers and clouds, it was point hardware management, which occurs more often in smaller setups where it's usually one man and his dog trying to keep the lights on. Operating at cloud scale (a la OpenStack) brings its own challenges around HW management, as I'm sure you are aware, and that has no relevance to anything in the article (with maybe the exception of IPMI).
In addition, trying to link an OS to a perceived increase in HW failure rate (and therefore associated HW management) is at best tenuous, with no basis in evidence (do you have any hard statistics you could share?). It smacks of a university-level understanding of technology: the fanaticism that comes with lots of principle but little real-world experience!
For the record, I spend most of my day job in and around OpenStack-based platforms, and the most popular request in the past few months has been how to run Windows-based VMs in that environment. You may ask why, and the answer is simple: because people want it!
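(For the curious, getting a Windows guest running on OpenStack mostly comes down to uploading a virtio-enabled image and booting a server from it. A minimal sketch, assuming the modern `openstack` CLI; the helper below just assembles the command lines so the shape is visible without a live cloud, and the image file, flavor and network names are illustrative placeholders, not anything from this thread.)

```python
# Hedged sketch: the two `openstack` CLI calls typically used to run a
# Windows VM on an OpenStack cloud. All names here are made-up examples.

def image_create_cmd(name, qcow2_path):
    """Upload a virtio-enabled Windows image into the image service (Glance)."""
    return ["openstack", "image", "create",
            "--disk-format", "qcow2", "--container-format", "bare",
            "--file", qcow2_path, name]

def server_create_cmd(name, image, flavor, network):
    """Boot a server from that image via the compute service (Nova)."""
    return ["openstack", "server", "create",
            "--image", image, "--flavor", flavor,
            "--network", network, name]

# Example command lines (would be passed to subprocess.run() on a real cloud):
upload = image_create_cmd("win2019", "win2019.qcow2")
boot = server_create_cmd("win-vm", "win2019", "m1.large", "private")
```

The one Windows-specific gotcha is that the image needs the virtio drivers baked in, otherwise the guest can't see its disk or NIC.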
Re: The blog
This is pretty typical behaviour by the police in London I would say.
I had a motorbike fitted with a tracker stolen about 6 months ago. Tracking system went live 20 seconds after the thieves had stolen it, the company who provided the tracking service kept me up to date and passed all the data onto the Met police control room in pretty much real time.
I actually saw the data afterwards, and they had a live track of the bike being moved through the streets and its final resting location (a lock-up garage in a scummy estate). It even knew when the thieves were disconnecting the electronics to try and disable it (so they knew someone was in the garage)!
The police simply couldn't be arsed to go there and recover the bike. It took them 4 days to actually get a warrant to go in, after I adopted an approach of calling them every 30 minutes to ask for an update. Of course by this time it was gone, along with about 6 other stolen bikes that were in there! (just their disconnected trackers left, according to the report)
After my experience it's no surprise to find that an average of 35 motorbikes are stolen each day in central London (almost none of which are recovered). Some insurers are now refusing to insure against theft inside London!
P4000 is already VC only
Take a look at the P4800 SAN. It's been around for about a year now and is a rather clever piece of engineering. It uses normal blades and some MDS600 disk shelves (probably the densest drive shelf out there) and operates at 10Gb inside the chassis, and outside if needed (quite useful when you want a self-contained environment).
Of course the advantage is that the P4000 speaks iSCSI, so it uses the normal VC Ethernet modules; none of this legacy FC protocol (and all its limitations) here, thank you very much :-).
They are interviewing the wrong people!
The problem with cloud and surveys is that they are talking to the wrong people!
Unfortunately in most businesses IT is seen as the problem not the cure and is simply being left out of the conversation! I have conversations daily with IT people that go along the lines of "we want to do cloud, but we need to define what it means for us and how we best implement it" (or occasionally a flat out denial, or sometimes hilariously "yeah we do it ..... here is my ESX cluster").
Unfortunately the whole premise of cloud is that you consume a service, NOT technology. This is exactly the language businesses want to hear (who cares how it's achieved, as long as someone can sign up to some SLAs). It is getting to the point whereby a business can consume pretty much anything they want as a service (want some CRM? Go to Salesforce. Want office productivity? Use Office 365. SAP? Yep, that's coming as well).
Regardless of whether we like it or not, most companies are quite happily consuming various cloud services today. Clue: if you have developers in your business, I will bet a tenner that they use Amazon or another IaaS provider today. They won't bother telling you or even asking permission. As far as they are concerned, you simply get in the way and aren't required. I worry that it will eventually hit a point whereby a company is consuming more services externally than internally, and someone will ask the obvious question ........ why do we have IT departments full of techies?
Oh, and to the person who mentioned Microsoft BPOS: that isn't cloud, it's managed hosting (something else that's at risk, IMHO).
I think you have fallen for the hype that everything should be run as a virtual machine ....... (it always makes me laugh when VMware tell me I can get better performance as a VM than on physical :-))
Hyperscale customers HATE hypervisors and "advanced" management, adds cost and complexity for no gain (we are talking 10,000+ physical boxes performing the same function).
You will find that most of these servers run a stripped-out Linux core running Apache or some other open source product, acting as a huge scale-out farm (usually web facing). In that case 4 DIMMs is more than adequate; throw in a bit of I/O (a couple of 1Gb ports will do nicely, thanks) and maybe a cheap 3.5" disk to run the OS on ....... and strip out everything else (no DRACs, iLOs etc) to keep cost low.
For management you only want 2 things: the ability to deploy an OS (simple PXE will do) and the ability to reboot the server. Troubleshooting these things goes as follows ........ if it's broken, turn it off, then turn it back on ....... if it's still broken, throw it in the bin and put a new one in!
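That entire management model fits in a handful of IPMI calls. A minimal sketch, assuming ipmitool and a BMC reachable over the LAN interface; host names, credentials and the dry-run wrapper are illustrative, not from the post:

```python
# Hedged sketch: "deploy an OS and reboot it" management via ipmitool.
# dry_run=True returns the command list instead of talking to real hardware.
import subprocess

def ipmi_cmd(host, user, password, *action):
    """Build an ipmitool command line for a remote BMC over IPMI-over-LAN."""
    return ["ipmitool", "-I", "lanplus", "-H", host,
            "-U", user, "-P", password, *action]

def pxe_next_boot(host, user="admin", password="changeme", dry_run=False):
    """Force the next boot to PXE so the node reimages itself from the network."""
    cmd = ipmi_cmd(host, user, password, "chassis", "bootdev", "pxe")
    return cmd if dry_run else subprocess.run(cmd, check=True)

def power_cycle(host, user="admin", password="changeme", dry_run=False):
    """Step 1 of hyperscale troubleshooting: turn it off and on again."""
    cmd = ipmi_cmd(host, user, password, "chassis", "power", "cycle")
    return cmd if dry_run else subprocess.run(cmd, check=True)
```

Reimaging a broken node is then just: set PXE as the boot device, power cycle, and let the PXE server hand out the stripped-down Linux image. Anything that doesn't come back goes in the bin.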
Oh, and you are right that CPUs are too powerful. Most of these customers are investigating ARM or Atom based platforms (even 1 CPU and 4 DIMMs is too big!) as they are even smaller and more efficient (see the HP Moonshot announcement for what they are doing in this space; it should give you a good feel for what these people actually want and need).
"but unlike those Matrix add-ons, PAN Manager is not tied to HP storage and will work with external storage from other vendors"
Matrix doesn't require HP storage either and will work quite happily with third party storage. There are some additional integrated functions if you do use HP, but there are many installations out there that use non-HP storage.
Re: Intel's careful words
Allison, I think you need to learn English!
"We remain firmly committed to delivering a competitive, multi-generational roadmap for HP-UX and other operating system customers that run the Itanium architecture."
The clue is that they mention Itanium in the quote. If they had just said HP-UX then maybe you'd have a point. I read that quote as saying they are willing to keep Itanium development going for as long as there are operating systems that want to use it.
Actually it should be Cisco that are worried!
They are moving into a space that is already heavily competitive, and HP have spent the past few years wiping the floor with the other two incumbents (Dell and IBM).
Also, as HP and IBM have proven, just because something is commodity doesn't mean there can't be innovation. It takes a lot of knowledge, time and money to make excellent x86 servers (especially around power/cooling and management) and Cisco are starting from scratch; even Dell have learnt that commodity isn't about price.
All Cisco have done is open their customers up to some aggressive poaching by other vendors (if Cisco can go after the server incumbents' customers, then surely it's only fair to extend the offer the other way ............ as many are!).
Cisco are a high-margin (read: 70% margins even after discount), vendor lock-in company who have spent years milking customers because they have had little serious competition. All they have done is make themselves a target and given themselves no armour to defend with (starting to compete against companies that are used to 10-20% margins when you are used to living on 70% isn't a sound business strategy). Love him or loathe him, Hurd made HP into a streamlined, efficient operator (at least compared to other vendors).
Converged IT isn't about taking three companies' products and selling them under one banner; it is about taking three technology streams, merging them, and making the solution greater than the sum of its parts. I fail to see how you can do that if you don't merge the R&D departments. Arcadia is nothing more than an SI (and we have plenty of them already).
So what do I get by buying a Vblock, VCE stack, VCN stack or whatever they are called these days, over buying the same components separately? What functionality do I get because the overall solution is greater than the sum of its parts? The answer is bugger all! All they have managed to do is let a customer buy a pile of bits with one part number, something that SIs and proper vendors have been doing for years.
I hate that they throw the term "cloud" around as if they actually know what is required to build one. The trick to making a cloud work is all in the software stack (orchestration, portals, catalogues), yet at no point have any of the vendors/coalitions mentioned in this article actually addressed this point. Hardware is actually a secondary consideration, as a pile of bits from three different vendors is still a pile of bits from three different vendors, even if you have bought it from one supplier. What you really need is some form of overarching software stack that ties it all together (and does it out of the box; there should be no customer-specific integration work required), and that's what HP's and IBM's solutions offer. The solution should be greater than the sum of its parts. Oracle do not have a cloud offering, BTW!
There are three features that are a MUST in a cloud environment: inbuilt orchestration, inbuilt service catalogues (which should take apps into account) and service portals. Show me where this is in any of the products mentioned in this article?! All you really need in the hardware is the inbuilt integration points to let the software hook in and drive it.
All Vblocks and FlexPods are is static bundles of hardware, with none of the actual components required (pre-integrated and out of the box) to do anything remotely cloudy!
"Every supplier involved in the Vblock and FlexPod efforts will be privately pleased that all the others are there at some level. This demonstrates a degree of openness that is missing from, for example, the HP and Oracle integrated stack offerings."
No it doesn't!! Openness means the ability to rip out one vendor and replace them with another WITHOUT changing the architecture or removing functionality. Could I, for example, use HP servers instead of Cisco in a Vblock? Could I use IBM storage instead of EMC? Could I take a VCE environment and replace the E with an N (EMC with NetApp, for those not paying attention) without affecting the rest of the stack or having to rework something? The answer is no; these systems are far more closed than any solution out there. Let me give you an example: can the customer choose which hypervisor to use? A hypervisor is just a tool for resource pooling, so why do I have to use VMware when Hyper-V is probably good enough? A decent stack should allow the customer some leeway in individual components. There is no truly open stack out there, but some are more open than others.
Re: errmm, no
You really don't know much, do you!
First off, Compaq didn't invent it; it was actually a DEC product. I suspect you have never even played with one, which is why your comments are so ill-informed (if you had, you would recognise the WWNs set from the DEC pool, for example).
Secondly, it isn't behind in speeds and feeds; it's still more performant than most midrange arrays (and all the major ones at least) and, more importantly, it's still as easy to use (unlike most other arrays). The only areas it lacks in are features like thin provisioning and auto-tiering (soon to be corrected, judging from a roadmap I have seen).
Thirdly, there are very few arrays that can ACTUALLY do true online firmware upgrades. If you really understood arrays you would understand why (clue: it's bloody complicated to engineer, which is why most can't do it!).
Yes, its iSCSI is poor, for the same reason that the EMC and NetApp implementations are so poor (in fact most implementations except P4000/Equallogic). Yes, it doesn't do NFS and CIFS; so what? It's an FC block-based array, what's your point?
Even bigger fools than those who buy the XP, eh? (Sorry, did you mean the XP only, or are you lumping in the HDS resell of the same tech, the USP-V, as well? Hitachi is the developer, not HDS!) Well, this definitely suggests you don't know what you are talking about! Those that buy the XP will continue to do so for good reason; 3PAR, whilst a good buy, isn't half the array that the XP is! (think mainframe/UNIX environments and their requirements compared to x86 and you will see why)
As for the rest of your comments, I'm not going to bother, as you are entitled to your opinion.
Re: Mark Twomey
"What we see in the Hitachi OEM’d P9500 is a very backward-looking design statement, this is monolithic system design circa 2005. There isn’t an inkling from anything here that the P9500 was designed for the Private Cloud concepts being adopted by customers and productised by VMware and the like."
Actually Mark, you've just described most of EMC's products there!
Also, what is it with this constant use of the cloud moniker? Those of us who actually know what clouds are don't call them clouds, and certainly don't try to dress anything up to be what it isn't. I pity you if you think that everyone either needs one or is trying to build one!
The P9000 range will be what the XP range was: an array for customers who cannot and will not tolerate anything less than absolute data integrity and maximum performance! Those customers that want to build IaaS will buy the appropriate products (from the HP stable, the P4000 and X9000 spring to mind as a starting point). The two approaches require different design methodologies; if you don't understand that, then I suggest you probably don't understand the question and shouldn't be commenting!
Also, Chris M, are you going to get a competitor to quote on all vendors' launches (maybe ask an HP or Hitachi person to quote next time EMC release an array)?