Is Ashlee still at El Reg?
Did he get a legitimate job with the New York Times ?? ..where he can make stuff up and they'll publish it ??
65 publicly visible posts • joined 1 Apr 2007
I used to work there. So much stuff is automated (they called it MECHANIZED, harking back to the first mechanical switches) that they don't need 60% of the employees they have. They send management out to do REAL work for a change (fixing inside wiring and collecting coins ... well they USED to collect them when they had pay phones). They save enough doing this for three days to pay for whatever concessions they make.
This is ESPECIALLY onerous if you are at the end of a life cycle. Imagine making a huge investment in "P" class blades, only to see the new "C" class blades supersede them next year.
Vendors will promise overlap, and then hose you in the few final months. GET ROAD MAPS from your vendors, and get promises of support IN WRITING. You will never get it, and the roadmaps you, the CUSTOMERS, will see will likely be 6-month jobs.
Realize, though, you can get a ton of value from Blade technology, and most customers tend to select a single server vendor anyway (for spare parts advantages, sophisticated management features etc.) Nothing INHERENTLY wrong with proprietary in that sense.
If you know the "series" of enclosure, blade form factor, infrastructure etc. is nearing five years, you can bet your vendor is AT LEAST designing the next generation of incompatible successors.
If you can, get 36 month leases with "tech refresh" in mind. If you can abstract your software at all (virtualization, streaming, diskless boot, etc.), you can shield yourself to some extent from hardware changes.
Virtualization reduces your servers to files on a disk, making you much less dependent on which vendor supplies the hosts. "Rip and Replace" goes away if you can live migrate from one platform to another.
Just be careful, and pick your platform wisely, as you won't be migrating from AMD to Intel or vice versa.
In this game, Citrix XenServer is CLEARLY a larger immediate threat. Though you pay a nominal charge for the "hypervisor," the management is free, and HP's version is KILLER !! ... especially for branch offices where VMware is anything BUT cost effective. Throw XenServer on some blades in a "shorty" enclosure, plug it into a 110 socket and manage it remotely from HQ. HP finally has a "data center in a box" it can drop anywhere (and it is virtualization that makes this more than a marketing buzz phrase).
Even if you blow 5K on Virtual Center you will need to cluster it to be safe ... and light up another SQL server...
MSFT may take the marbles long term, but you can see VMware behaving like the desperate organization they've become, and in the near term it looks like Citrix is making whopping strides every quarter to play catch-up with ESX, nipping at their heels faster than VMware likes to admit.
This is tacit proof that the hypervisor IS a commodity, and VMware has put too high a premium on its collection of goodies to withstand competitors who offer value without burden.
James:
You are right about Virtual Iron on almost all counts, but despite their well-developed features and dirt-cheap pricing they don't stand a chance alone, and I don't know who could possibly buy them that would help.
They have next to ZERO channel momentum, and if the most channel-friendly and channel LOYAL company on the planet (that would be Citrix) can't get first-tier resellers to wean themselves off the VMware teat, what chance does vFe have ??
All the resellers I speak to HATE VMware with a passion, and ADORE Citrix, but are addicted to the run-rate, and have invested a lot of time and money to train their SEs.
Virtual Iron has what ?? A sales force of a dozen ??
Ashlee is probably spot on about it being a mistake to sh*tcan DG because of VMware's market woes.
NO company can expect to maintain 95% market share forever, and with both Citrix and Microsoft double-teaming them, VMware did the best they could to inoculate big customers with long-term agreements (for slim margins.)
However, with nothing left to do but charge a premium price for a rapidly-commoditizing product, all they can do is look for little add-ons to sell, like B-hive, to keep the boat afloat.
Customers smell their fear, Wall Street smells their fear, and all of a sudden those nice laid-back VMware Salesfolk and SEs turn into EMC versions. It's inevitable when Sales Managers drive that kind of defensive do-or-die behavior.
It really doesn't matter at this point who runs the company. The days of explosive growth are done. Microsoft and Citrix will be enhancing Hyper-V and XenServer VERY rapidly, and within a year or two will erase any feature-function advantage VMware has built up over the decade.
VMware still has an excellent product and a large, satisfied user base with a significant investment in their flagship product. That being said, it is no longer a rising star growth company, but a cash cow, and its role as a strategic IT provider is rapidly diminishing.
Any competent manager can now guide this company off into the pasture, and maybe it's better for Diane Greene that she not be the one to do that.
Yes, it is a PITA now, but Citrix is not letting their money go to waste. I suspect by 2009 Xen will be a very different beast with all the features of VMware and multiple times the performance. Quite likely, it will leave Hyper-V in the dust on all counts, but of course MSFT fans will choose that path just because ....
KVM is getting a lot of love from Linux geeks, and will certainly help fragment the market somewhat.
What we all need to consider is that everyone has nowhere to go but up EXCEPT VMware who has nothing to do but lose ......
The future of multi-cores is to run multiple instances. As hypervisors become commodities, OS licensing will have to follow, and we'll be running discrete apps on their own mini-OS. No more conflicts when each app has its own personal OS ... Hell, dedicate a core to an instance ... plenty to spare when you have a dozen !!
Comcast began as an "all-you-can-eat" model. If you go to a buffet, do they charge you more if you are fat ??
Steady customers of buffet restaurants would SCREAM if they were singled out as "heavy eaters" and charged a premium.
Comcast is making plenty of money. If they change their model, then there is always FIOS.
Competition is a GOOD thing !!!
Why not explain what you are talking about, with maybe .... oh .. examples ???
Are you trying to say that in 1995 some company was PXE booting entire Win95 images over a 10 megabit network, and THEN streaming apps onto it ??
If they were, do you think perhaps the technology was possibly not yet robust enough to accommodate the technique ??
Feel free to use buzzwords in your reply, but CONTENT and facts would be nicer.
Duhh ... Did you notice that NetApp integration is BUILT-IN to XenServer ?? Anyway, did anyone read the Press Release far enough down to learn what Platinum is ?? Well, I'll explain just in case.
Platinum includes PROVISIONING SERVER (from the Ardence acquisition.) What this buys is that a SINGLE OS IMAGE can be STREAMED over the network to the VMs hosting the desktops. Therefore, instead of housing 1,000 9GB desktop images, they can store ONE 9GB XP or Vista OS image and STREAM it over the NETWORK via PXE boot to an awaiting DISKLESS VM !!
If that were not enough, they can then stream the apps INDEPENDENTLY of the OS through the old Citrix technology XenApp (AKA Presentation Server), thus DECOUPLING the apps from the OS. This not only spares you DLL HELL, but means only a SINGLE OS IMAGE needs to be patched, scanned for viruses, etc. If an upgrade breaks something, it can instantly be rolled back just by rebooting the guests.
Basically, that means 1,000 desktops can reside on 35GB of disk. Think that pisses off EMC ???
I am guessing that is what Ashlee meant by storage LIGHT.
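The storage math in the comment above is easy to sanity-check. A minimal sketch, using the figures quoted there (1,000 desktops, a 9GB OS image) plus an assumed per-VM write-cache size, which is my illustration and not a Citrix number:

```python
# Back-of-envelope comparison: one full desktop image per user versus a
# single streamed master image plus small per-VM write caches.
# DELTA_GB is an assumed per-VM delta (~25 MB), not a vendor figure.
DESKTOPS = 1_000
IMAGE_GB = 9
DELTA_GB = 0.025

one_to_one = DESKTOPS * IMAGE_GB                  # 1,000 near-duplicate images
single_image = IMAGE_GB + DESKTOPS * DELTA_GB     # one master + per-VM deltas

print(f"one image per desktop: {one_to_one:,.0f} GB")
print(f"single streamed image: {single_image:,.1f} GB")
```

Which lands right around the 35GB figure quoted above, versus 9TB the one-to-one way.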
Intel did this a year ago. Chipmakers somehow delude themselves into believing they are "business partners" with end-user customers, and had hired "Business Development Managers" for verticals like Health care and Content Creation.
Spend more money on engineering and you won't NEED an army of salespeople to convince IT your product is better. They're smart people. They'll figure it out.
Generalist BDMs who REALLY can explain architecture and roadmaps and strategy are helpful and necessary, but "specialists" who do nothing but buy lunch are a waste in this competitive market.
I'm still not sure most people understand how this works. It is NOT a cluster, and at the highest level (System Fault Tolerance) there is no concept of FAILOVER. FAILOVER assumes some latency (up to and including re-starting the guests and then the applications without any real regard to their respective QoS.)
Marathon is running one instance of an OS on two guests SIMULTANEOUSLY (Lockstepping) which is how Tandem and Stratus do it (but with proprietary hardware.)
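The lockstep idea described above can be reduced to a toy sketch: feed an identical, deterministic instruction stream to two replicas and compare results at every step. If one replica dies, the survivor already holds identical state, so there is nothing to "fail over." This is purely illustrative; it is not Marathon's, Tandem's, or Stratus's actual mechanism:

```python
# Toy lockstep execution: two replicas apply the same deterministic operation
# stream; outputs are compared step by step. Divergence is detected instantly,
# and either replica's state is authoritative at any moment.
def run_lockstep(program, state_a, state_b):
    for op in program:
        out_a = op(state_a)
        out_b = op(state_b)
        assert out_a == out_b, "replica divergence detected"
    return state_a  # both replicas hold identical state

state_a, state_b = {"x": 0}, {"x": 0}
# Three identical deterministic steps: increment x and return its new value.
prog = [lambda s: s.update(x=s["x"] + 1) or s["x"] for _ in range(3)]
final = run_lockstep(prog, state_a, state_b)
print(final["x"])  # both replicas reach x == 3
```

Contrast with failover clustering, where the survivor must restart the workload from scratch or from a checkpoint.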
Data Centers don't always have a lot of choice in the applications they buy or the quality thereof. I think it is great that a company can make crap run flawlessly. Face it, there is a lot of crap out there, and aside from coding your own (which is usually cost-prohibitive) all you can do is apply bandages. If someone makes a quality bandage, I think they should be applauded.
BTW, Virtualization is one of the BEST concepts ever to hit IT, and I think history has already proven that. Now it's a matter of making it open and ubiquitous.
Xen is helping to do that. Marathon is making it more viable for mission-critical apps.
A Tandem for well under $100K ?? The reason this doesn't run on VMware is a combination of "closed source" and millions of lines of old code. Xen was a natural for these guys, and finally, shops that already have locked-in contracts with VMware have a reason to look at Xen.
There's a HUGE difference between "High Availability" and "Fault Tolerance," which is what these guys offer (and let's face it, if you're consolidating 10 servers down to one, having to buy a spare is downright CHEAP for what you get).
I think some readers are missing the point. This is not FAILOVER. This is KEEP ON RUNNING .....
HP has always had good support for VMware and MSFT Virtual Server (who cares?) with the ProLiant Essentials add-ons to SIM. Actually, they fill in a lot of gaps (P2V, V2V) that XenServer has in their Citrix offering. You'd have to go to third party products and pay more than what HP charges for such things as migrations, etc.
I understand flash drives can die, but I suspect that these will be limited to booting the Xen OS, and will not otherwise have a lot of I/O.
Granted, booting off a Flash drive is no incredible feat. But don't forget that before Citrix forked over such a large chunk of change to fill its server virtualization gap with XenSource, it had filled in a WORKLOAD DELIVERY gap with the acquisition of Ardence a year prior.
With the Ardence product (now dubbed Provisioning Server) .. [Why not XenProvision ??] they have a game-changing solution. While Provisioning Server cannot yet stream the XenServer platform to hosts, users can go absolutely DISKLESS by booting XenServer from flash drives, and STREAMING guest instances over the network.
In most cases, they can use a SINGLE IMAGE of a guest OS to stream HUNDREDS of OS instances, and workloads from shared storage devices to server farms that have NO INTERNAL DISKS !!
Bad news for the likes of EMC (and granted, HP, IBM, HITACHI et. al.) when data centers don't need to store (and patch, and update) hundreds of one-to-one images.
Good news to the carbon-footprint-worrying crowd who can stop spinning (and cooling) a pair of mirrored 15K disks in every host in their racks.
Virtualization may be getting increasingly commoditized, but Citrix knows a thing or two about streaming stuff over networks, and regardless of whether Xen or Hyper-V end up as the chosen hypervisor, this is still a real solution to an enormous problem.
.. as well as a tendency to play rather loosely with facts vs. your opinion. The market seems to have chosen HP as the preferred vendor of blades (or so the IDC numbers would indicate.)
Your tendency to use disparaging adjectives like "toy" and denigrate HP's engineering (which apparently has resulted in pretty good power numbers in real-world tests) would seem to indicate that you may have some vested interest in promoting IBM's less popular platform. Employee ?? Reseller ??
IBM has released a number of Youtube videos which were pretty ludicrous in their attempts to disparage HP's kit. So far, I have not seen such retaliatory behavior from HP.
I do not sell physical servers for a living, so I have no bets on either (or Dell for that matter.)
In the end, the market will decide, and considering the proprietary nature of blades, and their "stickiness" in data centers, I understand the desperation partisans feel.
However, until the day I am a better engineer than the boys in Houston, I would not criticize nor second-guess their designs.
As I remember, HP gave customers nearly 18 months to transition from P to C class. Maybe you can order them, but they are end-of-life because HP couldn't put current technology in an old design (and neither can IBM, despite their claims.) Backward compatibility doesn't count unless you can actually turn the blades on !!
To quiesce an unneeded power supply is not the same as "turning it on and off," any more than a TV is ever actually "turned off."
Granted Dell may not do the extensive testing of an IBM or HP, but "untested" is laughable.
Dell poached a guy with intimate knowledge of the c-class design (whatever happened to non-compete contracts?) They probably did a pretty good copy.
I wonder what their strategy is to combat IBM and HP in terms of management software. HP in particular is pretty good, especially at accommodating virtualization.
Between VMware and Xen, physical servers are becoming ever so much less relevant as each day passes.
Why doesn't MSFT just buy Citrix and call it a day ?? It seems all the rage these days for the big boys to own some 'open source.'
They could do it by proxy if they bought Citrix (and left them intact.)
It might wreak havoc on all the customers and channel partners who love the squeaky-clean and scandal-free Floridians, but maybe some of that mojo would rub off on Redmond.
Are they waiting for Citrix to gobble up Virtual Iron first ??
$500 poorer for having purchased the suite of books, I'm convinced that anything of value from ITIL 2 has been totally obscured by the dense, chart-and-graph-laden, obtuse pontificating that makes up these new volumes.
Anyone clever enough to find direction specific enough to actually IMPLEMENT anything from the new version will surely be earning their consulting fees.
I can't wait to see what the tests will look like !!! I suspect we'll see far fewer folks lining up for Foundation Certs.
VMware is not well-liked by the channel. Citrix has a real opportunity here on both price and performance with their Xen-based products. SMB is not as risk-averse as large enterprises.
Old-school companies will continue to buy 1st generation virtualization, but the more nimble smaller companies already know and trust Citrix.
When a company is run by spreadsheet, they are willing to toss out the very people who are IN THE PROCESS of making growth possible. HP is canning Blade and x86 Server Sales Specialists who are the very ones responsible for the 91% growth. When was the last time you saw an HP-badged Systems Engineer ??
By focusing on the top-tier accounts and leaving the rest to the channel and outsourced support HP risks losing the critical mid-market where the growth is. They would rather hire inexperienced (cheap) help while jettisoning experienced knowledgeable employees who are costly to maintain.
Ultimately, the customer, if they get ANY love from HP, will get it from out-of-college Spanish majors who work for peanuts and know next to nothing.
Hopefully, they will give some support to the VARS who are expected to fill the void. Only time will tell.
It seems a pretty good match for companies that care only about price. I suspect the IBM and HP customers will stick with VMware. If they go for a Xen-based approach, it's more likely they would mitigate risk and look to a Citrix who have a little track record and employees beyond a dozen or so VC-funded Vice Presidents.
Uhhh .. Excuse me ?? Virtual Connect is NOT a switch. HP is correct that it will connect to any NPIV compatible SAN switch (which is ALL the major players.) VC is a PASSIVE device and neither adds to nor subtracts from protocols (in either Ethernet or Fibre Channel flavors).
Also, the whole POINT of these "Virtual I/O" devices from both IBM and HP is to eliminate 1-1 wiring (which can only be done with Blades because they talk across a Midplane.)
Granted, IBM and HP (and everyone else) have proprietary blade chassis designs, but these essentially lock out upstart "Virtual I/O" vendors who MIGHT have a pretty good story in the rack mount space.
They both have pretty good stories (I can't wait to see what Dell says.) HP has been out in the real world longer, so IBM will have to play catch up (and considering the market share drubbing they've gotten from HP, it had BETTER be more than a "me-too" product).
What was the last NEW server vendor to make more than 1% market share? There have been numerous startups to try.. Like Fabric7 ?? Belly-up despite INCREDIBLE technologies.
IT wants to mitigate risk. They don't do that by relying on less than Tier-1 hardware for their platforms (unless they're Google ..)
Repeat - It doesn't matter how good your solution is if nobody will buy it !
The problem with these products is that they need to be adopted by the server vendors. If HP / IBM /Dell don't integrate this into a blade, it doesn't play because in blades, i/o communications take place across a proprietary midplane.
That's not to say a Dell couldn't jumpstart a VC-similar product by partnering, but chances are they are too far along in their C-class clone to integrate something like that quickly.
It may be true that some data centers cannot take advantage of the density allowed by blades. However, they will go for all the density they can, and blades offer more compute power per square inch than rack servers.
Also, sharing of components like power supplies and fans allow blades to require LESS overall power and generate LESS overall heat than traditional servers.
Combine that with power management such as HP offers, where procs can be dynamically scaled back based on application requirements (or lack thereof) in conjunction with pre-set limits per rack, and it is clear blades (because of intelligent enclosures and software/firmware) have advantages over dumb servers in dumb racks.
I don't see any Tier-1 vendor going with Blades only, but that does not mean they are selling because they are cool or sexy. IT heads are a little too smart (and way too boring) to make decisions based on sexiness.
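The per-rack power capping mentioned above can be sketched as a toy controller: measure each blade's draw, and if the rack's total would exceed its preset limit, scale everything back proportionally. The function name and wattage figures are hypothetical illustrations, not HP's actual firmware interface:

```python
# Toy per-rack power cap: throttle all blades evenly when total demand
# would exceed a preset rack budget. All names and figures are illustrative.
def apply_rack_cap(blade_watts, rack_limit_w):
    total = sum(blade_watts)
    if total <= rack_limit_w:
        return blade_watts                       # within budget, no throttling
    scale = rack_limit_w / total                 # proportional scale-back factor
    return [round(w * scale, 1) for w in blade_watts]

draws = [450, 420, 480, 410]                     # measured per-blade draw (W)
print(apply_rack_cap(draws, rack_limit_w=1600))  # 1760 W demand vs 1600 W cap
```

A real implementation would throttle by stepping processor P-states rather than scaling wattage directly, but the budgeting logic is the same shape.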
Cable reduction is a big deal, especially with SAN. Shared PSUs and fans also make sense. The real point (at least with blades from IBM, HP and Sun) is management and instrumentation that cannot be replicated with traditional servers in sheet-metal racks.
Virtualized I/O, headless, automated deployment, "hot-sparing" all rely on proprietary infrastructure.
VERY good for the vendors who win. VERY bad for the competitors who lose.
Customers give up easy switching between suppliers, but gain a lot from R&D which can only be spent in making vendor-specific features.
I'm not sure HP is counting on the business of "Cobble-your-own" PC types to make its revenue goals.
That being said, when was the last time you cobbled your own printer or scanner?
I've had Laserjet IIIs I had to take outside and shoot in order to replace. Likewise, I gave up my 15-year-old HP scanner ONLY when I could no longer fit the ISA SCSI card in any new MOBOs.
Anyway, the downside of selling Consumer stuff is that it dilutes HP's image as an Enterprise-Class player like IBM, Sun or EMC (when, in fact, HP is trumping them in growth.)
IBM was wise to dump the low-end off on Lenovo and concentrate on software and services where the real margin is. Maybe it's time for HP to spin off the cameras, inkjets and Walmart shelf-candy, and get taken seriously.
And Ashlee Vance was about the ONLY reporter I can think of to believe in HP after the Compaq acquisition. When the WSJ, NYT, Ziff-Davis, and the rest of the herd sheepishly denounced the merger, ONE MAN stood alone and saw the value.
The Reg (and Ashlee in particular) should be applauded as the only outlet that saw "beneath the covers" and refused to follow slavishly with the rest of the Media (and Wall Street.) If you trace back with Google (and you can), everybody was applauding Dell (then the Wall Street darling) and doubting the possibility of a combined Compaq / HP. The naysayers were sure the merger would fail. Sure, it took a ride through the Carly days to emerge a success, but the fundamental concept proved a good idea. Nowhere to go but up !!
Ashlee showed guts to buck the tide (and I hope he bought some stock !!! ) He deserves some reward for courage (as well as insight.)
And HP has been gobbling up software companies like gumdrops. And don't forget, when Ashlee referred to a "Non-Stop" blade, he's talking Non-Stop-KERNEL as in Tandem. If I had my life depending on a server, I'd want it to be a Tandem design running NSK. All those stock exchanges and telcos and power grids can't be all wrong.
It's not like HP and IBM have jettisoned THEIR rackmount and pedestal servers. Dell is just touting its strongest card until it can push a C-class lookalike out the door.
If they are pushing this line, it is probably to prepare the shareholders for a resounding failure when they find out it takes more than a rectangular cabinet to make a Blade System (or Center, if you prefer.)
Dell is just not an Enterprise-class designer, and that's what it takes to do blades right. HP and IBM (and even SUN!!!) have the engineering wherewithal to create a proprietary standard that customers can trust.
Dell is still trying to push the supply-chain story against an increasingly complex IT environment.
When I read about their intention to fire "overpaid" employees and replace them with lower-wage workers, I decided that the called-upon boycott was justified.
I'm sure the fact that I did not buy a 60" set from them didn't kill their quarter (actually I didn't buy one from any of their competitors either ..) but I hope some of their woes are caused by average working joes choosing not to support a company with such policies.
I wonder what kind of bonuses the EXECs brought home ....
There's a lot of truth to what you say, but there's always room for a paradigm shift. Maybe this time around, energy costs will be the impetus for taking an extra power supply off everyone's desk.
There's really nothing to lose from challenging this model, which has remained basically unchanged since around 1981. Maybe Corporate America will finally take the "personal" out of PC and give everyone "smart terminals" to save a couple of bucks.
Ashlee and you are right, we've been down this road several times without anything catching on, but never underestimate the power of corporate greed. When the CFOs believe they can save a few bucks a year from each user by taking away something personal, they will JUMP at the chance.
I have nothing against Sun. They do some brilliant stuff (despite the fact they are run by software guys who don't see the need for RAID controllers because SOLARIS can do that in SOFTWARE....)
I'm not suggesting that they designed the 6000 specifically to address shortcomings in the 8000 series. Hopefully they made this road map clear to their customers, but they sure did not publicize it much before the announcement.
As for IBM's and HP's designs, obviously longevity has a role in maturity of support. Both IBM and HP (from Compaq) have a long history of providing "commodity" x86 servers, specific skill sets that Sun lacks. They both have long histories of partnerships with OS vendors (beyond their own Unix flavors) and server-specific management features (especially software tool sets) which are necessary when deploying servers in volume and around the globe. Fitting into an IBM or HP infrastructure is one thing, but reporting up to a Configuration Database (as ITIL suggests is a best practice) can only be done with MIBs that reach deep enough into the hardware to provide such detail. Proprietary they may be, but both IBM and HP spend a fortune developing their respective management tools, and Sun and Dell do not. Perhaps CIOs who make purchasing decisions don't care, but when Admins in the trenches who want to get home on time have a say, they will pick a server with a robust management infrastructure.
I'm not sure you really understand what Virtual Connect provides ... NEMs are just aggregation modules. Please read up on what VC actually does. It truly is a unique value proposition (and even IBM has to cobble together a bunch of Cisco gear and shims to make anything similar work.) In the HIGHLY unlikely event that an I/O module should fail, there are sufficient redundant ports on any given blade to accommodate this, and with VC, it is pretty simple to migrate any specific workload to a "hot spare" blade to repair the module. The MAC and WWN belong to the ENCLOSURE SLOT and not the physical blade.
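The slot-owned addressing idea is the whole trick, and a toy sketch makes it concrete: the enclosure slot holds the MAC/WWN, so whatever blade lands in that slot inherits the identity. The addresses and serials below are made up for illustration; this is a conceptual model, not HP's actual VC data structures:

```python
# Sketch of slot-owned addressing: the enclosure slot, not the blade,
# owns the MAC/WWN. Swap the hardware and the network identity stays put.
# All addresses and serial numbers below are invented for illustration.
slot_identity = {
    1: {"mac": "00:17:A4:77:00:01", "wwn": "50:06:0B:00:00:C2:62:00"},
    2: {"mac": "00:17:A4:77:00:02", "wwn": "50:06:0B:00:00:C2:62:02"},
}

def blade_identity(slot, serial):
    """Whatever physical blade (serial) sits in a slot gets that slot's addresses."""
    return {"serial": serial, **slot_identity[slot]}

old = blade_identity(1, serial="CZJ123")   # original blade in slot 1
new = blade_identity(1, serial="CZJ999")   # replacement blade, same slot
print(old["mac"] == new["mac"])            # True: the SAN and LAN see no change
```

That is why a failed blade can be swapped without touching SAN zoning or switch configuration.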
I have no data to suggest anything, but I have seen the real estate on half-height blades that suggests removing two hot-plug drives and a BBWC module could conceivably leave room for 16 DIMM sockets in that form factor. Perhaps it WOULD be impossible to cool, but this is all hypothetical.
As far as cost-effective I/O modules are concerned, it appears to me that at least when you look at HBAs, the blade form factors cost HALF what a stand-up PCI card costs. In 25 years of working with servers, I/O component failure has probably been one of the LEAST significant issues I have witnessed. Again, give Sun points on an RFP, but not enough to necessarily seal the deal as a "must-have" (unless they write the RFP themselves.)
I do maintain that a portfolio of half and full height blades gives a customer more choices to accommodate specific tasks.
Sun will obviously sell some of these into their installed base, and prevent some further erosion of their x86 business. They certainly deserve that for their efforts.
No, the 8000 was NOT a conventional blade, but it represented Sun's entry into the market a year ago. When it did not go well, they came up with a more viable (incompatible) alternative. Anyone purchasing blades would naturally be wary of this strategy when investing in infrastructure.
Bechtolsheim is a smart guy, and the 6000 is a nice design, but they still lose on overall density. Not all blade customers are running full-blown virtualization clusters. Doubtless, HP will be able to shoehorn 16 DIMMs into their next generation (AND cool them.) If they went diskless, they could probably do this in the half-height models and just boot from SAN.
Sun did some nice I/O design, but HP's Virtual Connect is pretty unique. IBM and HP each own approximately 40% of the blades market, and have each been in it for over five years.
Also, they are both on second-generation designs, having learned a few things from their earlier attempts. Sun may have an advantage for a couple of months for hard-core VMware shops that want bargain memory. In the long run, it will probably be less expensive to stick with IBM or HP, pay a premium for 4GB DIMMs and take advantage of the mature manageability tools and robust infrastructure (switches, SAN certification testing, etc.)
Customers are likely to use Sun's checklist item as pricing leverage against IBM and HP, but very few server RFPs attach 90% of their weight to memory capacity.
Tier-2 vendors can buy blade products from Taiwanese design houses, and maybe sell a few to Tier-3 customers that HP and IBM (and Sun) don't want to bother with anyway. Any enterprise of reasonable size and stature will likely have a whole team of decision makers add their $.02 so that no single feature will make or break the adoption of a blade standard.
No room for one-trick ponies, when you bet your business on a given technology.
Hey Dave - Does that IBM BladeCenter come with a spell-checker ?? <g>.
Anyway, to your point, Blades are an investment as much in the vendor as in the technology. No sense buying a bunch of chassis, just to have the vendor decide they're not getting enough market share to continue producing blades to fill them.
For all their advantages, they are, and will likely remain, a highly proprietary architecture. IBM and HP have pretty much divided up the market. That doesn't mean that Sun won't survive, but I would hate to have made the decision to buy a bunch of 8000 chassis only to have the 6000 come out less than a year later.
Dell has already exited and re-entered the market only to be allegedly planning a whole new architecture in the fall.
As for DIMM density, I assume everyone will eventually provide the maximum allowed by chipsets, but what happens when INTEL dumps Fully Buffered DIMMs (which is the technology, with associated chipset, that allows that killer density)?