When HP talks about "blade everything", it means freaking everything. The hardware maker has pumped out a blade server running its NonStop operating system and software of all things. The NonStop NB50000c BladeSystem will fail to shock loyal HP customers. We spotted the server on HP roadmaps almost a year ago. HP has been …
Re: you're obviously paying an awful lot
There are various reasons why the NonStop systems are beloved of large financial institutions, among them:
1) You never had to (or have to) take them down once a month or so to reclaim memory leaked by the OS.
2) They are the gold standard for transactional integrity so you never had to tell anybody "Sorry about your missing 10,000-share trade, we don't seem to have any record of it that I can access at the moment."
3) Programs written in the 80's are still earning their keep. Since Tandems have traditionally been back-office systems, software investments get amortized over ridiculously-long lifetimes because they don't get rewritten every four or five years as a result of language wars.
4) Tandems were designed to make their owners happy, not their programmers: the exact reverse of Unix. As a Tandem programmer since 1979, I know whereof I speak: some of the Tandem utilities and the native command shell are downright clunky compared to Unix. I also recall reading an early Unix paper bragging that a development machine had stayed up for a whole week. Tandems usually stay up until a new OS rev has to be cold-loaded (months, sometimes years). ATM networks love this.
5) I still maintain venerable COBOL transaction-processing code that is not only well-amortized, but when plugged into the Tandem architecture provides NonStop, ACID database updates with absurdly scalable performance that nobody else can touch. On the multi-node Tandem architecture, systems don't failover to each other; transactions fail, get backed out and backup processes distributed across various processors pick up the slack.
Although COBOL isn't my favorite language, it's the end result that matters: mission-critical enterprise processing reality, not just the hype, and rock-solid since at least the mid-eighties. Sometimes I get to write Tandem stuff in C, but for the most part I indulge in shell scripting, Tcl/Tk, Python etc. at home. But I digress...
Getting back to the putative subject of this comment (paying an awful lot), the bottom line is that Tandem provides great TCO and a solid, scalable base for mission-critical enterprise systems. As for the alternative, you can duct-tape ten million pigeons together but it will never quite replace the 747.
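The backout-and-takeover behavior described above can be illustrated with a toy sketch. This is not Tandem/NonStop code, just a minimal Python illustration of the pattern: a transaction that fails on one process is backed out (the pre-transaction state is restored) and retried on a backup process, so the work completes rather than failing over whole systems. All names here (`Process`, `run_transaction`) are invented for illustration.

```python
class ProcessFailure(Exception):
    """Raised when the process executing a transaction dies."""

class Process:
    """Stand-in for one member of a Tandem-style process pair."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def apply(self, db, update):
        if not self.healthy:
            raise ProcessFailure(self.name)
        key, value = update
        db[key] = value

def run_transaction(db, updates, process_pair):
    """Try each process in turn; on failure, back out and retry on the next."""
    for process in process_pair:
        snapshot = dict(db)          # begin transaction: remember prior state
        try:
            for update in updates:
                process.apply(db, update)
            return process.name      # commit: all updates applied atomically
        except ProcessFailure:
            db.clear()
            db.update(snapshot)      # back out: restore pre-transaction state
    raise RuntimeError("all processes in the pair failed")

# Primary dies mid-transaction; the backup picks up the slack.
db = {"balance": 100}
pair = [Process("primary", healthy=False), Process("backup")]
survivor = run_transaction(db, [("balance", 90)], pair)
```

The point of the sketch is the ordering: the transaction is aborted and the database restored to a consistent state *before* the backup retries, which is the ACID guarantee the comment describes.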
Where's HP's party people at?
Well? It must be some club.
FREEdom? Are you kidding me?
$300K for a chassis and two blades with only one itanic chip each? Don't you think HP could have put two itanic chips on each blade?
1) What happened to commodity intel pricing?
2) I bet you have to trash the whole system when Tukwila comes out in 6 months.
3) Help, I'm still using SCOBOL and TAL and I can't get out! (dammit)
check the specs...they put the defective chips in the Non-Stop blades
All Itanic chips have 24MB of L2 cache.....when they have a bad sector they turn off 25% of the cache. Sad that the $150K per blade solution has the defective chips.
>> $300K for a chassis and two blades with only one itanic chip each? Don't you think HP could have put two itanic chips on each blade?
>> 1) What happened to commodity intel pricing?
Did you *read* the article???
You're not paying for the tin here, you're paying for the NonStop software and lock-step architecture. Take out the NonStop components and you could have a 2-blade c7000 with BL860c blades that have the same type and number of RAM/CPUs for about $25K.
You should be comparing these costs to those of IBM's System z mainframes, *not* to white box intel servers...
Need not to stop - try NonStop
Tandems, or HP NonStop systems as they are known today, have been amazing ever since the first 16-bit systems back in the '70s. Of course banks, trading companies, etc. love them, but there are others who also need uninterrupted 24x7 operations: manufacturing loves NonStop systems too. They run warehouses, just-in-time manufacturing, robots in bank vaults that no person is ever allowed into (yes, there are such), etc. It's unfortunate how short-sighted the IT industry often is - as John Benson says, some systems I designed in the '80s are still running and scaling up with new systems without any code changes. User interfaces may and will change, but the core components don't have to. As for SCOBOL and TAL (Tandem languages), both are still very versatile - TAL (based on Algol syntax) is what C, done right, should have been. $300K+ is really not much when you talk about the whole business; it just looks big in one check, but compared to a whole IT project over xx years? Often the best ROI you may get. And security - try NonStop or a mainframe!
>> Programs written in the 80's are still earning their keep.
Mostly because no one can figure out exactly what it does so you don't have any choice but to keep running it!
It all adds up
So you're paying $150K each for a couple Itanics. Where's the news in that? Does the FPU still suck royally?
RE: "....check the specs...they put the defective chips in the Non-Stop blades ...."
Nice to see the FUDders staying true to the old sales tactics - lie, lie, lie! You can purchase Itanium CPUs for the HP Integrity server range in a number of speeds and with different cache sizes depending on your requirements, ranging from 1.4GHz dualies with 12MB cache up to 1.6GHz dualies with 24MB cache.
However, for the BL860c, they currently only support up to the 1.6GHz dualies with 18MB cache, which explains why they are in the NonStop bundle. Nothing at all to do with "defective chips", gremlins, little green men, or other flights of fantasy!
so why did they disable 25% of the cache?
If I hadn't been tweaking old COBOL and TAL code for decades in response to changing business conditions, I'd be inclined to agree. However, I ***have*** been looking at old Tandem programs, understanding enough to modify them, retest and send them back into battle: we're not talking orphan objects here. The complexity usually comes more from years of accumulated real world adjustments than spaghetti coding. I will grant that there must be pathological cases in which people won't go near a critical module for fear of breaking the system, but I don't see those because I don't get hired to work on them.
Would I rather rewrite the old Tandem apps completely? Yes. Would I choose different languages? Yes. Would my clients consider the total cost and associated risk justified? Almost never.
But speculating on variations in application code quality misses the point, which is: the Tandem architecture provides scalability, data integrity and uptime that other systems simply can't touch. When you consider that there has been only one major architectural change in the system since the late 70's (K-series to S-series) apart from the normal Moore's Law upgrades, that's pretty amazing.
Does rehosting on Itanium make a difference? Not to me as a legacy maintenance programmer. Neither did the move from proprietary CISC CPUs to MIPS RISC, or the failed attempt to move from MIPS RISC to the DEC Alpha of blessed memory. Since Pathway and SQL were put into place in the early eighties, Tandems have always programmed the same for the most part. System managers and the people who pay for the hardware care about the hardware changes in order to achieve their required Transactions Per Second capacity but are quite happy that they have never had to suffer through the software equivalent of a forklift upgrade. Tweaks and minor conversions, yes, but never a "bet your business" kind of upgrade.
Of course, I do other stuff on the side as well as on my own time, and I'm sure that truck drivers like to drive sports cars now and then as well.
As far as I know, the Tandem is the only long-lived commercial system designed from the get-go for NonStop operation, data integrity and extreme linear scalability (no SMP "knees" in the power versus CPUs curve). The architecture reflects these baked-in requirements, and the OS and middleware take advantage of them. Once you're used to this kind of engineering, almost everything else looks like improvisation.
Yeah...so why did they disable 25% of the cache on the chip?
Seems to me you would expect Non-Stop hardware to get the best chips