26 posts • joined 6 Jun 2007
Re: Matt Bryant
ZFS doesn't use the extra resources for nothing: it provides end-to-end data integrity, so it can deliver reliability similar to an EMC/NetApp box using all standard components at a fraction of the cost. Since you don't understand ZFS at all, let alone how filesystems work, you should have already known that single-disk performance is not ZFS's focal point; the bigger the number of disks, the more the scaling and reliability of ZFS shine.
I meant NetApp approached StorageTek to buy StorageTek patents, but you very well knew what I meant. So try another route. The facts are clear: when NetApp couldn't get access to the StorageTek patents, they sued Sun on the pretext of ZFS, hoping Sun would simply hand over the StorageTek patents for free. Sun as a company never initiated any patent conversation with NetApp.
"Sun have NEVER made any such statement I have seen" - Nice try. That's the reason I said ignorance reigns supreme in you. If you didn't see something, how are you confident enough to proclaim your lies!!
I think you are a real anti-Sun fanatic. Did you actually look at what those 1600 patents are before claiming they are related to Solaris x86 code? No, most of them relate to fundamentals in computing. And Sun did make the statement that they made them available to any open source code. At least these patents are way more valuable than IBM's patent publicity stunts, which even included patents related to screwdrivers!! Again, the fact is that Sun has never sued any open source product; NetApp did. So keep your conspiracy theories to yourself until you show evidence as of today.
"but happily signed up to allow M$ to sue open source users of the Open Office code in future?" - Oh... sure, you seem to have been the third-party verifier for that agreement. Care to provide a reference, or will you keep inventing lie after lie?
"using high-speed memory as cache in front of disk arrays has been around for years, long before ZFS was even created" - I think someone should really ask you not to try to crack something that's beyond your area of expertise. Flash is not high-speed memory like RAM, and secondly it is persistent, unlike RAM. Different functionality and economics are involved, because you can add 32GB of flash cache at almost negligible relative cost, and ZFS can speed up read/write transactions by an order of magnitude. It's not something Sun had to invent; the fact is that ZFS's architecture allows this speedup with no changes to the architecture itself. BTW, care to do some research and provide a reference here for who else is using SSD as disk cache, instead of hiding behind those kinds of pretexts? It's one thing to put SSDs in place of disks, and another thing to have an architecture in place that can use SSDs where they really shine.
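For readers who want to see what this looks like in practice, adding a flash device as a second-level read cache (L2ARC) is a one-liner in ZFS. A minimal sketch, assuming a pool named tank and an SSD at c2t0d0 (both names hypothetical); it needs a ZFS-capable OS and root privileges:

```shell
# Add an SSD as a level-2 read cache (L2ARC) to an existing pool.
# 'tank' and 'c2t0d0' are example names -- substitute your own.
zpool add tank cache c2t0d0

# Verify the cache device now shows up under the pool:
zpool status tank
```

No change to the pool layout or the applications on top is needed; the cache device can be added and removed while the pool is live.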
"NetApp see a company copying the core code from their product, then open-sourcing it, and then that same company tries to rip them off with a dodgy patent mugging, and you suggest they just ignore it?" - as I said earlier, the real reason NetApp sued is to get free access to the StorageTek patents. And what core code? The court record says otherwise: the patent office is finding that the patents behind the so-called 'core code' have plenty of prior art, and now those patents are on their way to oblivion.
Ohhh, so you did find time to actually look at what Linus is saying, and oh my!! He seems to find ZFS interesting enough for Linux - doesn't that contradict what you have been saying so far? I am sure he made that statement before the Sun/NetApp lawsuit, because if he had known, he wouldn't have made it. A company like NetApp would never open source the code central to their products - their proprietary WAFL code - because Dave Hitz has now declared in court that doing so would jeopardize their existence, since it would allow smaller vendors to offer the same features at a much cheaper entry point. So now you feel the frustration Linus has? He can't get ZFS, and WAFL is a far, far cry.
Your quite large commercial enterprise must be happy with you for saving them money, I am sure - you must be a religious follower of the pledge that nobody gets fired for buying IBM. The companies I mostly work with are not so rich; they don't even have the money to buy NetApp or EMC or Hitachi.
RE: Matt Bryant - Ignorance is SUPREME
Try again. ZFS does not need many more resources than a standard x86 box has; it just needs a tiny bit more memory, so your hypothesis that you can't run ZFS on ANY x86 box is a blatant lie - it's called distortion of facts. But why am I complaining, when nothing else is expected from someone who never tried it and yet claims to possess all knowledge about it? But again, ignorance is bliss.
No, Sun didn't kick off the litigation; they were only continuing the patent discussions that were already ongoing between StorageTek and NetApp. The fact is that NetApp had approached StorageTek with interest in purchasing some of StorageTek's patents, and StorageTek wasn't ready to sell them. Once NetApp saw Sun buying StorageTek, they saw an opportunity to not pay for the patents they had been discussing with the old company and instead go to the courtroom.
"Sun have NEVER made any such statement" - again, since you are supremely ignorant, only GOD can help. But hey, why don't you quote your sources when making some of your blatant and distorted proclamations!
Sun has never filed a lawsuit against anything open source, irrespective of license - that is the fact today, so keep your speculation and ramblings to yourself until you can prove (not merely state) otherwise.
No, NetApp should not be worried about Sun and ZFS; they need to be worried about the smaller vendors building ultra-cheap AND RELIABLE storage boxes, and SSD provides the opening where these boxes can have the majority of the advantages WAFL provides. Yes, SSD isn't cheap, but ZFS's planned usage of SSD (or rather flash storage) is not as primary storage but as an additional level of cache, very similar to how NVRAM is used by NetApp. So it's still very, very cheap and lifts performance by an order of magnitude. Not every application needs the expensive storage arrays from NetApp or EMC, where you have to pay huge premiums for every additional TB you use - but hey!! these are not my words, that's Dave Hitz declaring in court - or you can claim he may be just joking...
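The NVRAM analogy maps onto ZFS's separate intent log (slog): a small, fast, persistent device that absorbs synchronous writes, much as NVRAM does in a filer. A hedged sketch, with hypothetical pool and device names, requiring a ZFS-capable system:

```shell
# Attach a mirrored pair of small SSDs as a separate ZFS intent log (slog).
# Synchronous writes hit the SSDs first, similar to NVRAM in a filer, and
# are later flushed to the main disks.
# 'tank', 'c3t0d0' and 'c3t1d0' are example names.
zpool add tank log mirror c3t0d0 c3t1d0

zpool status tank   # the devices appear under a 'logs' section
```

Mirroring the log devices is a design choice, not a requirement: the log holds not-yet-committed writes, so protecting it is usually worth the second SSD.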
But you are quite happy in your own little world of Itanic, HP-UX, EVA... be happy. There is a big world of new Web 2.0 companies unfolding before your eyes, and these companies don't need EMC or NetApp for their core business; cheap, reliable storage is all they need. That is where ZFS shines, but you wouldn't even pretend to understand that anyway...
So yeah, you keep hoping Sun would disappear one day, HOPE is what keeps us afloat.
RE: Matt Bryant - Grapes are sour indeed
How unwanted is ZFS? Wouldn't that be the most plausible recourse when you don't get those features for free any other way? FreeNAS 0.7 with ZFS is still in beta; obviously the most downloaded would be the released version. ZFS is inevitable where needed, biased judgments notwithstanding.
In relation to the Sun/NetApp suit, why is it important for IBM/HP to be in the picture? You can look at the state of the suit: all the related NetApp patents are being re-examined, one already rejected, while NetApp hasn't yet challenged any of Sun's 22 counter-patents. The fact is, instead of innovating further NetApp chose to engage in a lawsuit and obviously expose themselves to Sun's much more extensive defensive patent portfolio. The reality is, Sun is one of the few vendors who have publicly pledged not to use patents against open source; in fact they paid a huge sum of money to Kodak not long ago over related Java patents - so it's quite obvious who the patent troll is. NetApp thought they would get a quick injunction against ZFS when they started the lawsuit; now they are the ones facing a protracted court battle (Ref: read Dave Hitz's court submission). Sun couldn't care less. And now, with flash-based SSD, ZFS is going to provide the majority of the features that NVRAM-backed WAFL provides, at a tiny fraction of the cost. It's clear why NetApp started all this - they thought the courtroom was the easy way, instead of innovating further and raising the bar.
Re: Matt Bryant
Mind you, whether Sun is ripping NetApp off is still under court trial, and if you go by the latest court updates, NetApp has already lost one patent, and the other five are all under reexamination based on prior-art submissions. We don't yet have complete information on whether the claims patented by NetApp will prevail. It might well be that NetApp only did the first commercial implementation of ideas which were already known and that ZFS derived from those same ideas. In addition, NetApp has also been alleged to have infringed 22 Sun patents, and they have to defend against those claims as well. Basically, your zealous attitude towards ZFS is quite apparent, because no other commercial OS has such a feature bundled for free, including Linux and your beloved HP-UX. I know you would bring up Linux + a volume manager, which is understandable, since logic and Matt Bryant don't go together.
Simon's ZFS system is a great example of how a home fileserver based upon ZFS can be built easily. Now, since you know nothing about ZFS, or as long as you pretend it's not worthy of your attention, you may well be very happy buying a $50 NAS box and exporting it as FAT32. There are plenty of advanced home users who may want to aggregate multiple disks under one ZFS storage pool with mirroring, sparing, compression and failover all bundled in, create ZFS filesystems in the pool and export them for use as NFS or CIFS shares. Snapshots and incremental backups may not be what you need, but obviously many would appreciate them. I am also sure that you don't know much about FreeNAS either, because FreeNAS 0.7 now has ZFS integrated, which is quite fantastic - again, they are complementary, not in competition.
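To give other readers an idea of what 'all bundled in' means in practice, here is a minimal sketch of such a pool; the pool, disk and filesystem names are hypothetical, and it needs a ZFS-capable OS with root access:

```shell
# Aggregate four disks as two mirrored pairs plus a hot spare.
zpool create home mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 spare c1t4d0

# Carve out a filesystem, enable compression, and share it over NFS --
# all properties are set per-filesystem, no separate volume manager needed.
zfs create home/media
zfs set compression=on home/media
zfs set sharenfs=on home/media
```

That is the whole setup: no partitioning, no fstab entries, no separate NFS export file.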
For other readers, there is a pretty good article here, great read for people who understand:
Why ZFS doesn't stand a chance
Why? Because Matt Bryant doesn't want to know about it. Ignorance is bliss.
Why isn't ZFS needed? Because, to quote Dave Hitz of NetApp:
"...because ZFS is open-sourced, it lowers the barrier to entry for startup companies to bring products incorporating ZFS technology to market and start competing with NetApp. Indeed, because Sun is distributing ZFS at no cost, it dramatically lowers the product development costs for any company, not just startups."
Maybe HP's open-sourcing of AdvFS will actually help the case for ZFS by turning up more prior art on the claims NetApp's patents are based upon.
In regard to SAMBA vs. ZFS, anyone with a sane mind would know that they don't compete. ZFS is a filesystem/volume-management entity, whereas SAMBA is a piece of software for file serving over the SMB/CIFS protocol. There are instances of SAMBA over ZFS in production environments; a Google search on 'samba zfs' has as its first hit:
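As an illustration of how little the two overlap, serving a ZFS filesystem through SAMBA is just an ordinary smb.conf share entry pointed at the ZFS mountpoint (the share name and path below are made up):

```ini
# Hypothetical /etc/samba/smb.conf fragment exporting a directory that
# happens to be a ZFS mountpoint -- SAMBA neither knows nor cares which
# filesystem sits underneath.
[media]
    # /home/media is the mountpoint of the ZFS dataset home/media
    path = /home/media
    read only = no
    browseable = yes
```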
Simon Breden may have used a $900 system, but chances are it would work on most systems as long as the network card and disk controllers are supported. My $200 eMachines system with a Sempron, 1GB RAM and a 320GB HD worked fine after I plugged in the network card that came with my DSL kit from AT&T. I plugged in two more 500GB drives and two case fans to keep them cool, and I have a pretty good home file server for the price. It needs only 80W idle and < 100W when loaded. I went ahead and bought an 8GB CF card for $35 and a CF/IDE adapter from eBay for $10 including shipping, and installed Solaris on it. It has quite a few NFS & CIFS shares (with SAMBA). Solaris is still bloated by a large margin compared to Linux and has no appliance-like packaging, OK, but I can easily spare 8GB any day to take the full bloated install and just use the ZFS features for the functionality I get with them. Now, if only I could get a NetApp filer or the Veritas suite, I would dump ZFS in no time.
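For what it's worth, the snapshot and incremental backup workflow on such a box is a couple of commands. A sketch with hypothetical dataset, snapshot and host names:

```shell
# Take periodic snapshots of a dataset (names here are examples).
zfs snapshot home/media@mon
zfs snapshot home/media@tue

# Ship only the delta between the two snapshots to another pool or host.
zfs send -i home/media@mon home/media@tue | ssh backuphost zfs receive backup/media
```

Snapshots are copy-on-write, so they cost almost nothing until data changes, and the incremental send moves only blocks written between the two points in time.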
ZFS is reaching people who don't want to pay for NetApp/Veritas. More add-ons are on the way; here is another ad for ZFS:
AdvFS in Linux
I won't bother replying to Matt Bryant because he is so full of crap.
Back to the subject of discussion: it appears that HP actually made an earnest effort to port AdvFS to HP-UX, along with the much-admired TruCluster, which people using Tru64 actually seem to prefer a lot over Veritas. Unfortunately, political issues aside, it's one thing to make code changes and completely another to actually productize a complex filesystem and clustering product. Back during the Compaq merger HP promised to fold these features into HP-UX, but they failed after more than 3 years of money and effort. There are conspiracy theories that Veritas offered them a better financial deal in terms of the money and effort required to actually productize the code; the fact, however, remains that customers have to spend extra to get the Veritas software, as an alternative to HP-UX itself having an integrated, well-engineered and truly admired clustered filesystem - basically shifting the burden to the customers.
Anyway, as far as the actual value of the AdvFS source release is concerned, by the admission of HP's own engineers it's unlikely AdvFS on Linux will ever be a reality. They hope the ideas, documentation and algorithms will provide insight for the future development of enterprise-grade filesystems in Linux (maybe in 2011-2012). This is very different from Linux actually getting an enterprise-ready filesystem. So the relevance of the source code release is actually zero - it sounds like, "Hey, if you are a geek and want to see how a great filesystem is supposed to be implemented, here, have a look. You want it on Linux? Sure, I can explain how it works here and there... you expect us to productize it on Linux? Forget about it, we couldn't even do it on our own Unix. But be my guest."
All said, kudos to HP for making it free; I wonder why they didn't do it for TruCluster also.
Who cares ?
HP rejected (or gave up on) porting it to HP-UX; that itself speaks volumes about either the relevance of AdvFS today or HP's credibility. Now they want to hurl it at Linux, hoping the Linux community will do the dirty work of actually making it work. The BIG question is, did HP actually make it work on Linux, or did they just release the code from Tru64 Unix under GPLv2 and leave the actual work to be done by the Linux folks? It sounds more like the latter, which does confirm that HP actually thinks the Linux folks are a lot cleverer than the group at home doing HP-UX (no offence intended). Why not just get rid of HP-UX altogether and concentrate on Linux on Integrity? It would actually elevate their status as the true Linux community leader. Transitive is their friend to make sure legacy HP-UX apps would still run on top of Linux and Integrity.
Re: Matt Bryant
Now we are sure I am hammering against an empty brain, but just to confirm my suspicion, let me ask you again: how many HP-UX licenses does HP sell in a quarter? I did hear all your empty boasting about how great HP's products are (and trust me, it doesn't ring a bell in my ear any more). With so much Integrity sales, what's the rise in HP-UX shipments? Do you have a figure? The original topic is always about Solaris, not about the greatest of HP products that we are already accustomed to hearing about from you over and over again; it's just getting monotonous now. Yes, HP is doing great, everything in HP couldn't be better. Now would you mind shedding some light on how rapidly HP is increasing HP-UX licenses? We all know from you that Linux is doing great, Windows is doing great AND Solaris is gone. I think I already know what to expect to hear from you:
- Integrity is rapidly increasing
(let's just pretend PA-RISC doesn't matter)
- SPARC is rapidly diminishing
- Proliant is awesome
- HP-UX is great
- Solaris is shit AND dead
- Linux is awesome
- HP is the greatest Linux contributor
- Entire HP is awesome
- Everything HP makes will conquer the world
- Nothing can be better than HP
- HP built all their x86 from scratch, Compaq didn't exist
- HP-UX needed nothing from Digital Unix
- I just hope Sun didn't exist at all
- Everything in Sun is shit
Anything else ?
Re: Matt Bryant
Here goes the HP marketing bullshit at full swing; you can't really hide your ties to HP marketing however hard you try. Can you go back to the topic and answer why HP-UX is thriving? Do you have the IDC/Gartner data that you so confidently base your rant on (without any pointer - that's the trick, isn't it?) to show how rapidly HP-UX shipments are increasing? It's a good thing that Solaris ships on other vendors' SPARC boxes; unfortunately HP can't claim that. HP-UX does not run on a 'vibrant ecosystem' of Itanic vendors' systems. To make matters worse, even >30% of HP's own Integrity customers opt to run Windows instead of HP-UX, or run Linux, however negligible that may be on Integrity. Paid Solaris licenses still outship HP-UX by a healthy margin. How many HP-UX licenses does HP ship per quarter? I bet the number is less than 20,000 - you call that a thriving OS? Compare that to >60,000 paid licenses for Solaris per quarter. Isn't it funny that your comments finally boil down to how great HP's entire product portfolio is, no matter whether it has any relevance to the topic in question? No, you are not blinded by HP marketing; you are paid to do just that.
Wow, Matt is on a roll here. Of course these are the kind of spammers in the newsposts who rejoice in spewing venom at Sun and celebrating victory for Linux, notwithstanding the fact that these people know shit about anything and have zero contribution whatsoever to the cause of Linux itself.
All they want to say is that Linux has grabbed share from Sun, leaving it in the dust, and they just spew lie after lie. The fact is that Linux is not the reason Sun has lagged; the reason is Sun itself. While Solaris had been the best-known Unix for the 90s college grads, Sun was resting on its laurels selling big hardware and ignoring the x86 market altogether. Would you say AIX and HP-UX are thriving? Not at all. They are only holding on to the enterprise market where their hardware has traditionally been strong; what's their shipment of OS licenses?
To say that Linux does not need ZFS- or dtrace-like functionality basically proves how far from the truth you are, in addition to how little technical knowledge you have. One would wonder why Apple, with all their might, would even bother to spend any energy porting ZFS & dtrace to the Mac. Why would staunch Sun competitors - IBM/Dell/Fujitsu - even bother to spend any energy certifying Solaris x86 on their boxes if Linux was all they needed?
Why would HP list Solaris x86 as a supported OS for most of their x86 servers? Why is it that they do not want to give Sun an edge to claim that their 'irrelevant' Solaris x86 is supported only on Sun hardware? Why would Oracle bother to have their databases available on Solaris x86, a 'completely irrelevant' OS? Linux has to keep improving to remain relevant, and it puts a big doubt on your credibility to say Linux does not need any more features. You have to remember that Linux started at the grass-roots developer level, and Linux will need outstanding features that a common developer can relate to. Just having IBM/NetApp/HP/Oracle et al. doing enough to make sure Linux runs well on their expensive gear does little to improve its appeal to the huge community of selfless individual Linux contributors.
You may be right in saying Sun does not get open source, but your rants discrediting ZFS and dtrace because they come from the company you love to hate only prove one point - you have nothing to do with Linux or open source; you only have a very personal agenda against one company, and that is all. By the way, have you checked the latest status of the Sun/NetApp dispute? Check it out and educate yourself a little before posting about ZFS again - who knows, it might even give you some credibility.
This is more like an iDataPlex competitor aimed at the ultra-high-compute-density market. Each blade can have only 8GB max RAM to feed 8 cores, and little I/O, but you get many of these discrete 2-socket physical servers in little space. Most HPC/Web 2.0 customers probably do not need a lot of memory, so this may work out well for them. HP is charging quite a premium for this kind of density with this solution. While iDataPlex has liquid cooling integrated, it would be interesting to know whether there is any integrated liquid cooling at work here.
"I think you'll find NetApp aren't scared at all, they're just p*ssed off with Sun pinching their work"
Yawn... the true analogy is a misfit granted exclusive rights to ideas that should not have been patentable at all. Now NetApp finds that the cash flow may slowly dry up, with users getting most of the usefulness of WAFL, and a lot more, for free and on any commodity hardware. The days of milking the cow forever without feeding it are coming to a close; just watch how these patents get invalidated one after another. NetApp is scared of competing in the open; they want a sanctuary where no one can come in!
"Like many Sunshiners you also massively missed the point - most customers just aren't interested into thin clients, they'd rather have full Windows desktops because it makes their workers more productive. And if they want to go thin they want thin Windows clients, usually via VMware, not Solaris ones. Screaming that your poo is better than anyone elses just because the guy next to you is selling hotcakes does not make your poo sellable, it just means your product stinks and the hotcakes will sell. If your poo is beta or prerelease poo (vomit?) it doesn't make it any more attractive to the average hotcakes buyer."
That's like howling at the moon. The vast majority of customers, especially the big corporations who use Windows on desktops today, really don't care whether they have Windows on their desktop or a thin client, as long as the operation, maintenance and administrative costs are low. That's where desktop virtualization delivered over thin clients comes in handy. Anyone with half a brain will understand that any desktop virtualization strategy - be it VMware, Citrix or Sun - is about delivering the applications to a desktop or thin client, and if those applications are on Windows, so be it. There are always users who need the whole desktop's power right on the desktop, but there are many others who don't. Nothing about desktop virtualization requires Solaris on the desktop.
"And for the Colonel, I think you'll hp-ux had npars and vpars (hardware and software partitioning) available to buy before Slowaris or AIX had true partitioning."
Very true. But with IBM micro-partitioning being so much better in every way, it just does not matter that HP had nPars/vPars for a long time. Only recently did HP come up with Integrity VM, which I believe is a VMware-like true virtualization solution for running Windows, Linux and HP-UX on their Itanics. Not to mention HP-UX shipments are so much lower compared to AIX or Solaris. With xVM targeting x86, based on the Xen para-virtualization approach and supporting Linux and Windows on x86, it's suddenly a much bigger market and quite a bit more interesting.
HP-UX is dying
The reality speaks otherwise. HP shipped only 15,828 HP-UX boxes while Sun sold 56,339 SPARC boxes with Solaris. While Sun's shipments are down 10%, its revenue is marginally up, while HP's revenue is marginally down. It's clear most of the Integrity/Itanic boxes are shipping with Windows/Linux. With a shipment of only ~15,000 boxes, it's pretty clear which Unix is dying; HP-UX will die sooner than Itanic itself!! HP wouldn't care, because it makes a ton more money on x86.
"I don't bother lookign up the MTBF for LAN cards nowadays, it's very high, but I still put at least two in my servers to ensure that if one dies (and Murphy's Law dictates the likelyhood increases with the latest of the hour, it being a weekend, and your CIO being heavily involved in delivering the project!) I don't lose access to my server. If the LAN NEM in your 8000p chassis fails - whch would take out both ports - and you have a SAN module in the other slot, then your servers just became said expensive heaters."
Yeah, maybe you need some education about the MTBF numbers of components when thinking about redundancy. And you probably also need to learn the effect of a component failure on the running system. Because when a LAN card fails, you are pretty much assured that the OS is likely not in a good state to run your applications, whether you have a redundant network or not. So you have a redundant blade out there already; you surely thought about that, didn't you?
"Having seen your tenuous grasp on enterprise computing, I'm guessing you copied that from a more knowledgeable aquaintance. As usual, your "rebuttals" are just insults, and carry no technical weight. Try thinking up a counter-argument for a change."
Yeah, when you speak in that language without quantifying your arguments, you are talking to a deaf ear - no insult there. What we are seeing from your end are not point-by-point counter-arguments, but blind love for HP.
"Strangely, the market doesn't seem to think so, and I've yet to run into a situation with HP blades where memory size was an issue. I take it "It's got lots of memory" is going to be the Sun feature sell on this then? Yeah, that'll work...."
So maybe you can give feedback to HP to cut down on the DIMM slots on the DL585/DL580 as well, because YOU never needed them in your work. Maybe it's high time you also tried to understand the economics of memory in servers, and how the server virtualization trend is fueling the need for more memory. Oh yeah, why did HP need to put 24 DIMM slots on their 4-socket Itanium blade and make it double-wide? They could surely make a single-width blade with 4 Itanics and 16 DIMM slots and stuff more blades in there!!
"If by "constellation" (Sunshiner marketing codename?) you mean the 8000 and 8000p then you're shovelling that male bovine manure again - the quad-quad x8450 blades won't fit into the 6000 chassis. Mind you I thought "constellation" was the codename for the 6000 chassis, so your statement is almost as ludicrous as the rest of your reply."
That's the reason you need to shut your mouth before speaking up and at least do some basic research before making a counter-argument. You have become so arrogant that you don't see the need to familiarize yourself with a subject before talking about it. So you can comment on something because you 'thought' something; anyway, I don't expect you to do any homework, given your history as a staunch and arrogant Sun basher and a flamboyant HP fanboy. Short story: Constellation is not the Sun Blade 6000. It's an HPC rack that holds 4 rows of 6000-series (10U) blade chassis, each row holding 12 blades, so in one rack you can put 4x12=48 blades. The blades you put in are the same blades you can put in a 6000 chassis. For TACC, the blade is called the x6420 (the details are on the TACC website); it can hold 4 AMD sockets and 32 DIMM slots. There is no switch in the rack; each blade is connected directly to the giant Magnum IB switch with a special 3-to-1 IB splitter cable to minimize cable clutter. Since there is no intermediate switch, node-to-node latency is the minimum possible for an IB network.
TACC uses pre-release Barcelona chips in the x6420, so expect to see the same blades for the 6000 chassis once fixed Barcelona chips are out. And of course, there is no reason to believe similar blades with Xeon won't be available shortly thereafter.
So yes, please re-do the accounting. How many 4-socket blades can HP fit in a rack? How many total IB switches are needed? Do the HP-fanboy math. And don't show me core counts with the half-height toys; the solution needs 4-socket ones here.
"You obviously do not work in high-availability environments, or even remotely administered environments. If you have a single LAN NEM and it fails, you lose LAN access to the blades.."
On the contrary, I am certain you don't understand the failure points in a network. There are two LAN ports in each NEM, and the NEM is not the likely component to fail. The two ports on the NEM can be configured as a redundant network, and if one link goes down, the other port provides redundancy.
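On Solaris, that two-port redundancy is typically configured with IP Multipathing (IPMP), so traffic fails over to the surviving port when a link dies. A rough sketch for a Solaris 10-era box, with hypothetical interface names and addresses:

```shell
# Put the two NEM ports into one link-based IPMP group
# (interface names and addresses are examples).
# If bge0's link fails, the data address moves to bge1 automatically.
ifconfig bge0 plumb 192.168.1.10 netmask + broadcast + group ipmp0 up
ifconfig bge1 plumb group ipmp0 up
```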
"Erm.... it's also superior to the Sun solution, so superior that (in my opinion) Dell have cloned .."
HP fanboy speaking > /dev/null
So superior is the blade design that they can't put enough memory and I/O in a 4-socket server? Even the 2-socket full-height blades don't have enough DIMM slots!!
Need half-height toys for running edge apps? Choose HP blades - perfect!!
Feel free to ignore this to your advantage.
"If I'm translating that correctly, you're saying that not being able to fit the Sun dualies and quads in the same chassis was an intentional Sun ploy to steal a massive advantage?"
Read it correctly: the blades in the Constellation chassis and the 6000 chassis are interchangeable.
"And have you seen the TACC Ranger system that uses the x6420? Every third rack is a cooler rack. So any arguments on rack density have to be reduced by a third. Try again!"
Now are you going to tell me HP's blade cooling takes the heat out and replaces it with a cold shower? Of course HP blades can't even dream of getting close to the density of the Constellation solution - calculate it for yourself and you will see (forget the cabling and latency). Oh yeah, I know, HP's magical 'active cooling' extracts all the heat from the blade and then sends chilled air around to keep the datacenter cool - isn't that it?
Matt, yes, you are right about there being only 2 NEMs on the 8000 P chassis, but I don't see how you can claim that 2 ports on 1 NEM do not provide redundancy. For one thing, I have seen I/O module failures only in extremely rare cases, and even when they fail, the server crashes most of the time anyway. The most obvious failure points in any network are what comes between the two server ports, i.e. the cables, switch ports, the switches etc., and I claim that having redundant connections through the two ports does not impose any practical restrictions. On the other hand, the practically rare case when a NEM really does fail is much easier to service, because it's only a hot-plug operation.
"If that wasn't true than Sun would be the number one blade vendor and HP would be the struggling laggard. But then HP is the number one blade vendor....."
The real reason HP is the number one blade vendor is that HP has a superior solution compared to the other incumbent, IBM, in the blade segment. That compounds with other factors like having the strongest x86 clout, a complete product portfolio, software solutions etc. Sheer market position does not make their product superior to every other vendor's in every aspect that's relevant.
"Yes, adding redundancy and resilience does tend to make things complicated,..."
Wow, what an argument. The most elegant design is always the simplest one.
Anyway, your original argument was about Sun not having a 4-socket blade in the same 6000-class chassis, and you didn't respond to my question of why HP's 4-socket blades don't have I/O and memory capacity as good as their rackmounts do. I claimed that Sun has intentionally positioned their commercial 4-socket blades in a bigger 8000-class chassis that can meet the requirements of a properly equipped 4-socket server as far as I/O and memory capacity go. It's likely due to Sun's market segmentation that they don't productize their ultra-dense x6420 blade, which can hold 4 sockets and 32 DIMMs, for the commercial market.
The least you can do when beating your HP drum is to stop spreading incorrect information. Please read carefully, because you seem to have little knowledge about the Sun blades, and feel free to dispute as always:
"to make the 8000P Sun had to hack off the redundant PSUs off the top of the 8000"
Yes, the 8000 series is meant for enterprise deployment with N+N power redundancy. The 8000 P is geared towards denser, high-performance deployments that don't need full N+N redundancy; it is N+1 (3+1) redundant, but with enough power for 10 fully populated blades.
"and only put in two IO modules, so you can't have redundant LAN and redundant SAN"
This is incorrect. The only I/O capability the 8000 P takes away is the per-blade ExpressModules: in the 8000 series each blade can take up to two unique PCI Express modules, typically two ports each (dream about that capability in the HP blades' case), and the 8000 P drops just that. Otherwise both the 8000 and 8000 P provide four (that is, FOUR) Network Express Modules, and each NEM gives two ports to EACH blade. So yes, even in the 8000 P you can have 4 LAN ports AND 4 SAN ports per blade (or mix and match).
"Not much point having loads of blades if you can't power them and deploy them in a redundant manner, and can't get any LAN or SAN bandwidth!"
As explained earlier, the 8000 P series blades still get a lot more bandwidth per blade AND redundancy (compared to HP).
"That's even assuming the 8000P can power ten quad-quad Xeon blade"
There is enough power for 10 fully populated blades using the highest-bin Opterons or Xeons.
"if you don't need the extra PSUs in the 8000 why are they there?"
Of course, not every application (e.g. HPC grids) needs N+N redundancy. Even for rack servers there are options for the number of power supplies, and customers choose based on their requirements.
"Looks to me the 8000P is just more Sun desperation to try and make their fat chassis look more competitive, but in making it thinner they've made it totally unresilient and limited,"
FUD again. Not every customer needs the full capabilities of the 8000 series. The 8000 P is an option for denser deployment of the same blades when full N+N power redundancy and unique I/O per blade are not needed.
"c7000 offers the same eight IO switch slots and power redundancy with the quad-quad blades as it does with the dualies in the same rack space."
The HP 4-socket x86 blades are still no match for the Sun 4-socket blades in terms of memory capacity AND I/O bandwidth, and the HP chassis is overly complex in how components must be plugged in and administered.
It boils down to how each company views the 4-socket blade market. You, as an HP fanboy, will surely beat the HP drum and pooh-pooh Sun to match your agenda. It's perfectly logical to think that 4-socket blades are typically a small fraction of the overall blade market, and there is nothing wrong with consolidating the 4-socket blades in a chassis that more closely matches the memory and I/O capability of a 4-socket server, especially when the management is typically the same. Do HP's 4-socket rackmount servers have the same rackspace, memory and I/O capacities as their 2-socket ones? Is that the case with HP's 4-socket blades?
RE: Netra T2 is nice step forward
"Erm.... I did.... guess you just had a problem comprehending it. Tell you what, you tell me what your native language is and I'll try and convert the response via Babelfish in very easy (non-technical) words."
I think it's you who has the comprehension problem; otherwise by now you would have come up with some reference to substantiate your bullshitting.
"Well, like you said, it carried on growing in revenue, whilst at the same time Sun's didn't."
You call that growth!! After many consecutive years of decline, a slight blip does look like growth to you.
"And HP had a very healthy x86 bizz before the Compaq merger, it was called Netserver, and it was bigger then than Sun's tiny slice of x86 is now..."
Very true. Except that the Netserver business was DYING FAST against DELL and Compaq, whereas Sun's tiny x86 business is growing FAST even against all the x86 giants. So HP had to buy Compaq, ditch Netserver, and bet everything on ProLiant.
"Actually I stated an example of where real life testing backed up a published SPEC benchmark" ...blah blah blah...
That's nice, YOUR "real life testing"!! And no public data to validate it!! I think you assume everyone is a fool, or as stupid as you are.
"Sun's SPARC business has been in decline since before they canned UltraSPARCV. It has become such a complete shambles they have had to buy an aging design..."
What a joke!! So the Itanium design is so very modern that every major server vendor has abandoned it except the ones who have no option but to cling to it!! The HPC vendor who bet on it went bankrupt and had to switch to x86 to survive. There is nothing wrong with Sun embracing x86, since no one can survive without it - it's called being pragmatic. What Intel is doing with the Itanic is a last-ditch effort to save face, and HP's high-end business has nowhere to go without it.
"Please use the "Joke Alert" icon for your future postings as they have zero technical or commercial content and only serve to show you for the fool you are."
Does that really matter? Especially when you act stupid and pretend to ignore the inevitable!!
RE: Netra T2 is nice step forward
"denial is not a very good counter-argument, it just sounds childish and petulant. Just ask your Sun salesgrunt what the dominant software stack is in telco billing and on which platform, and after much grimacing and grunting he'll have to admit it's AMDOCS on hp-ux, usually with Oracle. Not Slowaris, not MySQL, and definitely not T2, T1 or UltraSPANKED."
Oh yeah, throw us some proof instead of patching things up with another set of lies. HP-UX is dying; the sooner the better.
"In the scale of the enterprise market even just here in the UK, that's peanuts. And that's before you consider that the ProLiant bizz for HP worldwide is probably better than $285m in a fortnight! In Q2 07 alone HP shipped 22.3 times as many x86-64 servers as Sun. That statement is from IDC, but please feel free to call them liars."
Again, don't try to circumvent the argument. It's about shipping non-x86 boxes and the profit margin on those systems. Show us how the non-x86 business is doing for HP. What a shame that HP can't translate their huge (Compaq-inherited) x86 presence into meaningful non-x86 sales growth!!
" Last year I put rx2660s up against T2000 for one of our projects. We went to a shoot-out ...."
So now we have to rely on a Bryan Matty reference benchmark instead of SPEC/TPC/SAP-SD... benchmarks!! Talk about throwing around meaningless numbers without any substantiation.
"Boom, boom, boom goes the HP drum. Whine, whine, whine goes the Sunshiner."
Or rather, "Sink, sink goes the Itanic with the HP fanboys aboard; up goes SPARC and the Sun".
RE: Netra T2 is nice step forward
" Yes, a very big market already dominated by Wintel/Lintel, with no sign of T2 or any Sun chip making even the slightest dent. Have you considered the possible market for really heavy paperweights?"
Guess what ? The T1/T2 line is now $285mil/qtr business with profit margin that the x86 vendors can only dream of ! It's good business.
"Which is a complete evasion of the question.
What's needed is not eight different memory buses. The T1/T2 have four on-chip memory controllers with ample dedicated memory bandwidth; the threads don't all need to load/store at the same time, as even a computer semi-literate can tell you.
The design trades latency for bandwidth. Anyway, since no argument can ring a bell in your tiny brain, the public benchmarks substantiate what I mean.
And why did you evade my question about why Itanium needs such massive caches to come not even close to Core2 performance? The mighty chip is slowly dwindling towards extinction. With all the might HP had, they managed to grow only 1% in the HP-UX/PA-RISC/Itanium/Alpha business - it's time for Intel to abandon the losing chip. The countdown starts now, as the HP/Intel Itanium supply negotiations come to a tragic end and Intel slowly clears the way with CSI aka QuickPath Xeon/Itanium socket compatibility.
" OK, here's a project I worked on not too long ago ....<lies lies lies>"
Again, provide some publicly available benchmarks. With the credentials you have here, no one will believe your lies, especially given your past history of HP drum-beating.
" Actually, it won't, and by a large margin. Seeing as I have seen the proof of this first hand I think I can fairly tell you you are talking out of your rectum on that one (and probably in parallel at the same time!)."
ECHO BACK. Again, point to publicly available results/indications/Intel claims ...whatever - we don't want the noise from the gory holes of your body.
Keep beating the HP drum, fanboy, and keep Sun-bashing for good measure - someone or other will surely pay attention.
RE: Netra T2 is nice step forward
"Let me take a wild guess - webserving, and definately not real enterprise workloads like Oracle."
Yes, real enterprise workloads. BTW, web servers and file and print servers are a very big market. The T1/T2 excel in workloads you wouldn't expect. Go check for yourself; I'll give you an idea: start with the SAP-SD scores on the T2 here.
"Intel have developed a massive advantage in the area"
Yes, can you explain why the Core2-based chips need only 4MB of cache to outperform a Montecito with 24MB in pretty much every benchmark, with half the number of threads and 1/6th the cache? Come to the point, Matt: Intel's cache designs are the best in the industry - but even Intel can't hide the fact that these dumb IA-64 designs need shitloads of cache to even come close to competing offerings!! IA-64 inherits its design philosophy from 90s workstation computing; it's the wrong product for the wrong market. The mighty chip that was supposed to conquer the world is just an also-ran, fighting for survival. Today nobody needs this chip except HP, for the sole reason that its enterprise existence depends on it.
"All down the same memory busses? Or are you now telling me each core hase eight memory busses, one for each thread?"
Now you're starting to sound ridiculous. When your background is not in computer architecture, you could at least stop pretending you know about it and come back to the point, which is how these chips actually perform.
"Yes, in the manner that customers need for current enterprise apps like Oracle, SAP, Siebel, DB2, even MySQL! In short, Itanium, Xeon, Power and SPARC64 (hey. I'll throw you a bone) do more with their threads than the weenie threads of the T2."
Again, compare actual results. The T2 has an entirely different design philosophy, and it excels in almost all thread-rich workloads, which is what most server applications are these days - including all the apps you point to. Again, I suggest you gather more information instead of sounding like a complete idiot.
"Which means it will be twice as useful, and not vapourware like Rock. Until (or should that be "if ever") Rock hits the streets, it would be better to stick to comparing T2 to low-power Xeon (and hope nobody notices the tenfold price difference!), rather than Tukzilla."
A 2-year-old T2 will still outperform 2x Tukwila in any of your enterprise apps, in a fraction of the space and power. Today SPARC64 fares pretty well against Montecito and its so-called upgrade, Montvale. When Tukwila comes, it will face 4-core SPARC64 chips.
RE: Netra T2 is nice step forward
"sixteen possibly concurrent threads is so much closer to the advertised figure of sixty-four"
Matt, I guess you really could use some education here. The T1/T2 lines are pure throughput processors; that is the only thing they are good at. They suck at pretty much anything that needs fast response time, but fortunately they can handle very heavy load quite well. The T2 runs 16 of its 64 threads concurrently at any time - which isn't all that bad when you see that Montecito runs 2 out of 4 threads concurrently - but in a vastly different way: the T1/T2 cycle through the runnable threads every cycle, unlike Montecito, which lets one thread run until it stalls. Tukwila will also run only 4 out of its 8 threads at the same time.
You can argue many moot technical points like cache hit ratios and such, but the bottom line is that the T2 can beat any other modern CPU in its narrow territory of favored applications. I have seen figures where a single T2 beat a 4-core POWER6 and 16 Clovertown cores in some popular server applications, even with its massively underpowered cores. This is because the T1/T2 can do zero-cycle-penalty thread switching, which none of the other CPUs can - that's where the 64 threads become useful: they are all doing loads/stores from memory in parallel.
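The latency-hiding effect can be sketched with a toy model (all numbers and the function name here are my own illustration, not Sun's or Intel's figures): each thread issues one op and then stalls for a fixed memory latency; a barrel scheduler with zero-cost switching simply picks any ready thread each cycle, so with enough threads the issue slot is never wasted.

```python
def throughput(n_threads, mem_latency, cycles=10_000):
    """Toy model of fine-grained (barrel) multithreading, T1/T2-style.

    Each op costs 1 issue cycle, after which the thread stalls for
    `mem_latency` cycles on a load. Switching threads costs zero cycles,
    so the core issues every cycle as long as *some* thread is ready.
    Returns ops completed per cycle (1.0 = latency fully hidden).
    """
    ready_at = [0] * n_threads   # cycle at which each thread can next issue
    ops = 0
    for cycle in range(cycles):
        for t in range(n_threads):
            if ready_at[t] <= cycle:              # found a runnable thread
                ops += 1
                ready_at[t] = cycle + 1 + mem_latency
                break                             # one issue slot per cycle
    return ops / cycles

# A single thread eats the full stall; with a 20-cycle latency, ~21 or
# more threads keep the pipeline busy every cycle.
print(throughput(1, 20))    # ~0.048: mostly stalled
print(throughput(32, 20))   # 1.0: an op issues every cycle
```

The model ignores caches and pipeline depth on purpose: it only shows why many cheap threads plus free switching can beat a few fast threads on memory-bound server work.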
Sun blade 8000 and 6000
Jack, the Blade 8000 was introduced in July 2006, and now the Blade 6000 is introduced in June 2007. Are you saying that Sun had time to see whether the Blade 8000 was doing well, then start designing the Blade 6000, and do all of that within an 11-month period? That can't be true. They were probably working on both designs at the same time, but decided to bring the 8000 to market first with the resources they had (for whatever reason, I don't know). Each of these servers is a very well engineered design, and there is no way you can produce one in a short time, given the many rigorous steps Tier-1 OEMs take - design, verification, testing, beta testing and early access, certifications, etc. There is no indication that Sun is going to EOL the Blade 8000 design; both coexist, with the Blade 6000 aimed at the volume market and the Blade 8000 remaining a niche product.
Both the HP and IBM blade designs are centered around the chassis. Just because HP and IBM have been doing this for years and are on their second-generation designs, does that mean they have the best blade designs? From a product-maturity perspective, I agree they are mature enough - but maturity can't be the only reason a new customer shouldn't explore alternate designs and choose the one that suits them best. Please list some reasons why you think HP's and IBM's second-generation designs are better than Sun's first-generation design.
HP Virtual Connect only fixes part of the problem; this feature is already built into the Sun blade design (Ethernet NEM and Fibre Channel NEM). Still, how do you service a blade when an I/O card installed in it fails? It's a cakewalk on Sun's blade chassis (hot-plug from the rear of the chassis), not so on HP/IBM. The overall I/O design is much more open on Sun's blade: you get to buy third-party plug-in I/O modules, with no need for HP/IBM-specific switches or mezzanine cards. I am pretty sure HP/IBM make a heck of a lot more money on these added modules than on the blades alone. Why would the HP/IBM solutions be less expensive overall? And what about the ability to use a unique I/O card in each blade? Certification aside, the Sun blade can easily integrate into an HP or IBM management infrastructure. Do you have any data to suggest that HP can indeed make a half-height blade with 2 CPU sockets and 16 DIMM slots, and power and cool it effectively? And I guess they can also easily do a 4-socket full-height blade with 32 DIMM slots :-) Sure, DIMM density isn't everything, but it plays a big role when you want to quote a lower price on the RFQ.
With any new product, I don't deny it's hard to break into a market owned by two giants. I truly believe Sun's Blade 6000 is a breakaway design, very different from current offerings. I see no reason why a new RFP should simply skip over the Sun offering without a detailed cost-benefit analysis, basing the decision merely on how long HP/IBM have been doing blades and how much of the market they control. I am not predicting that Sun will capture even 5% of the blade market in the next 2 years; I only believe this is a product that deserves some credit for being different and will surely influence how the blade market develops going forward.
I don't think the Sun Blade 8000 chassis is a conventional blade. The blade market mostly plays in the 2-socket region, so the Sun Blade 8000 is not really in competition with the HP/IBM blades. I doubt Sun did well with that chassis; not many customers are deploying dedicated 4-socket blades, and the price premium over traditional 2-socket blades makes it cost-prohibitive. The Sun Blade 6000 is in the same field as the HP/IBM blades, and it does have some inherent advantages over the HP/IBM blade designs, so customers will surely take a look at the Sun 6000 design before making a decision.
With regard to DIMM density, Opteron supports 8 DIMMs/socket today - so it's not unique to FBDIMM (in fact Sun's new AMD blades have 16 DIMM slots as well). With multi-core CPUs, DIMM density matters more than ever, so even if Intel does dump FBD, 8 DIMMs/socket is guaranteed to remain supported. What's more important is for system vendors to make space for all those DIMM slots on their boards and supply the required power and cooling. HP's full-height C-class blades have 12 DIMM slots. If the blade form factor is small, it becomes harder to increase the DIMM density, and even harder to cool it. So while HP could easily come out with a 16-DIMM blade in their 10U full-height form factor (provided they can cool it with their cooling mechanism), it's harder to do so in SuperMicro's 7U form factor. Still, kudos to Sun for being among the first to support the highest-configuration blades.
CPU density isn't everything
Typically, blade installations don't populate all the blade slots because the datacenter can't handle so much power and cooling. What's more important is blade management, servicing, etc., and that's where IBM/HP have an edge over everyone else. Not many customers would trust anyone else, even DELL, let alone Supermicro, on blades. Sun's new 10U blade offerings are definitely superior to this one. Come on - only 8 DIMM slots for 4 sockets of quad-core CPUs? If I need 32G of RAM, I have to pay an 80% premium for 4G DIMMs over 2G DIMMs. I can buy a 2-socket 32G blade a lot cheaper from Sun than from Supermicro, and I can go to 64G of RAM with Sun's blade, which allows me to run a lot more virtual machines.
What about the chassis design? IBM and HP have built an entire ecosystem with a variety of offerings in plug-in I/O cards, switches, blade management modules and a strong blade management framework. Sun is relying on the PCI Express 'ExpressModule' (EM) standard, which is at least as good an offering as IBM's and HP's; overall the Sun blade chassis design is commendable, and it supports I/O hotplug.
Sun's x86 server management software seems to be well reviewed, and it plays well with IBM's and HP's management tools.
As far as I know, IBM's blade chassis is OEM'd from Intel. 80% of the blade market belongs to IBM and HP, so they are experienced enough in this market. Not easy for a newcomer to crack - and there is little chance for Supermicro to break in here.
Worth a look
Someone commented about the B1600 blade chassis from Sun. Well, it didn't sell well because it was plagued with heating/cooling problems and the blades were underpowered, so Sun abandoned that design.
One of the most basic benefits of blades is that overall energy consumption is lower than rack-mount servers', "for the same amount of computing power". Basically, a common power supply and fans are shared by a bunch of servers, and you get the added benefits of better cable management, integrated switches, etc. Across a whole datacenter, you can pack more computing power into less space with less electricity - very important for some customers. So even though you get only 10 2S servers in 10U of space, the total power consumption is lower than for 10 1U rack-mount servers. BTW, the HP C-class chassis can hold only 8 full-height blades in 10U of space, and each full-height blade is actually less powerful than the new Sun blades (only 6 DIMMs per socket compared to 8 for Sun); of course you can use 16 half-height blades with HP, but each blade would be underpowered.
I see the new Sun blade is unique in many ways:
- You can use the highest-performance CPUs, unlike some blades that must use the LV versions of the CPUs.
- They offer 8 DIMMs/socket, so you can actually buy a large-memory blade more cheaply. For example, if you need 16G, you can use 16 1G DIMMs, which will be cheaper than 8 2G DIMMs. And you can go to 32G by using 16 2G DIMMs, with the DIMM cost about 50% lower than using 8 4G DIMMs. Not to mention you can go up to 64G, something you can't do on HP/IBM/DELL blades.
- They use PCI Express modules, which allow you to have unique I/O per blade. You can't do this on other blades, which must use the same I/O for each blade. Use of PCI Express modules also makes servicing much easier, since each module sits at the back of the chassis; compare that with other blades, where the I/O modules reside on the blade itself. In addition, the PCI Express modules are hot-plug capable.
Overall, yes blades are more expensive than rack-mounts, yet the blade market is growing by leaps and bounds, because there are customers who value the density/power advantage of blades and don't mind the extra upfront cost.
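The slot-count economics above can be made concrete with a back-of-envelope comparison. The per-DIMM prices below are made-up placeholders (not real 2007 quotes), and `cheapest_config` is my own illustrative helper; only the relative shape matters - smaller DIMMs cost much less per GB, so more slots let you hit a memory target with the cheaper parts.

```python
# Assumed illustrative prices per DIMM (NOT real list prices)
PRICE = {1: 40, 2: 100, 4: 450}   # DIMM size in GB -> price in $

def cheapest_config(target_gb, slots):
    """Cheapest way to reach target_gb exactly using identical DIMMs."""
    options = []
    for size, cost in PRICE.items():
        count = target_gb // size
        if count * size == target_gb and count <= slots:
            options.append((count * cost, count, size))
    return min(options)   # (total $, DIMM count, DIMM size in GB)

# 32 GB on a 16-slot blade vs an 8-slot blade:
print(cheapest_config(32, 16))   # (1600, 16, 2): sixteen cheap 2 GB DIMMs
print(cheapest_config(32, 8))    # (3600, 8, 4): forced onto pricey 4 GB DIMMs
```

With these assumed prices, doubling the slot count more than halves the cost of a 32 GB configuration, which is the RFQ advantage the post is describing.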