Sun unfurls four-core Xeon-fueled blade memory hole

When not patting itself on the back for selling servers, Sun Microsystems found time today to announce a new blade system. In a statement, Sun boasted, "Since re-entering the blades market in mid-2006, Sun tied for #4 in blade server market share for factory revenue (Q3CY07), released 29 new blade and supporting products, and …

COMMENTS

This topic is closed for new posts.

Excess Memory? Me think not

"The server has a ridiculous 32 FB-DIMM slots"

I think this is a darn good number of DIMM slots. The problem with blade servers is that they are so darn tightly packed that manufacturers tend to skimp on memory slots to keep space down to a minimum.


Mass market, hot commodity

With the higher-end CPUs and that many populated FB-DIMM slots, only one blade would be required to heat an average house. Think of the grid possibilities.

Forget one TV per family, THIS is the future.


I/O

16 cores, 256 GB, what's the I/O? Rack 'em up and throw VMware/Xen/Solaris Containers on them all day long.


Spinning maybe, but not so bad.

Hats off to Sun's salesgrunts. To give them their due, just spinning and maintaining their installed base has been quite an achievement given the quality of what they had to offer. I'm sure HP, IBM, Dell, Uncle Tom Cobley and all would have been trying hard to switch those old SPARC customers, so even treading water took some doing. Of course, catching up with HP, IBM and Dell was well out of the question.

I'm not sure the new x8450 blade is the product to take them to the top of the heap either. Whilst it is a big advance on their previous offerings, it's still only ten 4-socket blades in 19U, so only twenty blades in a 42U rack. HP's four-socket BL680c G5 blade uses the same quad Xeons, and not only fits eight blades in 10U (32 in a 42U rack, more than half as many again), but also has the advantages of Virtual Connect and of sharing the same chassis as all the other HP blades. HP also has tape blades (SB920c and 448c), a disk blade (SB40c), and a NAS blade (SB600c All-in-One). Sun is still stuck with two blade lines needing different chassis (b6000 or b8000), no tape blade, no disk blade, and no NAS blade. Sun had better concentrate on IBM blade customers.
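
For what it's worth, the rack-density claims in this comment can be sanity-checked with a quick sketch. The figures (19U chassis with ten blades for Sun, 10U c7000 with eight BL680c blades for HP, a 42U rack) are taken from the comment itself, not independently verified:

```python
# Blades per rack, using the numbers quoted in the comment above.
RACK_U = 42

def blades_per_rack(chassis_u, blades_per_chassis, rack_u=RACK_U):
    """Whole chassis that fit in the rack, times blades per chassis."""
    return (rack_u // chassis_u) * blades_per_chassis

sun_8000 = blades_per_rack(19, 10)  # ten 4-socket blades per 19U chassis
hp_c7000 = blades_per_rack(10, 8)   # eight BL680c G5 blades per 10U c7000

print(sun_8000)  # 20 blades per rack
print(hp_c7000)  # 32 blades per rack
```

Which is where the "more than half as many again" figure comes from: 32 versus 20 blades per rack.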


4 socket blade

The Sun blade has twice the memory density and much more I/O compared to HP's BL680c G5 blade, so the density comparison is not all that fair.

Regarding blade density:

http://www.internetnews.com/hardware/article.php/3729116

<snip>

Jed Scaramella, senior research analyst for servers at IDC, said Sun's blades are a little larger than its competitors at IBM, HP and Dell, but the risk paid off. "When people first introduced blades, the value proposition was all about density," he said. "IDC has found that floor density is not an issue. It comes up but not as much as power and cooling does."

Sun doesn't have half-height blades such as IBM and HP and isn't hurt by it, he said. "Sun's blades don't have the limitations the other blade chassis have when you start bumping into the ceiling on memory and I/O. You can only do so much in a blade chassis because of the memory," he said.

"It's really a forward-looking blade," Scaramella continued. "They are designing it not for people's needs last year or this year but maybe next year."

<snip>

Virtual Connect is no doubt an elegant solution for HP blade installations. There are nice external solutions available (Cassatt and Scalent Systems) that accomplish pretty much the same job within a large data center filled with cross-vendor equipment. HP resells the Scalent solution for large deployments. Going forward, PCI Express I/O virtualization will provide an open solution that Sun might be able to leverage for its blade solutions.

Sun's share of the 4-socket server blades now stands at 15% in contrast to their overall blade share of only 4%. It's good penetration considering how insignificant Sun's x86 business is.


Sun's 800P chassis

To Matt Bryant: Sun has a smaller chassis called the 8000P which fits 30 big blades into a regular rack. So using these new Intel 4-processor, 16-core blades, that would mean 480 cores per rack. Not bad. See http://www.sun.com/servers/blades/8000pchassis/


RE: Kevin Hutchinson

Yes, but to make the 8000P Sun had to hack the redundant PSUs off the top of the 8000, and only put in two IO modules, so you can't have redundant LAN and redundant SAN. Not much point having loads of blades if you can't power them and deploy them in a redundant manner, and can't get any LAN or SAN bandwidth! That's even assuming the 8000P can power ten quad-quad Xeon blades - if you don't need the extra PSUs in the 8000, why are they there? Looks to me like the 8000P is just more Sun desperation to make their fat chassis look more competitive, but in making it thinner they've made it totally unresilient and limited, especially when you consider that the c7000 offers the same eight IO switch slots and power redundancy with the quad-quad blades as it does with the dualies in the same rack space.


RE: Matt

Matt Bryant,

The least you can do when beating your HP drum is to stop spreading incorrect information. Please read carefully, because you seem to have little knowledge about the Sun blades, and feel free to dispute as always:

"to make the 8000P Sun had to hack off the redundant PSUs off the top of the 8000"

Yes, the 8000 series is meant for enterprise deployment with N+N redundancy. The 8000P is geared towards denser, high-performance deployment that does not need full N+N power redundancy. The 8000P is N+1 (3+1) redundant, but with enough power for 10 fully populated blades.

"and only put in two IO modules, so you can't have redundant LAN and redundant SAN"

This is incorrect. The only I/O the 8000P takes away is the ability to plug in the express modules (each blade in the 8000 series can have up to two unique PCI Express modules, each typically with two ports - dream about this capability in the HP blades' case); the 8000P series only takes away this capability. Otherwise both the 8000 and the 8000P provide four (that is, FOUR) Network Express Modules. Each NEM gives two ports to EACH blade. So yes, you can have 4 LAN ports AND 4 SAN ports for each blade even in the 8000P (or mix and match).

"Not much point having loads of blades if you can't power them and deploy them in a redundant manner, and can't get any LAN or SAN bandwidth!"

As explained earlier, there is still a lot more bandwidth per blade in the 8000P blades, AND redundancy (compared to HP).

"That's even assuming the 8000P can power ten quad-quad Xeon blade"

There is enough power for 10 fully populated blades using the highest-bin Opterons and Xeons.

"if you don't need the extra PSUs in the 8000 why are they there?"

Of course, not every application (e.g. HPC grids) needs N+N redundancy. Even for rack servers there are options for the number of power supplies, and the customer chooses based on their requirements.

"Looks to me the 8000P is just more Sun desperation to try and make their fat chassis look more competitive, but in making it thinner they've made it totally unresilient and limited,"

FUD again. Not every customer needs the full power of the 8000 series. The 8000P is an option for a denser deployment of the same blades when full N+N power redundancy and unique I/O per blade are not needed.

"c7000 offers the same eight IO switch slots and power redundancy with the quad-quad blades as it does with the dualies in the same rack space."

The HP 4-socket x86 blades are still no match against the Sun 4-socket blades in terms of memory capacity AND I/O bandwidth. The HP chassis is overly complex in terms of how components need to be plugged in and administered.

It boils down to how each company views the 4-socket blade market. You as an HP fanboy will surely beat the HP drum and pooh-pooh Sun to match your agenda. It's perfectly logical to think that 4-socket blades are typically a small fraction of the overall blade market, and there is nothing wrong in consolidating the 4-socket blades in a chassis which more closely matches the memory and I/O capability of a 4-socket server, especially when the management is typically the same. Do HP's 4-socket rackmount servers have the same rack space, memory and I/O capacities as their 2-socket ones? Is that the case with HP's 4-socket blades too?


RE: Fazzi Auro

Gonna have to cry "male-bovine-manure" on that one! As stated in Sun's own 8000p datasheet from the Sun.com webby: http://www.sun.com/servers/blades/8000pchassis/datasheet.pdf

"I/O modules

PCI Express (PCIe) Network Express Modules (NEMs) (up to two per chassis or six per rack)"

Don't tell me, it's all a plant! I hacked the Sun website and planted the whole thing! Right.... So, if you can only have two NEMs, that means that to have LAN and SAN - which are mutually exclusive modules - you can only have one of each, and therefore cannot have proper redundancy, because if the one NEM fails you lose all your LAN or all your SAN connections. Unlike the HP c7000 chassis, where all eight switch slots are available.

"....Dream about this capability in the HP blades case...." Yes, I think you spend a lot of time in dreamland. HP has a full portfolio of blade products and a far superior solution to Sun's. If that wasn't true then Sun would be the number one blade vendor and HP would be the struggling laggard. But then HP is the number one blade vendor.....

"...The HP chassis is overly complex in terms of how components need to be plugged in and administered...." Yes, adding redundancy and resilience does tend to make things complicated, but it doesn't seem to have stopped the largest slice of blade customers from buying HP blades. Maybe they just have an easier time dealing with "complex" equipment than you.

"....specially when the management is typically the same..." Well, actually Sun don't have an integrated management story. It's a different management tool for each line, and often two or more tools per product. HP has a tried and tested tool in iLO2, that plugs straight into SIM, that plugs into OpenView. HP have designed their products from the edge to the core of the datacenter to form an integrated, enterprise-wide management piece, which Sun can only dream of.

0
1

Matt

"Not much point having loads of blades if you can't power them and deploy them in a redundant manner,"

Is that why HP blades only have single power connections to each blade, and single I/O connections on the BL640c and BL465c blades?


RE: Matt

Matt, yes, you are right about there being only 2 NEMs on the 8000P chassis, but I don't see how you can claim that 2 ports on 1 NEM do not provide redundancy. For one thing, I have only seen I/O module failures in extremely rare cases, and even when they fail, the server crashes most of the time anyway. The most obvious failure points in any network are what comes between the two server ports, i.e. the cables, switch ports, the switches etc., and I claim that having redundant connections through the two ports does not impose any practical restrictions. On the other hand, it's much easier to service the practically rare case when a NEM really fails, because it's only a hot-plug operation.

"If that wasn't true than Sun would be the number one blade vendor and HP would be the struggling laggard. But then HP is the number one blade vendor....."

The real reason HP is the number one blade vendor is that HP has a superior solution compared to the other upstart, IBM, in the blade segment. That compounds with other advantages like having the strongest x86 clout, a complete product portfolio, software solutions etc. Sheer market position does not make their product superior to every other vendor in every aspect that's relevant.

"Yes, adding redundancy and resilience does tend to make things complicated,..."

Wow, what an argument. The most elegant design is always the simplest one.

Anyway, your original argument was about Sun not having a 4-socket blade in the same 6000-class chassis, and you didn't respond to my question of why HP's 4-socket blades don't have better I/O and memory capacity, as their rackmounts do. I claimed that Sun has intentionally positioned its commercial 4-socket blades in a bigger 8000-class chassis that can deal with the requirements of a properly equipped 4-socket server as far as I/O and memory capacity go. It's likely Sun's market-segregation position that they don't productize their ultra-dense 10U x6420 blade, which can hold 4 sockets and 32 DIMMs, for the commercial market.


RE: Fazzi Auro

"....but I don't see how you can claim that 2 ports on 1 NEM does not provide redundancy....." You obviously do not work in high-availability environments, or even remotely administered environments. If you have a single LAN NEM and it fails, you lose LAN access to the blades until you can get someone out to replace it, hotswap or otherwise. If it is a remote solution at a branch office then that could be hours, which removes any chance of a customer wanting the 8000p for anything other than edge solutions, as opposed to real datacenter tasks. As a simple example, go look at a Sun cluster configuration - there should be at least two independent LAN and SAN/SCSI paths for each node.

"....The real reason HP is the number one blade vendor is because HP has a superior solution compared to the other upstart IBM in the blade segment...." Erm.... it's also superior to the Sun solution, so superior that (in my opinion) Dell have cloned it. And HP's "clout" didn't appear overnight, it has been built on delivering systems and solutions that customers rate as a better choice. But I'm quite amused at the description of the two leading x86 vendors as "upstarts"!

"....The most elegant design is always the simplest one...." Ignoring the fact that the Sun blades are about as elegant as frozen cowpats, real-world computing has F-all to do with elegance and a lot to do with practical, reliable solutions like the HP c7000. HP have complex but reliable and resilient solutions that perform very well, and with a full toolset of software that makes management easy, hence their popularity.

"....It's likely Sun's position of market segregation that they don't productize their ultra dense 10-U x6420 blade which can hold 4 sockets and 32 DIMMs for the commercial market..." If I'm translating that correctly, you're saying that not being able to fit the Sun dualies and quads in the same chassis was an intentional Sun ploy to steal a massive advantage? What advantage? Economies of scale - nope! Commonality of components leading to smaller stock inventories - nope! Greater re-use of chassis to suit changing business requirements - nope! And the reason that Sun can't make the specialised x6420 blade into a commercial offering is because (a) they can't get enough Barcelona chips and (b) the ones they do have are faulty and therefore run below their advertised speed, something that would not sit well with commercial customers. And have you seen the TACC Ranger system that uses the x6420? Every third rack is a cooler rack. So any arguments on rack density have to be reduced by a third. Try again!


RE: Matt

"You obviously do not work in high-availability environments, or even remotely administered environments. If you have a single LAN NEM and it fails, you lose LAN access to the blades.."

On the contrary, I am certain you don't understand the failure points in a network. There are two LAN ports in each NEM, and the NEM is not the likely component to fail. The two ports on the NEM can be configured as a redundant network, and if one 'link' gets clogged, the other port provides redundancy.

"Erm.... it's also superior to the Sun solution, so superior that (in my opinion) Dell have cloned .."

HP fanboy speaking > /dev/null

So superior is the blade design that they can't put in enough memory and I/O for a 4-socket server ? Even 2-socket full heights don't have enough DIMM slots !!

Need half height toys for running edge apps ? - Choose HP blade - perfect !!

Feel free to ignore to your advantage.

"If I'm translating that correctly, you're saying that not being able to fit the Sun dualies and quads in the same chassis was an intentional Sun ploy to steal a massive advantage?"

Read correctly: the blades in the Constellation chassis and the 6000 chassis are interchangeable.

"And have you seen the TACC Ranger system that uses the x6420? Every third rack is a cooler rack. So any arguments on rack density have to be reduced by a third. Try again!"

Now are you going to tell me HP's blade cooling would take the heat out and replace it with a cold shower? Of course HP blades can't even dream of getting close to the density of the Constellation solution - calculate for yourself and you would know (forget the cabling and latency). Oh yeah, I know, HP's magical 'active cooling' exhausts all the heat from the blade and then sends out chilled air to keep the datacenter cool - isn't it?


RE: Fazzi Auro

"....the NEM is not the likely component to fail...." Which is a shining example of how I know you don't work with high-availability gear, where you don't just assume something won't break; you take precautions via hardware redundancy to make sure that if you do get that 1-in-a-million occurrence then your solution doesn't just become a very expensive heater. I don't bother looking up the MTBF for LAN cards nowadays, it's very high, but I still put at least two in my servers to ensure that if one dies (and Murphy's Law dictates the likelihood increases with the lateness of the hour, it being a weekend, and your CIO being heavily involved in delivering the project!) I don't lose access to my server. If the LAN NEM in your 8000p chassis fails - which would take out both ports - and you have a SAN module in the other slot, then your servers just became said expensive heaters.

"....HP fanboy speaking > /dev/null...." Having seen your tenuous grasp on enterprise computing, I'm guessing you copied that from a more knowledgeable acquaintance. As usual, your "rebuttals" are just insults, and carry no technical weight. Try thinking up a counter-argument for a change.

"....So superior is the blade design that they can't put in enough memory and I/O for a 4-socket server ? Even 2-socket full heights don't have enough DIMM slots !!...." Strangely, the market doesn't seem to think so, and I've yet to run into a situation with HP blades where memory size was an issue. I take it "It's got lots of memory" is going to be the Sun feature sell on this then? Yeah, that'll work....

"....Read correctly, the blades in constellation chassis and 6000 chassis are inter-changeable...." If by "constellation" (Sunshiner marketing codename?) you mean the 8000 and 8000p, then you're shovelling that male bovine manure again - the quad-quad x8450 blades won't fit into the 6000 chassis. Mind you, I thought "constellation" was the codename for the 6000 chassis, so your statement is almost as ludicrous as the rest of your reply.

"....Of course HP blades can't even dream of getting close to the density of constellation solution...." Let's see - the 6000 can have ten dual-socket blades in 10U, so forty in a 2m rack, whilst the c7000 can have sixteen in 10U, so .... Yeah, that Sunshine maths is a real winner! I really hope you don't work in accounting!


RE: Matt

"I don't bother looking up the MTBF for LAN cards nowadays, it's very high, but I still put at least two in my servers to ensure that if one dies (and Murphy's Law dictates the likelihood increases with the lateness of the hour, it being a weekend, and your CIO being heavily involved in delivering the project!) I don't lose access to my server. If the LAN NEM in your 8000p chassis fails - which would take out both ports - and you have a SAN module in the other slot, then your servers just became said expensive heaters."

Yeah, maybe you need some education about the MTBF numbers of components when thinking about redundancy. And you probably also need to learn the effect of a component failure on the running system. Because when a LAN card fails, you are pretty much assured that the OS is not in a good state to run your applications, whether you have a redundant network or not. So you have a redundant blade out there already - you surely thought about that, didn't you?

"Having seen your tenuous grasp on enterprise computing, I'm guessing you copied that from a more knowledgeable aquaintance. As usual, your "rebuttals" are just insults, and carry no technical weight. Try thinking up a counter-argument for a change."

Yeah, when you speak in that language without quantifying your arguments, you are talking to a deaf ear - no insult there. What we are seeing from your end are not point-by-point counter-arguments, but blind love for HP.

"Strangely, the market doesn't seem to think so, and I've yet to run into a situation with HP blades where memory size was an issue. I take it "It's got lots of memory" is going to be the Sun feature sell on this then? Yeah, that'll work...."

So maybe you can give feedback to HP to cut down on the DIMM slots on the DL585/DL580 as well, because YOU never needed them in your work. Maybe it's high time you also tried to understand the economics of memory in servers, and how the server virtualization trend is fueling the need for more memory. Oh yeah, why did HP need to put 24 DIMM slots on their 4-socket Itanium blade and make it double-wide? They could surely make a single-width blade with 4 Itanics and 16 DIMM slots and stuff more blades in there!!

"If by "constellation" (Sunshiner marketing codename?) you mean the 8000 and 8000p then you're shovelling that male bovine manure again - the quad-quad x8450 blades won't fit into the 6000 chassis. Mind you I thought "constellation" was the codename for the 6000 chassis, so your statement is almost as ludicrous as the rest of your reply."

That's the reason you need to shut your mouth before speaking up, and at least do some basic research before making a counter-argument. You have become so arrogant that you don't find the need to familiarize yourself before talking on a subject - you can comment on something because you 'thought' something. Anyway, I don't expect you to do any homework, based on your history as a staunch and arrogant Sun basher and a flamboyant HP fanboy. Short story: Constellation is not the Sun Blade 6000. It's an HPC rack that holds 4 rows of 6000-series (10U) chassis, each row holding 12 blades. So in one rack you can put 4x12 = 48 blades. The blades you put in are the same blades you can put in a 6000 chassis. For TACC, the blade is called the x6420 (the details are on the TACC website). This is a blade that can hold 4 AMD sockets and 32 DIMM slots. There is no switch in the rack; each blade is connected directly to the giant Magnum IB switch with a special 3-1 IB splitter cable to minimize cable clutter. Since there is no intermediate switch, node-to-node latency is the minimum possible for an IB network.

TACC uses pre-release Barcelona chips on the x6420. So expect to see the same blades for the 6000 chassis once fixed Barcelona chips are out. And of course, there is no reason to believe similar blades with Xeons won't be available shortly thereafter.

So yes, please re-do the accounting. How many 4-socket blades can HP fit in a rack? How many total IB switches are needed? Do the HP fanboy math. Don't show me the core count with the half-height toys - the solution needs 4-socket ones here.
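
Taking the Constellation numbers given in this comment at face value (4 rows of 12 blades per rack, 4 sockets and 32 DIMMs per x6420 blade, quad-core Barcelona assumed; the commenter's figures, not verified), the demanded math works out as:

```python
rows_per_rack = 4
blades_per_row = 12
sockets_per_blade = 4
cores_per_socket = 4    # assuming quad-core Barcelona
dimms_per_blade = 32

blades = rows_per_rack * blades_per_row   # 48 blades per rack
sockets = blades * sockets_per_blade      # 192 sockets per rack
cores = sockets * cores_per_socket        # 768 cores per rack
dimm_slots = blades * dimms_per_blade     # 1536 DIMM slots per rack
print(blades, sockets, cores, dimm_slots)
```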
