Super Micro Computer will soon push the envelope on server density. In early March, at the CeBIT 2009 trade show in Hannover, Germany, the whitebox server and motherboard maker will deliver a new four-in-one server dubbed the Twin2. The Twin2 servers will pack four half-width motherboards inside a 2U server …
Originally an OSS-designed product...
This new Supermicro server looks very similar to a project that Open Source Systems (originally Open Source Storage) was working on before they went under. It was a brilliant idea, and I'm glad to see that someone was able to take their design mainstream (somewhat). It looks like the team at OSS was ahead of their time. I know their product management team was very sharp and forward-thinking (my friend Calvin worked there at the time). Of course, 2+ years ago there were no "Twin"-style half-width boards, so the most they could do was 2 full-size motherboards in a 2U. But all of the other benefits of this system were present in the original OSS design. Dell also took the same idea for use by their Data Center Services group. Take a look at the link to the original design below, and then visit www.linksv.com and type "Open Source Systems" in the search bar to see the originators of this design. (Eren Niazi, Ivan Secoquian, Brian Rodriguez, Mark Rotzow, Jared Giles, etc.)
Original OSS Design:
"In early mach" - they may be fast, but not fast enuf.....
Where have all the editors gone?
Mr. Morgan, your editor has failed you miserably. Please see the first two paragraphs of your article for details.
Quacks like a duck...
So, despite being "not a blade server" this is a blade server with the blades mounted horizontally...
what's a factor of 1000 between friends?
for 'gigaflop' read 'megaflop' throughout
nothing new or special
1U 19" Rackmount with 4 servers:
Save the planet?
Alternatively - Windows users - Save space, energy and staff by putting *NIX on more of your servers. Typically you can serve 2-4+ times more users with *NIX than Windows...
Little purple release tabs on the drives? Check
Hints at something similar being released by HP? Check
Wonder if HP are rebadging it?
lex has been doing this for ages
Similar designs using mini-ITX form factors have been around for ages. Lex, Travla, etc.
will it blend?
play crysis at more than 2fps?
play raytraced quake3?
is it able to run a single outlook mailbox restore over mapi at more than 200kB/s?
sorry, just getting it out the system...
is 4in2 better than 2in1?
I have one of their 2in1 servers. Not a bad bit of kit, but I fail to see how a 4in2u server is an improvement over 2 x 2in1u servers.
Paris, cos I'm baffled too!
Not just for high performance boffins
Looks like a winner for those remote-office setups. Up to four powerful servers in one chassis, powerful enough to run local email and database, possibly other servers virtualised, inside 2U. Should make an interesting alternative to putting in a half-loaded blade chassis or several 1U rack servers.
Have to correct one misconception on blade switches, though. They can replace external rack switches, as they have external ports available for other non-blade devices, or the blade chassis can use pass-through modules to connect to existing rack switches. So the idea that blades somehow tie you to having both rack and blade switches is incorrect.
Exactly my thoughts. It's almost there, just without switch or management modules. If the disks were on a shared bus with controllers that supported concurrent access, it would make a sweet cluster in a 2U format.
lolol, you're kidding, right? This is so far ahead of 4 crap mini-ITX motherboards with VIA C7s on them bolted down inside a 1U case that it's unreal. Just one dual-core CPU is going to pee all over the 4 VIA C7s, not to mention the ease with which a dead motherboard can be swapped out. You're having a laugh, I'm afraid.
"it would make a sweet cluster in a 2U format."
until the shared PSU packs up!
AFAIK it's N+1 on the PSUs, with hot-swap.
Remember - All of this was an OSS design...
OSS was ahead of their time with just a fraction of the R&D that the larger companies could put into it. OSS and their development team are wholly responsible for this design. They couldn't help it if Supermicro or Dell stole the idea, but please give the OSS team the props they deserve!
Eren Niazi, Ivan Secoquian, Brian Rodriguez, Jared Giles, Mark Rotzow, etc. They were the real ones behind the design. They were also the first company to include high-efficiency power supplies, which were eventually adopted by Supermicro (Coldwatt) for this design. Funny how people try to take credit for what others have done. The valley is a very unique place in that they will lie, cheat and steal in order to get what they want. Pathetic, yes, but standard. Most people should feel a sort of moral wrongdoing, but honestly, they are so far detached from being respectable people that they do not care.
Original OSS Design:
Supermicro: cheap, thinly certified, sloppy QA
Let us know when a vendor that's capable of doing QA and willing to pay the fees to have their gear certified releases something like this. Until then, this is a giant step down in the being-able-to-sleep-nights sweepstakes.
Hot swappable motherboards - my arse...
Hot-swappable as in: while 3 servers run, you can change the motherboard in the fourth.
Not hot-swappable like disks, as in: each server has N+1 motherboard redundancy, such that should a motherboard die, the hot spare can be invoked and the failed motherboard swapped out.
Furthermore, I fail to see why this is a selling point, other than to state that there are four independent motherboards in the chassis instead of four integrated on one, which would be asinine.
The most common component failures on servers are disks and power supplies. Third - way down the list - come memory DIMMs. Motherboards come 4th - they tend to fail relatively rarely, unless insufficiently cooled.
Paris, because she is insufficiently cool.