Facebook's open hardware: Does it compute?

What happens if, as we saw at the launch of Facebook's Open Compute Project on Thursday, the design of servers and data centers is open sourced and completely "demystified"? If open source software is any guide, hardware infrastructure will get better and cheaper at a faster rate than it might otherwise. And someone is going to …

COMMENTS

RAIS

Redundant Array of Inexpensive Servers

I look forward to the royalties rolling in..

Facebook isn't 'open'.. so? and Google is?

Yes, Facebook ain't 'open', but Google is even less so.

They're almost as bad as each other, except at least Facebook doesn't promote itself as a faux-open source mecca for gullible techies and Google fan girls.

It's amusing how Google promotes itself as 'open' at every opportunity - and pity the poor souls who actually believe this...

how sad.

This post has been deleted by its author

Can we demystify the network as well please?

I want to do the same with high density storage (like Backblaze, thanks chaps) and with the network. I don't want to pay huge sums of money to Cisco for switches ...

So where can us joe public buy these mobo's?

A few of these servers would really fit nicely where I work. We could kick those aging Sparc boxes out the door (they are 5 yrs old).

Being practical, if someone were to package up these mobos into a product that didn't cost the earth, they could make themselves a nice lot of money. Well, they would until the big players decide to go after the market themselves. Getting rid of all the crap that is not needed for a server is a great idea, but traditionally 2-socket mobos (e.g. from SuperMicro) are hellishly expensive. So much so that SMEs give up and buy off-the-shelf 'so-called servers' from the likes of Dell, HP etc.

Then there is the question of the Microsoft Tax. At least by building their own, Facebook don't have to pay MS a dime. It would be nice if us mere plebs could escape that as well.

Project time?

Something for El Reg to whip together, in between spaceships?

Nice title

It is interesting that they can get the motherboard designed and built cheaper than they can buy a standard one from Dell or HP; they must be ordering a hell of a lot of them for that to work, though.

Now, you motherboard manufacturers, take note! I really, really need a Micro ITX board with 6 to 8 SATA 2 interfaces for my storage servers and NO I DO NOT WANT A FAN ON THE PROCESSOR :)

Anonymous Coward

That Intel mobo

I can only see 6 DIMM slots. Where are the other 12?

title

I was wondering the same...

I think

That the extra sockets haven't been soldered on the Intel board - they would live between the ones shown.

tpm
(Written by Reg staff)

Re: That Intel mobo

Good question

But the Intel mobo spec sheet says it has 18 DIMMs per board:

http://opencompute.org/specs/Open_Compute_Project_Intel_Motherboard_v1.0.pdf

The picture doesn't match the spec, obviously.

Where's the power supply?

The article is a bit unclear: is there one PSU per rack or one per motherboard?

PSU

There's a custom-made PSU on each chassis, facing the front side.

Anonymous Coward

Didn't Google do this five or so years ago?

Maybe I've got a memory problem, but I thought Google had been doing bare bones "white box" servers for years? How's it working out for them?

Google

Google never disclose anything about their servers...

Chip density doesn't matter?

Looking at the pics, the thing that strikes me is the low chip density per server. I am not an engineer, but the approach does not seem to me to be anywhere near the needs of a very high-scale data centre, as it is not commodified enough. I would expect much higher chip density on the motherboards and very little local disk storage, allowing scale in three dimensions - cores, RAM and disk - like the HPC stuff. Making the servers taller for better cooling means that the chips are running too hot; better TDP required. And, of course, air cooling isn't that good. How about keeping the servers in water?

1U is too small.

1.5U allows bigger fans (more efficient), increased contained air volume (more effective) and taller heatsinks (which need lower air flow rates).

More room around the other components provides more thermal isolation, so lower overall temperatures.

If the cubic feet are cheaper than the more intensive cooling you would otherwise need, you win.
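
For a rough sense of why the bigger fan wins, the idealised fan affinity laws give the trend; a quick sketch (the 40mm and 60mm sizes are just illustrative, and real chassis impedance will shift the numbers):

    # Idealised fan affinity laws: flow Q ~ N * D^3, power P ~ N^3 * D^5
    # (N = rotational speed, D = diameter). Treat this as a sketch of the
    # trend, not a design calculation.
    def relative_fan_power(d_small_mm, d_large_mm):
        """Power ratio when a larger fan delivers the same airflow as a smaller one."""
        speed_ratio = (d_small_mm / d_large_mm) ** 3           # the bigger fan spins slower
        return speed_ratio ** 3 * (d_large_mm / d_small_mm) ** 5

    print(relative_fan_power(40, 60))   # ~0.20: the 60mm fan needs roughly a fifth of the power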

Anonymous Coward

Chip Density

If you look carefully, it's a standard motherboard with components removed. This is because motherboard design is rather expensive, especially if you're deviating a lot from the reference design by removing things that everyone will want. So chip density matters, but not enough to justify the cost of a truly custom motherboard.

48Volt input a winner

Most commodity UPSes convert the mains input (230V/115V/110V) to +24V or +48V to charge the lead-acid batteries that are used for backup. When the juice fails, they then have to invert it back up to the source AC voltage to power the protected equipment. The servers then convert it back down to +3.3V, +5V, +12V and -5V for the actual electronics.

That's a waste of cost, and power, particularly as the UPS output is generally controlled to a fairly tight tolerance.

Instead, the server accepts the raw 48V from the battery (in parallel with the mains input), and converts it directly to low-voltage power. It also probably has a reasonable input voltage latitude, so the output from the batteries does not need to be stabilised.

Cool!
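
A quick back-of-envelope on the double-conversion penalty; the per-stage efficiencies here are assumed ballpark figures, not anything taken from the Open Compute specs:

    # Why skipping the DC -> AC -> DC round trip helps. All efficiencies
    # below are assumptions, for illustration only.
    rectifier = 0.95   # mains AC -> 48V DC (UPS charger/rectifier)
    inverter  = 0.92   # 48V DC -> mains-voltage AC (conventional UPS output)
    ac_psu    = 0.90   # server AC -> low-voltage DC rails
    dc_psu    = 0.92   # server 48V DC -> low-voltage DC rails

    double_conversion = rectifier * inverter * ac_psu   # ~0.79
    direct_48v        = rectifier * dc_psu              # ~0.87
    print(double_conversion, direct_48v)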

48VDC is a telco standard

and it's been around for at least 20 yrs. Networking equipment has had a 48VDC option forever, and servers with 48VDC power supplies were around 15 yrs ago.

The real surprise is that they have both a 277V input (interesting number, it's a 3-phase industrial spec) AND a 48VDC one. Most places just feed everything through 48VDC, as there is no switching time when the AC power fails...

Open design.

The original IBM PC was pretty much an open design that anyone could copy (apart from having to write a functionally equivalent BIOS).

The BeagleBoard is an open ARM-based design.

This isn't anything new, although it is newsworthy.

Does it compute? No, it doesn't

They say in the chassis and rack triplet documents that the chassis has a height of 1.5U and the rack is 42U tall, and that one column in the rack triplet can contain 30 such servers, for a total of 90 servers per rack triplet.

30 x 1.5U = 45U

Even on the mechanical drawings the height of the side panel is closer to 50U.
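
Spelling out that arithmetic (the figures are just the ones quoted above; the actual Open Compute rack column may simply not be a standard 42U):

    # Do 30 x 1.5U chassis fit in a 42U column? Figures as quoted above.
    chassis_u  = 1.5
    per_column = 30
    rack_u     = 42

    needed_u = chassis_u * per_column
    print(needed_u, "U needed vs", rack_u, "U quoted",
          "-> fits" if needed_u <= rack_u else "-> doesn't fit")   # 45.0 > 42: doesn't fit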

Anonymous Coward

Most companies are not that willing to make changes

After trying, unsuccessfully, for several years to get my then employer to let us design our own boxes with commodity parts top to bottom, with virtualisation on top and all the goodies, I'm not so sure most companies will in fact do this.

There is so much fear and inertia around 'required' support from NetApp or from Dell/Sunoracle/HP that just convincing management it will not only work but will actually lower costs and increase reliability is a battle; I think most companies won't go for it.

Hopefully that's not the case.

What about support

Most people agree the cost of the box is tiny compared to the amount spent running the thing. If you've got your own datacentre and dedicated full-time staff it's fine, but most companies can't afford to pay someone full time to monitor, manage and meddle with those boxes to keep them running.

Vendors with 4-hour SLAs make much more sense for companies that aren't all about the datacentre.

Sometimes the cogs turn slowly

I've finally woken up to the significance of 277VAC: this is in the land where 110 rules. Limply. Over here, where we have full-fat 240VAC, it just read like a spec fudge, pushing a 240VAC-plus-margin supply to the upper bound.

Coat please, seems I'm past it.

277 is standard in the US

it's industrial 3-phase power, not usually seen in residential applications.
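
For anyone wondering where the odd-looking 277 comes from: it's the line-to-neutral voltage of a US 480V three-phase feed, i.e. 480 divided by the square root of three:

    # 277V is one leg (line-to-neutral) of a 480V three-phase supply,
    # common in US commercial and industrial buildings.
    import math
    print(480 / math.sqrt(3))   # ~277.1V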

Voltage...

So are they still running mains to the individual servers? That's how it reads to me. What's the logic in not running DC from a pair of PSUs in each rack? Two big transformers must surely work out cheaper than 30 small ones, plus that's one less heat generator in each box...
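
One rough answer: bus the power around a rack at a low DC voltage and the current, and hence the copper loss, climbs fast. A sketch, where the 200W server load and 10 milliohm bus resistance are made-up figures purely for illustration:

    # Loss in a shared DC bus goes as I^2 * R, so a lower distribution
    # voltage means much higher loss for the same power delivered.
    def bus_loss_watts(power_w, volts, bus_resistance_ohm=0.010):   # resistance is illustrative
        current = power_w / volts
        return current ** 2 * bus_resistance_ohm

    for volts in (277, 48, 12):
        print(volts, "V:", round(bus_loss_watts(200, volts), 2), "W lost per server")
    # 277 V: 0.01 W   48 V: 0.17 W   12 V: 2.78 W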

This topic is closed for new posts.