Facebook Data Center: If it won't run ARM, what will it run?

In August, the rumor was that Facebook planned to pack its first custom-built data center with ARM servers, abandoning traditional x86 chips from the likes of Intel and AMD. The trouble was that the rumor arrived via a site calling itself SemiAccurate, and Facebook promptly told the world it wasn't accurate at all. But on …

COMMENTS

This topic is closed for new posts.
FAIL

Errr

Do Facebook run any software on these servers? Is there any chance at all that their bottlenecks are not down to CPU power but to (maybe) I/O, or to some other bottleneck related to distributed application design?

It still takes 9 months or so for a lady to make a baby even if you have 5000 of them working in parallel.

Just askin', like.

FAIL

what point are you making?

(a) I would not be at all surprised to find that facebook is being used somewhere as literally a textbook example of an application that parallelizes well. What % of facebook users do you ever see any information from on your page?

(b) regarding I/O bottleneck - the trick is to distribute I/O - network and disk - among the processors. This has been thought of.
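For what it's worth, "distribute I/O among the processors" usually means sharding. A minimal sketch in Python, where `shard_for` and `NUM_SHARDS` are made-up names for illustration, not anything Facebook has published:

```python
# Minimal sketch of hash-based sharding: spread users (and hence their
# disk and network I/O) across many servers. shard_for and NUM_SHARDS
# are illustrative assumptions, not Facebook's actual scheme.
import hashlib

NUM_SHARDS = 5000  # pretend: one shard per cheap server

def shard_for(user_id: str) -> int:
    """Deterministically map a user to the shard that owns their data."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# Different users mostly land on different machines, so their reads
# and writes don't contend for the same disk or NIC.
print(shard_for("alice"), shard_for("bob"))
```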

FAIL

"What % of facebook users do you ever see any information from on your page?"

"What % of facebook users do you ever see any information from on your page?"

What has that got to do with the absence of serialization bottlenecks in or underneath the application? I don't see mail to anyone else but me in my Inbox, but if there is a bottleneck in the underlying file system I see its effects.

Are you really claiming that there are no serialization bottlenecks underlying the bits that the Facebook user sees? Do you (or they) have any actual evidence for that claim?

"the trick is to distribute I/O - network and disk - among the processors. This has been thought of."

See above, unless you can show otherwise from definitive sources.
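The serialization worry above is just Amdahl's law. A back-of-envelope sketch, where the 1% serial fraction is an assumption for illustration, not a measured Facebook number:

```python
# Back-of-envelope Amdahl's law: even a small serial fraction caps
# the speedup no matter how many cores you throw at the problem.
def amdahl_speedup(serial_fraction: float, n_cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# Hypothetical numbers: 1% serialized work across 5000 cores.
print(amdahl_speedup(0.01, 5000))  # ~98x, nowhere near 5000x
```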

Coat

Cooling

All they need are fanbois.


Don't get Sandy Bridge

Why Sandy Bridge for high-density servers?

Taken across a data centre isn't it a massive waste of silicon for a per die that will probably never be used?


To correct myself

I meant a huge waste of silicon with a GPU per die.

Alert

as compared to the massive waste of silicon...

... implicit in being legacy-compatible with the Pentium, 386, 286, 8086 and 8080? At least the GPU will be powered down.


They run open source

No matter how huge they are, they can switch to any CPU overnight without a single question of backward compatibility.

They just need a recompile. Facebook, for that reason, is the nightmare scenario for closed-source companies and for Intel.

Funny thing is, there is one more company in that position: Apple. There is, of course, a lot of SSE feature usage, but not to the point of being "stuck". That is how they could switch to Intel that easily, then shave down the OS and ship it for phone/tablet.
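The "just recompile" claim only holds when the code avoids architecture assumptions such as byte order. A toy Python sketch of the kind of discipline involved, purely for illustration:

```python
# Toy illustration: code that pins down byte order explicitly behaves
# the same on x86 and ARM, which is what makes a plain recompile (or
# here, a re-run) enough to change CPUs.
import struct

value = 0xDEADBEEF
wire = struct.pack(">I", value)    # explicit big-endian on every CPU
assert struct.unpack(">I", wire)[0] == value
print(wire.hex())  # "deadbeef" regardless of host architecture
```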

Anonymous Coward

err

You finding a lot of desktop chips in servers lately?

Coat

Queue IBM Sales Team with smug grins

Having sold a huge Mainframe to Facebook

Ok, I'm out of here.....

Anonymous Coward

"Queue"

Tell me, did it take you a lot of tries to type "queue" so that it came out without a squiggly red line under it?

Zot

"If if.." ?

Smelling pistake!


This post has been deleted by its author

To store

1 Million TB of pokes

2 Million TB of song lyric status by the recently dumped

5 Million TB of 'likes' of pages created by bitter exes

7 Million TB of farmville shit

8 Million TB of desperate chat convos that start with "hello".... "hello".... Offline.
