Isn't this old, dating back to 2011? Also, they are combining Tesla cards via PCIe x4 on a Seco carrier board plus an InfiniBand adapter on PCIe x4. What I wonder is: populating four of these Teslas in an i7 system seems logical, then connecting to other nodes via InfiniBand. You'd need fewer InfiniBand adapters and cheaper switches.
100ns is impossible; surely you mean 100ms?
Isn't this bigger??
The article jumps from one topic to another, making the scattered information hard to follow for those who are a bit unfamiliar with this kind of stuff.
Won't an Intel X540-AT2 do the trick?
Doesn't putting a dual-port 10Gb Intel X540-AT2 card into a Xeon system for $120 do the trick? It would give at least some boost, and keeping server prices in mind, the card is not that expensive, I guess.
And Calxeda only states that their EXC-1000 supports 10GbE on the fabric switch; doesn't that mean the system still needs an interface? That also implies there is some kind of interface and connectors on the big box. So why didn't they provide some numbers on how well their system performs over 10GbE with internal GbE support?
It is written that 1,000,000 requests of 16 KB each are made. That is 16,000,000 KB, or about 16 GB, which is a lot to push over a 1 Gbit link.
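For what it's worth, here's the back-of-the-envelope arithmetic in a short Python sketch (my own illustration, assuming decimal units and zero protocol overhead, which the original posts don't specify):

```python
# Rough estimate: total volume of 1,000,000 requests of 16 KB each,
# and how long that takes to move over a 1 Gbit/s link.
# Assumes decimal units (1 GB = 10^6 KB) and no protocol overhead.
requests = 1_000_000
request_size_kb = 16

total_kb = requests * request_size_kb      # 16,000,000 KB
total_gb = total_kb / 1_000_000            # ~16 GB
total_gbit = total_gb * 8                  # ~128 Gbit on the wire

seconds_at_1gbit = total_gbit / 1.0        # time at 1 Gbit/s line rate
print(total_gb, seconds_at_1gbit)          # 16.0 128.0
```

So even at full line rate, a 1 Gbit link would need over two minutes just to move the data, before counting any request/response overhead.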