As server farms grow and their workload changes, the design and structure of the networks that serve them must also change. End-of-row switching is increasingly giving way to top-of-rack switching, and tiered networks may need to be replaced – or perhaps augmented – by more mesh-like Ethernet fabrics. The increasing density of …
Perhaps I am behind the times, or perhaps these guys are very forward-looking, but if you are talking about high densities of servers per U (as opposed to high densities of cores in monolithic servers), I would have imagined you wouldn't need high-capacity links: just dual GigE per server, plus a backbone that was uncontended.
I don't know; perhaps a half-U machine needs 10Gb, but that is an awfully wide pipe, better suited to bulk transfers than to fast transactions. As I say, I don't know: probably these days 2Gbit/sec isn't fast enough for a compact many-core transactional server.
Feel free to correct my ignorance.
it's the virtualisation!
A single server probably doesn't need 10Gbps, but a physical server hosting many virtual servers might. Then try using vMotion to shift a loaded virtual server to another box with more capacity while it is running, with traffic "tromboning" through the original host until the switches all update their FIBs. That will be the driver for faster links.
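To see why consolidation pushes past GigE, here is a back-of-envelope sketch; the consolidation ratio, per-VM traffic, and burst headroom are illustrative assumptions, not figures from the article:

```python
# Aggregate NIC demand of one virtualised host (all figures assumed).
vms_per_host = 20          # consolidation ratio (assumption)
avg_mbps_per_vm = 400      # average traffic per VM, Mbit/s (assumption)
burst_factor = 1.5         # headroom for bursts and vMotion copies (assumption)

demand_gbps = vms_per_host * avg_mbps_per_vm * burst_factor / 1000
print(f"Peak demand ~ {demand_gbps:.1f} Gbit/s")
```

With these numbers the host wants around 12 Gbit/s at peak, so dual GigE (2 Gbit/s) is nowhere near enough and even a single 10GbE link is tight.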
Was slightly disappointed to see no mention of Juniper's QFabric in the article, which uses a non-blocking Clos network in the core to allow full capacity from any top-of-rack switch to any other, and which is apparently in production with some of their customers now.
And the storage
The other half of the equation is FCoE and iSCSI. SAN traffic alone can easily gobble up half a 10Gbps link with just a few dozen virtual servers.
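The "half a 10Gbps link" claim checks out with very modest per-VM storage rates; the per-VM throughput below is an assumed figure for illustration:

```python
# Rough check: can "a few dozen" VMs' SAN traffic fill half a 10 Gbit/s link?
vms = 30                   # "a few dozen" virtual servers
io_mb_per_s = 20           # average iSCSI/FCoE throughput per VM, MB/s (assumption)

san_gbps = vms * io_mb_per_s * 8 / 1000   # convert MB/s to Gbit/s
print(f"SAN traffic ~ {san_gbps:.1f} Gbit/s")
```

Thirty VMs at a mere 20 MB/s each already comes to about 4.8 Gbit/s, roughly half the link, before any front-end network traffic is counted.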