The Ethernet switch market picked up a tiny bit as last year came to a close, according to various box counters, and the prognosticators at Infonetics Research and IDC were projecting that 2013 would see stronger growth as the move to 10 Gigabit Ethernet begins in earnest in the data center and companies start contemplating …
Cheap basic switches needed
To drive 10GbE adoption, cheap basic switches are needed - the 10GbE equivalent of the eight-port 1GbE switches that can be picked up for under £50. For small outfits, managed switches are an unwanted extra expense. (How many full-featured managed switches are used as dumb switches throughout IT?)
To kickstart 10GbE, eight-port switches should be under £1,000 and the NICs under £100 (10GBASE-T), with the switch ports able to handle both 10GbE and 1GbE (autoswitching). If and when these prices become available, 10GbE will be used as a matter of course.
(There are far more small companies than large ones, and for a company on one site with fewer than 50 employees - well over 90 per cent of them - managed switches are not needed.)
Re: Cheap basic switches needed
Forgeddaboutit. The Cisco/Juniper boys want you to have a mishmash of overpriced, overcomplicated layer 2/3 managed crap with the web GUI disabled, so you have to pay consultants for even the most basic change - and again when it continually goes wrong.
Re: Cheap basic switches needed
That's pretty much where I'm at: wanting a sanely priced (ie, about 20 quid / 30 bucks a port) 8-12 port 10GbE switch. Double it for a managed switch.
It just seems that Ethernet isn't accelerating in speed as fast as I need compared to, say, processors or hard drives (at least when factoring in cost). Perhaps I'm being unrealistic, but I remember using 1GbE maybe 9-10 years ago, and I was far from an early adopter - I'm sure it had been around at least a couple of years by then.
I would love for 40GbE or 100GbE to be sanely priced, but there's no chance, not when we are struggling to get a decent price on 10GbE.
It's all about the east-west
For the first time in a long time, there is starting to be a bit of buzz about the Ethernet switching market. Whatever your views on Ethernet as a technology (there are few other examples in IT of anything as long-lived as Ethernet), we are starting to see some genuine innovation driving radical rethinks of how Ethernet fits in a modern enterprise.
It's definitely right to point out that the drive to 10 gig is not just about more bandwidth. Reduced latency is a massive benefit, especially for the transactional east-west traffic flows we are increasingly seeing within data centre environments (due, among other trends, to application "fan out": a single user request causes peer systems to pull in data from disparate sources relevant to that request). This change in traffic flows should be driving some redesign of data centre topologies; the tiered "top-of-rack" to "end-of-row" to "core" topology that some switch vendors continue to advocate (as it drives sales of their cash-cow modular switch systems) stymies these kinds of east-west traffic flows.
In the data centre, new tin is a by-product both of the drive to ever-greater 10 gig densities and of a change in best practice as far as data centre design goes. Fabric technologies such as Shortest Path Bridging (SPB) or FabricPath should be driving rethinks in design, away from end-of-row aggregation towards meshes of interconnected, lower-cost top-of-rack systems that optimise the forwarding of layer-2 and layer-3 traffic between racks, rows and even data centres. This becomes even more relevant as the server guys push the network guys to support things like VXLAN - SPB in particular is suited to providing the scalable multicast environment that VXLAN requires; one of the many reasons why Avaya, Alcatel-Lucent, Huawei and others have gone down this route rather than the proprietary alternatives.
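For anyone who hasn't met VXLAN's multicast dependence first-hand, here is a minimal sketch using Linux's iproute2 tooling: in the classic flood-and-learn mode, each VXLAN segment maps to an IP multicast group, which is exactly the traffic a fabric like SPB has to carry efficiently. The interface names, the VNI (42), the group address and the subnet below are arbitrary examples, and the commands need root on a suitably configured host.

```shell
# Create a VXLAN overlay interface: VNI 42, discovering remote
# endpoints via multicast group 239.1.1.1 over the uplink eth0,
# using the IANA-assigned VXLAN UDP port 4789.
ip link add vxlan42 type vxlan id 42 group 239.1.1.1 dev eth0 dstport 4789

# Address the overlay segment and bring it up; hosts that join the
# same VNI/group elsewhere in the fabric appear on this subnet.
ip addr add 10.42.0.1/24 dev vxlan42
ip link set vxlan42 up
```

Broadcast, unknown-unicast and multicast frames on the overlay are flooded to that group, which is why the underlay's multicast scalability matters so much to the design choices discussed above.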
Moderately interesting times :)