Soz, switch-fondlers: Doesn't look like 2013 is 10Gb Ethernet's year

It is becoming increasingly unlikely that 2013 will be the year that sees widespread adoption of 10 gigabit Ethernet. Of course we'll be told it will be, just as we have been told for years that the wholesale shift is right on the horizon. The reason? It's not a question of technological capability – the technology for 10GbE has …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    Cisco has competition

    Huawei's list pricing is _substantially_ lower than comparable Cisco kit (i.e. with the same knobs on, not the stripped-out "LanBase" or "IPbase" shite - although it's below Cisco pricing on those too) and they usually discount significantly below that (*).

    Of course, one also has to be aware that "list pricing" of Cisco kit varies wildly depending on who you talk to. It seems a lot of resellers pump the numbers up high so they can offer "amazing 85% discounts" (I'm looking at you, BTInet) which don't match other resellers' standard pricing.

    (*) How low? Ask them and prepare to be very pleasantly surprised.

    No, I'm not a Huawei employee, just someone who's been increasingly peeved by Cisco's insane pricing structures and mediocre performance.

    1. Anonymous Coward

      Re: Cisco has competition

      Agreed, and no doubt Cisco will ship you a lovely switch rammed full of 10Gb ports with a backplane that gets crippled as soon as you push many of those anywhere near their limits.

      Still, people love the name; it makes them feel safe and cosy.

      1. Trevor_Pott Gold badge

        Re: Cisco has competition

        From experience, you can try to *flatten* a Supermicro 24-port 10GbE switch by flooding each port with traffic, and the damned thing doesn't blink. (Review of that, and a Dell 10GbE switch, coming *very* soon.) I know I'll catch hell from a bunch of dark-age scratching-shit-on-walls-with-sticks types, but...

        ...fuck Cisco.

        Maybe they make great core switches for people who need to move terabits around at internet cores. I wouldn't know; I don't play there. But damned if I can see a use for them in my datacenter. Give me Dell or Supermicro any day. That's before we even get into arguments about Cisco switches and their world-endingly shitty multicast performance!

        Bring on OpenFlow, ladies and gentlemen. It's time to relegate proprietary switching solutions to the niche they belong in. Cisco may build "better" gear - for specific values of "better" - but most people don't need "0 jitter, ultra-low latency, blah, blah, blah." And if they did, they'd buy Arista anyway. Most people just need cheap, bulk throughput.

        For that, you need people selling solid non-blocking switches. Not Cisco's cruft. Let the upvotes fly, folks! You know you want to. Your CCNA training demands it!

  2. Robert E A Harvey

    Well Quite

    >the bulk of businesses just don't need 10x the bandwidth

    >and aren't willing to pay 3x the cost

    It has been a long time since hardware purchases have been anything other than a distress purchase!

    1. Anonymous Coward

      Re: Well Quite

      10GbE. Ha.

      I still have Fast Ethernet installed, with a (mostly?!) gig backbone, because I have absolutely no budget whatsoever for equipment, and I am only employed because I am demonstrably cheaper than outsourcing.

      Suffice it to say that I do not see much chance of deploying 10GbE in the near future.

  3. Steve Davies 3 Silver badge

    You could say

    just someone who's been increasingly peeved by <insert company of choice here>'s insane pricing structures and mediocre performance.

    Aren't they all the same? There are many companies where even their own sales droids can't understand the price list.

    10Gb represents a significant investment for many companies over 1Gb infrastructure. In these financially constrained times, upgrading their network to 10Gb is just a step too far, especially when most 1Gb LANs are not being saturated, thanks to the shitty NICs etc. that many manufacturers stuff into their bits of kit.

  4. Anonymous Coward
    Anonymous Coward

    It won't happen

    because they are greedy bastards with stupidly expensive pricing.

  5. Ian Michael Gumby
    Boffin

    Trevor, you need to move to a first-world country away from Canada...

    Sorry, but when I read Trevor's earlier piece about building out a master test lab, I was excited. Unfortunately that piece died a horrible death, along with this one.

    Granted, there are relatively few reasons to spend $$$ on 10GbE kit. However, those reasons tend to be in the high-performance computing area, along with big data analytics.

    Cisco has been slow on the adoption, and while someone has been pushing a certain Chinese kit maker's name, there are other players in this space: BladeNetworks, which I believe is now owned by IBM, and Arista. Arista makes a 10GbE ToR switch that is quite reasonably priced.

    While Trevor focuses on building his test lab kit, he should have noticed that IBM, HP and others, including Supermicro, are putting 10GbE on the motherboard.

    With that, the bottleneck becomes the ToR switch and the company's infrastructure.

    Were Trevor to have a job in a *real* country, he would have access to this kit. All he would have to do is sneak across the border.... :-P

    -Just Saying!!!

    1. Trevor_Pott Gold badge

      Re: Trevor, you need to move to a first-world country away from Canada...

      I'll stay where the cost of equipment is high in order to have real health care and an unemployment rate that doesn't need 15 layers of bureaucracy to massage it into looking 1/20th the size it really is.

      Cheers.

    2. Trevor_Pott Gold badge

      Re: Trevor, you need to move to a first-world country away from Canada...

      Also - and I double-checked - there was a whole damned paragraph about "10GbE on the motherboard." In fact, it was followed by a discussion of the difference between LOM and switch PHY pricing and availability.

      Dude, les Q?

      1. Ian Michael Gumby
        Boffin

        Re: Trevor, you need to move to a first-world country away from Canada...

        Sorry Trev, but a ToR is ~$10K.

        If you're building out a rack of machines for a Hadoop cluster... that's not a bad price.

        If you're talking about cards to upgrade current kit, Solarflare makes one that's reasonably priced.

        My guess is that you're stuck on Cisco pricing.

  6. Rampant Spaniel

    What are your thoughts on how long it will be before this gets down to home-office-level pricing?

    A decent USB 3.0 drive can max out a standard GigE link, and a reasonable RAID (or SSD) array can saturate four bonded GigE links.

    Working with photos isn't too bad, but with video starting to benefit from GPU encoding (CUDA and OpenCL) and an expanding office, I am getting to the point where I need a plan for the next 3-5 years and have to choose between a 'half-hearted' move to 802.3ad or a more costly but longer-lived jump to 10G.

    I would love not to have to pay $10k for a switch; I'm just curious what people's thoughts are on how quickly it will come down to the $30-50-a-port range. Maybe $1,500 for a 24-port? Thanks in advance!
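    As a rough back-of-the-envelope for that arithmetic, here's a sketch in Python comparing nominal link capacities against ballpark drive throughput (the MB/s figures are illustrative assumptions, not benchmarks):

        # Back-of-the-envelope: nominal link capacity vs. storage throughput.
        # All figures are rough line rates / ballpark drive speeds.
        GBIT = 1e9 / 8  # bytes per second in 1Gb/s

        links = {
            "GigE":           1 * GBIT,
            "4x bonded GigE": 4 * GBIT,   # 802.3ad best case, many flows
            "10GbE":         10 * GBIT,
        }
        sources = {
            "USB 3.0 drive (~400 MB/s)": 400e6,
            "SATA SSD (~550 MB/s)":      550e6,
        }

        for sname, rate in sources.items():
            for lname, cap in links.items():
                verdict = "saturates" if rate >= cap else "fits within"
                print(f"{sname} {verdict} {lname} ({cap / 1e6:.0f} MB/s cap)")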

    1. Trevor_Pott Gold badge

      That depends on a number of things. There is a war brewing between several vendors on 10GbE pricing. It could happen tomorrow. It likely won't happen until Haswell drops. Expect Haswell to ship with 10GbE on the desktop, and 1Gbit LOMs on servers to become nonexistent.

      40GbE switching silicon is coming down to the point where you can make serious margin off of it, and QSFP cables are dropping to reasonable rates as well. Switch manufacturers are going to be forced to drop 10GbE prices around the Haswell timeframe – probably with no more than a six-month lag – if they don't want shops to ignore 10GbE wholesale and move directly to 40GbE.

      That would be rather disastrous for switch manufacturers, and yields on 100GbE PHYs are still dismal; if everyone moves to 40GbE switch ports, then demand for 100GbE trunking interconnects will skyrocket. They won't be able to meet demand, and can only jack up the price so high before seeing massive pushback.

      Until 28nm fab capacity becomes a lot more available – and that is at least three years out – we can't crank out 100GbE switching at the rate we're doing 40GbE today. It just won't be economically feasible.

      That means that we need to migrate people to 10GbE server --> switch interconnects in a big way, leaving 40GbE for top of rack --> core, and 100GbE for "folks with more money than sense."

      Unfortunately, nobody wants to be the first to take a bath on 10GbE margins. Prices are somewhat stable right now, and demand for 10GbE is growing at a fantastic rate. Eventually, someone will cave – my guess is Supermicro – and take the margin hit to drive the cost of 10GbE into the floor. Dell and other vendors won't have any choice but to follow. Intel will drop silicon prices down to "pittance" levels, and D-Link/Netgear/etc will block-shift to 10GbE overnight.

      Everyone is leery of 10GbE prices crashing, but they are *far* more afraid of someone dropping 40GbE. The cost of 10GbE silicon is so low right now that they can afford a price war on 10GbE. A price war on 40GbE would cost the entire industry their margins for the next decade.

      So…2014 is my guess. I expect the price war will hit at the end of this year or the beginning of next. By June 2014, we should be able to go out and buy $750 24-port 10GBase-T D-Link switches. A year after that, we should be seeing 48-port 10GBase-T switches drop below $1,000.

      1. Rampant Spaniel

        Thank you! Great reply.

  7. Anonymous Coward

    It's been many years since I've seen 1G anywhere but to the server, and even so, over the last year I've seen more and more 10G-to-server deployments.

    The fact is that Cisco and Juniper *are* peddling commodity silicon (the Broadcom Trident chipset). Just look at the Nexus 3000 from Cisco and the QFX from Juniper. They're quite cheap when it comes to 10G pricing and port density (48/64 ports in a 1U chassis, depending on how you count).

    The fact is that 1G is long dead, and in real datacenters, copper is dead outside of racks.

    Even at the core (or inter-DC), 100G is really what's needed, but that is prohibitively expensive. ER optics alone make my eyes water with their pricing, and that's before you talk about getting a box with a big enough backplane to support 2x 100G plus however many 10G ports.

    1. Peter W.

      "They're quite cheap when it comes to 10G pricing"... so, a $25k switch with a cost of $500+/port (Juniper QFX3500 48-port) or a $40k switch with a cost of $825+/port (Cisco Nexus 3064 48-port) is "cheap"? Compared to a Mellanox MSX1016X-2BFR 64-port for $12k with cost of less than $187.5/port (multiple sites listing this switch at under $12k) or a SuperMicro SSE-X3348SR 48-port for $16k with cost of less than $350/port.

      1. Nate Amsden

        Yeah, I don't know why this article's author seems to think you can't find cheaper 10GbE (I've mentioned this in one or more of his past articles). I just dug up a quote for a pair of switches I bought on Jan 9 2012: with average discounting, each one came to $9,354 (w/1 AC PSU) – $194/port – or $9,711 with 2 AC PSUs. Extreme Networks X670-48x w/o PHY – as the model implies, it is a 48-port switch. (The X670V has a PHY and an expansion slot for 40Gbps uplinks, and thus costs in the $16-18k range after discount.) Aside from the lack of 40Gbps uplinks, the X670-48x's shortcoming is that it can't do long-distance cabling.

        I'd imagine prices are a bit cheaper now since my pricing is a year old at this point.

        The X670-48x is a full layer 3 switch (IPv4/v6), line rate on all ports, supports stacking over 10GbE (up to 8 switches), and is very easy to manage (I have been an Extreme customer for about 12 years now). It's software-upgradable to support things like BGP if that floats your boat (plus other L3 protocols: basic OSPF, RIP/RIPng, my favourite protocol ESRP, and VRRP). And for those who want SDN, OpenFlow support is there as well (it integrates with Big Switch).

        If you need something really big, they can go up to 768 x 10GbE ports in a third of a rack (still 2x denser than anyone else, I think), line rate at 20Tbps.

        The UI can be a culture shock for those used to the Cisco/Juniper style. I prefer the simpler UI; I don't want to run 50 commands when I can do it in a fraction of that, with commands that read like English.

        http://www.techopsguys.com/2009/09/29/simple-network-management/

  8. pixl97

    TCP Incast

    I read this article on Erlang and TCP incast, and imagine application issues like this will drive the migration to 10GbE sooner than many people think.

    http://www.snookles.com/slf-blog/2012/01/05/tcp-incast-what-is-it/

    This page is even better at describing the issue: http://www.pdl.cmu.edu/Incast/

    Sometimes it's easier to throw more hardware at the problem than to fix the nature of the problem.
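    For a feel of the failure mode, here's a toy model in Python (the buffer and burst sizes are invented for illustration; the real dynamics are in the pages linked above): when N servers answer a barrier-synchronised request at once, the combined burst overflows the shallow output buffer on the switch port facing the client, and the losses stall the whole request on TCP retransmit timeouts.

        # Toy incast model: N synchronized senders bursting into one port.
        SWITCH_BUFFER_PKTS = 128  # shallow per-port output buffer (assumed)
        SERVER_BURST_PKTS = 32    # each server's share of a block (assumed)

        def drops_for(n_servers: int) -> int:
            """Packets lost when n_servers burst simultaneously."""
            offered = n_servers * SERVER_BURST_PKTS
            return max(0, offered - SWITCH_BUFFER_PKTS)

        for n in (2, 4, 8, 16, 32):
            lost = drops_for(n)
            status = "fine" if lost == 0 else f"{lost} pkts dropped -> RTO stall"
            print(f"{n:2d} synchronized senders: {status}")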

  9. Christian Berger

    It's probably 1Gig which is the odd one out

    The step between 10 and 100Mbit Ethernet took one to two decades, depending on how you look at it. One of the problems was that old coaxial installations couldn't be used for 100Mbit Ethernet. Somehow the step to Gigabit Ethernet was a _lot_ quicker, taking only about half a decade to get decent market share.

    Maybe it has something to do with cabling. Gigabit Ethernet worked on the same wiring that 100Mbit ran on (with some rare exceptions among ultra-cheap installations). 10Gb Ethernet now needs new cabling, new plugs, perhaps even a move to fibre. So you cannot just pop your GigE switch out, install a 10GigE one, and have 10GigE on the ports you need it on.

    1. LB45
      Thumb Up

      Re: It's probably 1Gig which is the odd one out

      Spot on. Outside of the data center, or core-to-distribution runs, the cost of upgrading the cabling to support 10G means it just isn't going to happen anytime soon. Last I recall, you still couldn't go very far on copper; TX6A is only 70 meters, which isn't a drop-in replacement for 5e (100 meters), so it won't fit existing runs.

      Plus it's damned expensive to boot and requires even more expensive and carefully certified terminations.

  10. Anonymous Coward

    Huawei

    I'm the "just another" guy in the first post.

    I mentioned Huawei because I've spoken to them most recently. The figures I have on hand are about £15k list for a 48-port 10GbE TRILL switch, and a comment from the sales guys that "we usually sell at about 30% of list price. That will have to rise eventually, of course" – which works out at about 105 quid per port.

    As mentioned previously, that's with full IP services.

    A Cisco 3750X-48 with everything enabled costs more than that even with a "fantastic 85% discount"(*), let alone similar Cisco 10GbE kit, for which I'm being quoted £30-35k with discounts.

    (*) Yet a Cisco SGE-500, which has apparently better specs, is £1,000. Go figure. They handle multicast better too.

    Of course there are other players in the market. I only started looking seriously for 10GbE kit late in 2012, after getting commitments to the spend - it's as much for cross-campus connectivity as for server rooms, which is why TRILL is important to me. Server-side, you can only LACP-bundle so many 1Gb/s links(**), and they don't work anywhere near as well as a single higher-bandwidth connection.(***)

    (**) Eight per bond group on most switches, and they're effectively running at max capacity on our servers.

    (***) The best you can usually achieve is 1Gb/s per client-server pair. That isn't fast enough when handling terabytes of imaging.
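    That 1Gb/s-per-pair ceiling falls out of how 802.3ad picks links: a transmit hash maps each conversation to exactly one bond member, so a single flow is never striped across links. Here's a rough sketch of a layer3+4-style hash in Python (real bonding drivers, e.g. Linux's xmit_hash_policy=layer3+4, differ in the details; the addresses and ports are made up):

        # One flow -> one member link: why a single client-server pair
        # can't exceed the speed of one link in an LACP bundle.
        import ipaddress

        def member_link(src_ip, dst_ip, src_port, dst_port, n_links):
            """Map a flow to a bond member, 802.3ad-hash style (simplified)."""
            s = int(ipaddress.ip_address(src_ip))
            d = int(ipaddress.ip_address(dst_ip))
            return (s ^ d ^ src_port ^ dst_port) % n_links

        # Every packet of this flow lands on the same 1Gb/s link,
        # no matter how many links are in the bond:
        print(member_link("10.0.0.5", "10.0.0.9", 49152, 445, 8))
        # Many flows spread across members, so aggregate throughput scales:
        for port in range(49152, 49160):
            print(port, "->", member_link("10.0.0.5", "10.0.0.9", port, 445, 8))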

    Machine-side, 10GbE cards aren't particularly expensive compared to multiport GbE NICs, nor compared to FC HBAs.

    10GbE isn't needed for small deployments. I'm using GbE at home, but only because cheap unmanaged 5-8 port switches are less than a tenner these days. It makes a difference when streaming HD stuff from the media server, but how many domestic setups have one of those?

    If you need managed GbE, then budget 100-150 quid for 16-24 ports, and even the cheapest/nastiest switch is virtually impossible to saturate unless everything is broadcasting. GbE is commodity these days. 500 quid gets you a 48-port Cisco/Linksys SGE300, which is a pretty respectable box with a CLI. 200 more gets you an SGE500, which is stackable (other makers are available, of course).

    Yes, 10GbE doesn't go far on copper, but 10-20 metres on Cat5 is enough for rack-top, and 45 on Cat6 is enough for server rooms. Desktop endpoints seldom need more than 1Gb/s, and anything needing faster than that tends to be too hot/noisy for an office environment.

