Do we really want 100Gig Ethernet?

Remember when Ethernet networks were invented? Probably not: it was over 30 years ago, after all, and you are probably too young. Even if you are not, you have probably dismissed from memory the woefully inadequate 10 meg of bandwidth on offer at the time – less than you get with most broadband services these days. Still, the …

COMMENTS

This topic is closed for new posts.

  1. Anonymous Coward
    Anonymous Coward

    This old chestnut

    Run enough VMs and you run into I/O bottlenecks. Install hybrid SSD/HDD setups to increase I/O tenfold and you run into network bottlenecks. Increase network performance and you run into processing-speed bottlenecks. Increase processing speed and you run into I/O bottlenecks, and so on and so on.

    There will always be a need – if not immediately, then in the very near future.

    And remember, 640KB is enough for everyone.

    1. Ammaross Danan
      FAIL

      Numpty

      You forget FCoE or NAS/SAN solutions that run over Ethernet. Simply running no-local-storage servers with 128GB+ of RAM hosting 10-20+ VMs is enough to saturate 1Gb, and likely 10Gb, Ethernet.

      That, and I assume you still believe BG actually said the 640KB quote?

      1. mfraz

        Maybe not

        Maybe he didn't say that, but he did ask what a network was when being shown computers connected using Econet at Acorn's HQ in the mid 80s.

  2. Piro Silver badge
    Pint

    Well, of course we want it

    It's not for the home user or a single-user PC (honestly, standard Gigabit Ethernet is about enough to saturate a single drive), but rather for large machines running a lot of virtual machines, or for backbones between server rooms.

    It means less cabling in the long run, and saving copper has to be a good thing.

    1. Anonymous Coward
      Anonymous Coward

      At my old place...

      At my old place, I would have loved to have my NetWorker storage nodes/NetBackup media servers attached to 100GigE; that way I wouldn't need tape drives attached to individual big servers or mount servers. Mount all your filesystem snapshots on a single 100GigE-attached machine and pipe it all to your general media servers, while using a standard IP client. Sorted.

  3. Matthew 17

    you need that sort of bandwidth for the Mac App Store

    Having to get all your software from the 'cloud' is going to be tedious with anything slower.

  4. Dazed and Confused
    Happy

    Answers

    Do I remember? Yes.

    10Mb faster than average home broadband? I wish. Actually, I'm visiting South Korea at the moment and my friend here has 100Mb, but the contention ratio is so bad he rarely gets 2Mb. So perhaps my 4ish isn't so bad.

    Do we want 100Gb? Yes.

    When? NOW, if not last week. And run it to my home please, at reasonable contention ratios so I actually get close to it at times. Fibre all the way would be nice, to drop a whole chunk of the latency I see in my broadband connection – the exchange 3ms away, not 20.

    1. Anonymous Coward
      Anonymous Coward

      Latency

      Fiber actually has higher latency than copper. Fiber is just able to go longer distances and carry higher speeds.

      To use easy round numbers (not indicative of actually being able to go that distance over either medium), this is how it breaks down: in 1ms, a signal will travel 160km over copper and 100km over fiber. All things being equal, fiber to your house would not help latency times.

      1. Anonymous Coward
        Mushroom

        Not so slow there, Skippy...

        Your figures are wrong.

        The difference is negligible and is likely to be caught up in the 'typical' figures in the datasheets.

        Fibre typically has an index of refraction of around 1.5, which translates to a velocity factor of approximately 0.66, while Cat5e's velocity factor is typically 0.6-0.75 – so, near as damn it, identical in terms of raw speed.

        I'd suggest that any major difference in speed between fibre and copper comes from the transmitter/receiver pair at either end of the fibre that converts the electrical signal to light and vice versa. If you're really seeing the differences you quote on your network, then I suggest you start asking some sharp questions of your vendor.
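
        A quick back-of-envelope check of those velocity factors, as a minimal Python sketch (the VF values are the typical datasheet figures quoted above, not measurements):

        ```python
        # One-way propagation delay per km for a given velocity factor (VF).
        # VF values are typical datasheet figures (assumptions, not measurements).
        C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

        media = {
            "fibre (n ~ 1.5, VF ~ 0.66)": 0.66,
            "Cat5e copper (VF 0.6-0.75, mid ~ 0.68)": 0.68,
        }

        for name, vf in media.items():
            us_per_km = 1e6 / (C_KM_PER_S * vf)  # microseconds per kilometre
            print(f"{name}: {us_per_km:.2f} us/km")
        ```

        That's roughly 5.05 vs 4.90 microseconds per kilometre – about 0.15us/km of difference, well inside the noise from the optics at either end.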

  5. Jon Press

    10,000 times faster...

    is about the same ratio as original Ethernet bore to a typical modem link of the time – 1200 baud (yes, I was there, drilling holes in thick yellow coax to install the Ethernet taps). It wasn't used to replace modem links, of course (though it did eliminate a lot of internal RS232 wiring), but to open up a whole new range of distributed applications that hadn't hitherto been possible. Did we need it? No. Could we have done without it a year later? No.

  6. Andy Fletcher
    Thumb Up

    Thanks Reg

    For making me feel like a dinosaur just because I remember things like 10Mb.

    Do we need 100Gb? I remember playing Quake on a 56Kbps modem without lag. Now I have Megabits, I lag like a bitch. I'm not sure more speed is a better answer than more conscientious use of what we've got already.

  7. Leeroy
    Thumb Up

    Faster

    In the immortal words of Daft Punk.

    Work it harder, make it better

    Do it faster, makes us stronger

    More than ever, hour after hour

    Our work is never over

  8. JasonW
    Boffin

    ... the woefully inadequate 10 meg of bandwidth on offer at the time ...

    Yes, but at the time there also weren't any widely available or, more importantly, affordable servers capable of handling more than about 3MB/s of throughput without slowing the rest of the server down considerably just from servicing the interrupts on the NIC (remember, they were pretty much all ISA NICs back then, running at 6 or 8MHz).
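
    As a rough sanity check on that 3MB/s figure, here's a back-of-envelope sketch (the cycles-per-transfer values are loose assumptions for illustration, not datasheet numbers):

    ```python
    # Theoretical ISA throughput: clock (MHz) x bytes/transfer / cycles/transfer.
    # Cycle counts are rough assumptions for illustration only.
    for bus_mhz in (6, 8):
        for width_bytes, cycles in ((1, 4), (2, 3)):  # 8-bit and 16-bit ISA cards
            mb_per_s = bus_mhz * width_bytes / cycles
            print(f"{bus_mhz}MHz bus, {width_bytes * 8}-bit, ~{cycles} cycles/xfer: ~{mb_per_s:.1f}MB/s")
    ```

    Either way it brackets a few MB/s, and every one of those transfers cost CPU time on an interrupt-driven card.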

    1. Al 4

      Bus speed versus processor speed

      I remember when 12MHz motherboards became available and I was asked why I wanted a 10MHz motherboard for the server I was putting together rather than a 12MHz one. My response, which puzzled them, was that the 12MHz motherboard only had a 4MHz bus speed while the 10MHz one had 5MHz. I/O speed was the most important thing for a server, along with memory for caching, so the slower processor actually provided higher throughput. Since networks are shared on the server side, the higher the bandwidth, the better individual throughput will be when accessing the server; then the only bottleneck is how fast the server can service the requests. It will be interesting to see what kind of NICs and motherboards will be created to deal with the higher throughput needs of a faster network.

    2. Giles Jones Gold badge

      Coax

      Not to mention coax with BNC in many cases – often wired as a single bus, so only one machine could talk at any one time.

  9. Anonymous Coward
    Anonymous Coward

    MLT

    Our butt has been saved by MLT (Multi-Link Trunking) for the time being. It's easier to make better use of existing fibre than to invest in newer, faster single GBICs. Digging up the tarmac isn't cheap.

    Most of our links to external buildings are running 2 x 1Gb optics in a MLT (different routes for redundancy) and these are happily supporting offices of around forty-ish people on the end of each MLT.

    In fact, thinking about this, our main office switch stacks have about 120 people/devices on each edge stack, and those are linked via the same two 1Gb optics to the core switch. And we have bandwidth to spare. Those lines carry VoIP as well as data.

    For normal office duties, we're perfectly happy at these rates. As far as the servers go, the Citrix farm doesn't strain the network outside itself, so long as it has adequate access to the core storage array.

    If we started dealing in major video enterprises, then I think we might have to look at this again; but other than that, we're sound and not envisaging needing anything more for a good few years into the future.

    1. Anonymous Coward
      Anonymous Coward

      Hmm...

      MLT? Nah, DWDM - you can stick multiple services of differing types down it, no bother. FC, Backup LAN, Prod LAN, DMZ, iSCSI, FCoIP, all totally separated - not just virtually - down a single dark fibre.

  10. lurker

    The horror

    Brings to mind the mess of BNC network cable, with T-pieces, terminators and coax. Token ring networking! Ethernet and RJ45 were most definitely an improvement :).

    1. Dances With Sheep
      Unhappy

      Network coax?

      Bastard!

      I'd banished the nightmare of coax to the back of my mind..... alongside memories of Highlander II and other things best forgotten about the early 90's

      1. Steve X

        co-ax wasn't so bad

        Until some b*stard dropped some 93 ohm Arcnet co-ax into the bin for the 50-ohm Ethernet. Boy did that take a while to find. Everything worked just fine until you hit some threshold, then collisions went through the roof & the network dropped to 1200bit/s.

        This was mid-80's, not even into the 90's. When you could statmux 4 TTY connections on a 9600 bit/s private wire & think you were getting something good...

        Where's the "I'm feeling old" icon?

        1. Mike Timbers
          Pint

          That happened to me too!

          Yes, I remember early Ethernet, when TCP/IP was an add-on to Xenix and Excelan cards cost £800. Arcnet was great but yes, the terminators being different was a royal PITA.

          IPX was easy until I got a batch of cheap NICs where two had the same MAC address.

          And Thick-net was fine once drop-cables became cheap enough that my boss didn't force me to make them by hand with soldering iron in hand!

    2. Giles Jones Gold badge

      BNC had advantages

      I did run a network lead out to the garage, which was some distance away, using TV aerial cable, and that worked – probably due to having very tolerant NICs.

      1. Goat Jam
        Mushroom

        OMG

        Fucking coax.

        It wasn't so much the slow 10Mb speeds, nor even the ever-present network contention issues – it was the numpty who decided to take their computer home over the weekend and disconnected both sides of the T-piece, thereby bringing the entire segment down.

        Cue the daily tour of offices to determine who was this week's failpoint and consequent laughing stock of the office.

        Don't get me started on the cases where a cable segment wasn't crimped properly, or had been damaged by a ham-fisted user once too often, causing LAN segments to disappear and reappear with no apparent logic until you somehow managed to track it down to the office in marketing with that guy who is always fiddling with his PC.

        Oh, and yes, we do want 100G ethernet. Why the hell not?

        P.S. 10Mbit Internet? I fucking wish. I'm stuck on 1.5Mbit with no relief on the horizon.

        Not to mention no access to naked DSL.

        Apparently I'm not allowed to have it. It's "not supported by my exchange" or some such crap.

        So I'm stuck paying $80 a month for 1.5Mbit (at 100+100), plus another $40 a month for fricking "line rental" on a phone line that is never used.

        And that is the cheapest plan I've found.

        Do I sound angry? Damn right I am.

        Quite frankly, I'm sick of hearing about how "most people get 10Mbit internet" because where I live that is a load of crap.

        I'm not in the middle of nowhere – I'm 40 minutes from work, and that's in the city – and I don't get a decent option.

        I'm stuck with an expensive yet crappy internet connection that would embarrass a North Korean *

        In fact I don't think a single Australian ISP offers bandwidth at that level to non-commercial customers. **

        * Well actually, probably not, they are more concerned with finding their next meal I'm sure but I'm still pretty pissed nonetheless.

        ** except maybe the tiny number of people who have thus far received access to the much maligned NBN trial network.

  11. John Tserkezis

    I remember...

    ...when 10Mb coax was all the rage. We had four segments like that because we couldn't meet the 100m length limit otherwise. And some guy would kick the cable under his desk twice a day and bring the network down every bloody time.

    Ahh, for the good ol' days. I'm glad they're over.

    Do we need more speed? Heck yeah. Not for the prices they're asking though. Would stress the budget, and the ROI would be difficult to justify to the beancounters who are going to want to know how much money will be saved now that everyone isn't going to be twiddling their thumbs as long....

    1. Anonymous Coward
      Anonymous Coward

      100m?

      Surely you mean 185m for co-ax – and had you never heard of repeaters?

      Yes, yes, I'm old and sad enough to remember.

  12. Anonymous Coward
    Anonymous Coward

    @The horror

    "Brings to mind the mess of BNC network cable, with T-pieces, terminators and coax"

    Stop muttering about all that new fangled stuff ... as someone else has already said, proper ethernet was yellow cables almost as thick as your arm that you had to drill vampire taps into to connect!

    1. JohnG

      Vampire taps

      "... that you had to drill vampire taps into to connect!"

      In a recent rummage in my cellar, I stumbled across a hand tool for drilling the holes needed for vampire taps, still in its box.

      1. Anonymous Coward
        Anonymous Coward

        In its box?

        Best place for it. Evil fkin things.

  13. SImon Hobson Bronze badge

    Ahh, the memories ...

    Like Andy Fletcher, thanks for reminding me of the passage of time !

    Putting on my pedant's hat: wasn't the original Ethernet, as developed at PARC, only about 3Mb/s? Though I guess that probably never made it into the wild.

    I recall when even thinnet (aka 10BASE2, that thin coax stuff) was considered hi-tech and high-speed – well, it was compared to the 230kbps Apple LocalTalk I was used to. I also recall the fun of troubleshooting a bad connection <somewhere> in the middle of it!

    Like earlier improvements, some will need the speed and be prepared to pay for it, others will wait until prices come down.

  14. /dev/rant
    WTF?

    "Do we really want 100Gig Ethernet?"

    Hell, yeah!

    What sort of silly question is that?

  15. GCom
    Unhappy

    It's all about the latency

    Dealing with a lot of HPC workloads, it's all about latency for our apps. The business case for 10Gb was easy, as it offered a massive step forward in latency reduction thanks to the increased clock rate. All the implementations of 40 and 100Gb I've seen to date are nothing more than multiple 10Gb channels bonded together, much like DWDM, and therefore don't achieve any lower latency. While I'm sure there will be latency gains, mainly in the protocol signalling, it certainly won't be the orders-of-magnitude gains we saw from 1->10Gb, so any business case will have to revolve around high-bandwidth areas such as uplinks and storage, not the server level.
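
    To put rough numbers on the clock-rate point: with LAG-style bonding, each frame still travels down a single member link, so its serialization delay is that of the member, not the aggregate. A minimal sketch (frame size illustrative):

    ```python
    # Serialization delay: time to clock one frame onto the wire at a given rate.
    # Under LAG-style bonding a frame rides one 10Gb/s member, so this component
    # of latency stays at the 10Gb figure even on a "100Gb" bundle.
    FRAME_BITS = 1500 * 8  # full-size Ethernet payload, illustrative

    for rate_gbps in (1, 10, 100):
        delay_ns = FRAME_BITS / (rate_gbps * 1e9) * 1e9
        print(f"{rate_gbps:>3}Gb/s per frame: {delay_ns:>8,.1f} ns to serialize")
    ```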

    1. Michael H.F. Wilkinson Silver badge

      Spot on!

      Latency kills distributed computing far more than pure bandwidth.

      Still remember coax though, and 300 baud connections to the Cyber 170/760 (AARGH)

      1. Dazed and Confused

        Bandwidth V latency

        Quote from one of the denizens of comp.arch many eons ago.

        Bandwidth is easy, you can always buy more width.

        Latency is the problem, you can't bribe God.

  16. Anonymous Coward
    Anonymous Coward

    GigE may be standard on the desktop...

    ... that doesn't mean many really need it. Plenty of deployed workgroup switches and home router thingies and such aren't GigE and won't be for a while yet. That doesn't mean it's not convenient to have anyway and a boon for those who really do need it, though.

    And, of course, often enough the upgrades are nice for the latency. Though HPC would sensibly use something else, preferably something engineered with fewer leaky buckets and more latency and arrival guarantees.

    There's much more at work than 'need'. There's quite a lot of 'want'*, and beyond that there are economies of scale, marketing, 'what everyone else does', and suchlike.

    So of course it's going to happen. This isn't universally positive, though. It's all too easy to forget that an AJAX-heavy website built on a local server won't perform quite that nicely over the proverbial wet-string links – links that make 10Mbit Ethernet run over barbed wire look fast, loss-free, and generally snappy – just as browsers get universally slower with every release unless measured on the very latest hardware, which is what happens so that speed increases can be chalked up. This reflex of throwing more hardware at the problem is, I'd argue, ultimately a detriment to innovation. So some testing on slower hardware is in fact quite a positive thing all around.

    * Had a guy demand I upgrade the entire office network to GigE because his shiny brand new laptop had GigE. This was 2004 or so. He didn't like my reply that I'd love to except that I'd need some 30k-odd EUR budget, not counting the wiring, to make it happen. You know the drill; 'not a team player' and no shiny new laptop-y bribes either. Yes, that was one of the co-founders. Greedy selfish bugger.

  17. Nate Amsden

    all about the backhaul

    If you have a 48-port 10GigE switch and want to uplink it to an upstream device like a router, 100GbE can make sense if you're really looking to avoid oversubscription. Though I suspect the number of folks that need that is practically zero.

    100GbE is, and for probably at least 15 years will be, all about the backhaul, whether that's between data center switches or between routers in (and between) service providers. Fewer links between devices means lower cost, especially when current standards allow "only" 8 links to be bonded together. (MLAG has a way around this, I hear, though I suspect it's not widely deployed in service providers.)
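
    The oversubscription arithmetic for that 48-port example, as a quick sketch:

    ```python
    # Oversubscription: total edge bandwidth vs uplink bandwidth.
    edge_total_gbps = 48 * 10  # 48 ports of 10GigE = 480Gb/s of potential edge traffic

    uplinks = [("8 x 10GbE LAG", 80), ("1 x 100GbE", 100), ("5 x 100GbE", 500)]
    for desc, uplink_gbps in uplinks:
        print(f"{desc}: {edge_total_gbps / uplink_gbps:.1f}:1 oversubscription")
    # 6.0:1, 4.8:1 and ~1.0:1 respectively – only the last is truly non-blocking.
    ```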

    1. Lance 3

      More than just the number being bonded

      Most equipment also has a limit on the number of LAGs you can have. So while you are still limited to 8 links per LAG, you can run out of bonding groups/LAGs before you run out of ports. MLAG doesn't resolve this issue. LAGs don't help single-stream performance either.
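
      On that last point, here's a minimal sketch of the flow-hashing idea most LAG implementations use (the 5-tuple inputs are simplified for illustration):

      ```python
      # Hash-based member selection: every packet of a given flow hashes to the
      # same LAG member, so one stream can never exceed a single member's rate.
      import zlib

      def pick_member(src_ip, dst_ip, src_port, dst_port, n_members):
          key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
          return zlib.crc32(key) % n_members

      # Every packet of this flow lands on the same member of an 8-link LAG:
      print(pick_member("10.0.0.1", "10.0.0.2", 49152, 445, 8))
      ```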

  18. Anonymous Coward
    Joke

    <title/>

    Right. I had to get up in the morning at ten o'clock at night half an hour before I went to bed, drink a cup of sulphuric acid, work twenty-nine hours a day down mill, and pay mill owner for permission to come to work, and when we got home, our Dad and our mother would kill us and dance about on our graves singing Hallelujah.

    And you try and tell the young people of today that ..... they won't believe you.

  19. jason 7

    Ahem.......

    ....can they get the 10/100 and 1Gb standards working properly first?

    1. Anonymous Coward
      Anonymous Coward

      to make the existing standards work...

      stop switching off auto-negotiation unless your hardware is *really old*, with duff NICs that can't do it properly.

      Some modern 100Mb/s kit gets confused if it doesn't see the auto-neg signals, and gigabit requires it anyway, so leave it alone :)

      1. Fuzzysteve
        Angel

        not confused, exactly...

        It's not getting negotiation, so it defaults to the lowest supported setting – 10 half – which will collision like crazy when the other side is set to 100 full.

    2. Anonymous Coward
      FAIL

      They have...

      I assume you haven't though!

  20. Anonymous Coward
    Anonymous Coward

    Not too young to remember.

    I also remember that around the time 10 meg Ethernet became available, a 10 megabyte hard disk drive was about all you could get. So, to keep up, it isn't 100 gig we want, it's several tera!

    1. Steven Jones

      Slightly misleading...

      If you compare the sequential throughput of a modern 1TB drive with that of a 10MB one, it has only gone up about 300-fold, rather less than Ethernet throughput has in the same period. The reason is quite simple: disk capacity goes up as the square of linear bit density (so capacity has gone up 100,000-fold) whilst sequential throughput goes up only linearly. In fact it's misleading to claim that Ethernet is only 1,000-fold faster – that 10Mb used to be delivered in a single collision domain (and if there were many hosts it was impossible to get anything near 10Mb). A modern LAN is implemented with non-blocking switches and may contain many hundreds or thousands of them. Indeed there are enterprise switches rated at several terabits – just not all down the same wire at the same time. So (total) network capacity has gone up at a much higher rate than the data rate of a single Ethernet interface.
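
      The scaling argument in numbers, as a small sketch (the 0.5MB/s baseline for a 10MB drive is an assumption for illustration):

      ```python
      # Capacity scales as (linear bit density)^2; sequential throughput scales linearly.
      import math

      capacity_ratio = 1e12 / 10e6                  # 1TB vs 10MB: 100,000x
      throughput_ratio = math.sqrt(capacity_ratio)  # ~316x, matching the ~300-fold claim

      baseline_mb_s = 0.5                           # assumed rate of a 10MB-era drive
      print(f"capacity up {capacity_ratio:,.0f}x, throughput up ~{throughput_ratio:.0f}x")
      print(f"implied modern sequential rate: ~{baseline_mb_s * throughput_ratio:.0f}MB/s")
      ```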

      1. Anonymous Coward
        Anonymous Coward

        somewhat.

        re: "(and if there were many hosts it was impossible to get anything near 10Mb)"

        Nope – if you ran a modified Ethernet driver (which cut the preamble in half) you could ensure that yours was the only traffic on the network*.

        * until you get caught.

  21. Big_Boomer Silver badge
    Thumb Up

    Of course we need 100Gb/s Ethernet

    How else will we be able to download SUHD (Super-Ultra-High-Definition) 3D porn movies to store on our 1000Tb USB12 external drive, to later view on our 200" LED wipe clean wallpaper screens.

    Why only 200" screen you ask? Well, that's the size of the biggest wall in my living room. ;-)

  22. Jeff 11

    InfiniBand

    4x link-aggregated InfiniBand is effectively 100Gbit already. Yes, it's 4x the ports and cables instead of 1, but when you're talking about backbone infrastructure, the cost of doing this is far less than bonding 10GbE links. It's lower latency too, if you sacrifice IP on the backbone – and that could be a very desirable thing for VM I/O infrastructure.

  23. Dropper
    Go

    Too fast?

    Seems too similar to the statement "no one will ever need more than a megabyte of RAM". The original PC design, which crippled all subsequent PCs until WinNT was released, only had 1 megabyte of RAM available for use, nearly half of which was used by the OS. 99% of mainstream users didn't get the benefit of having more than about 520-540K until Windows 98 was put to bed, when XP took over from Windows 2000. All because an idiot at IBM decided no one would ever need more than a megabyte of RAM. The fact that even the oldest 1985 Amigas had a more advanced OS than your average pre-WinXP PC always made me smile.

    So do we need more than 100 Gigabit network speed? How long does it take to download a 1080p movie to your hard disk from any kind of server? If it's over a minute, we need faster networking. The movie thing might not be relevant to most businesses, but there are two points to be made here. First, data transfers of that size are common in the corporate world, even if it isn't always a pirated Blu-ray being moved to a NetApp. Second, if the business world doesn't get 1080p-in-a-minute networking, then home users are never going to get that speed either. And when we've signed up to watch the next generation of movies broadcast to our 60" TVs at 2-4 times the current HD resolution, at the same time the movie is premiering in cinemas... things will probably fall a little flat when the word "buffering" appears on our screens every minute or so.
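
    For scale, the movie-in-a-minute arithmetic (a quick sketch; the 10GB file size is an assumed typical figure, not a fixed one):

    ```python
    # Sustained link rate needed to move a movie-sized file in one minute.
    movie_gigabytes = 10  # assumed 1080p file size, illustrative
    seconds = 60

    required_gbps = movie_gigabytes * 8 / seconds
    print(f"{movie_gigabytes}GB in {seconds}s needs ~{required_gbps:.2f}Gb/s sustained")
    # ~1.33Gb/s: beyond GigE, easy for 10GbE, a rounding error for 100GbE.
    ```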

    1. M Gale

      Not 1MB

      640KB. Or KiB, in new money. 10 * 64KiB pages if you like.

      Later hacks, both in software and in the CPU, allowed you to address beyond 1MiB through the himem.sys driver, and still later hacks such as EMS and XMS (and the lovely DOS/4GW used by Doom and others) allowed vastly more RAM. However, the 640KiB limit was still there, and any additional RAM was effectively got at through hacks.

      Anyway, XP didn't take over from 2000. Both are based on the NT5 kernel, and if I remember rightly 2000 was the "professional" variant used in the office, whereas XP was aimed more at home users. Funny that most of my games ran on 2000 better than on XP though.

      Mucking about with config.sys and autoexec.bat and emm386.exe switches and loadhigh statements to get 570,000 bytes of conventional RAM and 4MiB of EMS so I could run Second Reality with a Sound Blaster... I feel old. Damn you.

      1. Anonymous Coward
        Anonymous Coward

        You made me type this

        2000 Pro was XP's failure to be ready.

        Home users got Windows ME instead.

        On the topic of networking, home users then got XP Home which, in Microsoft's greatest ironic act, brought NTFS security to the masses while simultaneously removing the ability to password-protect network shares*.

        * In the by-the-average-user-through-the-GUI sense.

    2. John Angelico
      Boffin

      Historically inaccurate...

      You said:

      "The original PC design, which crippled all subsequent PCs until WinNT was released, only had 1 megabyte of RAM available for use,..."

      which is both hystorically(sic) and hysterically inaccurate.

      You conflate software and hardware, and you forget OS/2 and other OSs which also used a flat memory model (Linux, MacOS on the Motorola 68000, and mainframes too).

      The hardware for a flat memory model in the PC era was available from the 386 chip onwards (realistically) although almost from the 186 in fact. It was only the 8086 which had "the problem".
