Do we really want 100Gig Ethernet?

Remember when Ethernet networks were invented? Probably not: it was over 30 years ago, after all, and you are probably too young. Even if you are not, you have probably dismissed from memory the woefully inadequate 10 meg of bandwidth on offer at the time – less than you get with most broadband services these days. Still, the …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Anonymous Coward

    This old chestnut

    Run enough VMs and you run into I/O bottlenecks. Install hybrid SSD/HDD setups to increase I/O tenfold and you run into network bottlenecks. Increase network performance and you run into processing-speed bottlenecks. Increase processing speed and you run into I/O bottlenecks, and so on and so on.

    There will always be a need. If not immediately, then in the very near future.

    And remember, 640KB is enough for everyone.

    1. Ammaross Danan
      FAIL

      Numpty

      You forget FCoE or NAS/SAN solutions that run over Ethernet. Simply running no-local-storage servers with 128GB+ of RAM and hosting 10-20+ VMs is enough to saturate 1Gb, and likely 10Gb, Ethernet.

      That and you still believe BG actually said the 640KB quote I assume?

      1. mfraz

        Maybe not

        Maybe he didn't say that, but he did ask what a network was when being shown computers connected using Econet at Acorn's HQ in the mid 80s.

  2. Piro Silver badge
    Pint

    Well, of course we want it

    It's not for the home user or a single-user PC; honestly, standard Gigabit Ethernet is about enough to saturate a single drive. It's for large machines running a lot of virtual machines, or backbones between server rooms.

    It means less cabling in the long run, and saving copper has to be a good thing.

    1. Anonymous Coward
      Anonymous Coward

      At my old place...

      At my old place, I would have loved to have my Networker storage nodes/Netbackup media servers attached to 100GigE, that way I wouldn't need to have tape drives attached to individual big servers or mount servers. Mount all your file system snapshots up on a single 100GigE attached machine and pipe it all to your general media servers, while using a standard IP client. Sorted.

  3. Matthew 17

    you need that sort of bandwidth for the Mac App Store

    having to get all your software from the 'cloud' is going to be tedious with anything slower.

  4. Dazed and Confused
    Happy

    Answers

    Do I remember, Yes.

    10Mb faster than average home broadband? I wish, actually I'm visiting South Korea at the moment and my friend here has 100Mb, but the contention ratio is so bad he rarely gets 2. So perhaps my 4ish isn't so bad.

    Do we want 100Gb? Yes

    When? NOW, if not last week. And run it to my home please, at reasonable contention ratios so I actually get close to it at times. Fibre all the way would be nice, to drop a whole chunk of the latency I see in my broadband connection: the exchange in 3ms, not 20.

    1. Anonymous Coward
      Anonymous Coward

      Latency

      Fiber actually has higher latency than copper. Fiber is just able to go farther distances and have faster speeds.

      To use easy time numbers and not indicative of actually being able to go that distance over the media, this is how it breaks down. In 1 ms, a signal will travel 160km over copper and 100km over fiber. With all things being equal, fiber to your house would not help latency times.

      1. Anonymous Coward
        Mushroom

        Not so fast there, Skippy...

        Your figures are wrong.

        The difference is negligible and is likely to be caught up in the 'typical' figures in the datasheets.

        Fibre typically has an index of refraction around 1.5 which translates to a velocity factor of approximately 0.66 and CAT5e velocity factor is typically 0.6-0.75 so near as damn it identical in terms of raw speed.

        I'd suggest that any major difference in speed between fibre and copper is caused by the transmitter/receiver pair at either end of the fibre that do the conversion from electrical signal to light and vice versa. If you're really seeing the differences you quote on your network, then I suggest you start asking some sharp questions of your vendor.
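        The velocity-factor argument above can be checked with a quick sketch (figures assumed from the comment: fibre core refractive index ~1.5, CAT5e velocity factor taken as 0.66, mid-range of the quoted 0.6-0.75):

```python
# Back-of-the-envelope propagation-delay comparison, fibre vs. CAT5e copper.
# Assumed figures: fibre refractive index ~1.5 (velocity factor = 1/n),
# CAT5e velocity factor ~0.66.

C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def delay_us_per_km(velocity_factor: float) -> float:
    """One-way propagation delay in microseconds per kilometre of cable."""
    return 1e6 / (C_KM_PER_S * velocity_factor)

fibre_delay = delay_us_per_km(1 / 1.5)  # velocity factor ~0.667
copper_delay = delay_us_per_km(0.66)    # typical CAT5e

print(f"fibre:  {fibre_delay:.3f} us/km")   # ~5.003 us/km
print(f"copper: {copper_delay:.3f} us/km")  # ~5.054 us/km
# The gap is ~0.05 us per km -- lost in the noise next to serialisation
# and transceiver delays, which is the comment's point.
```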

  5. Jon Press

    10,000 times faster...

    is about the same as original Ethernet was compared with a typical modem link of the time - 1200baud (yes, I was there, drilling holes in thick yellow coax to install the Ethernet taps). It wasn't used to replace modem links, of course (though it did eliminate a lot of internal RS232 wiring), but to open up a whole new range of distributed applications that weren't hitherto possible. Did we need it - no; could we have done without it a year later - no.

  6. Andy Fletcher
    Thumb Up

    Thanks Reg

    For making me feel like a dinosaur just because I remember things like 10Mb.

    Do we need 100Gb? I remember playing Quake on a 56Kbps modem without lag. Now I have Megabits, I lag like a bitch. I'm not sure more speed is a better answer than more conscientious use of what we've got already.

  7. Leeroy
    Thumb Up

    Faster

    In the immortal words of Daft Punk.

    Work it harder, make it better

    Do it faster, makes us stronger

    More than ever, hour after hour

    Our work is never over

  8. JasonW
    Boffin

    ... the woefully inadequate 10 meg of bandwidth on offer at the time ...

    Yes, but also at the time there weren't any widely available or, more importantly, affordable servers capable of handling more than about 3MBps throughput without slowing the rest of the server down considerably due to servicing the interrupts on the NIC (remember, they were pretty much all ISA NICs back then, running at 6 or 8MHz).

    1. Al 4

      Bus speed versus processor speed

      I remember when 12MHz motherboards became available and I was asked why I wanted a 10MHz motherboard for the server I was putting together rather than a 12MHz one. My response, which puzzled them, was that the 12MHz motherboard only had a 4MHz bus speed while the 10MHz one had 5MHz. I/O speed was the most important thing for a server, along with memory for caching, so the slower processor actually provided higher throughput. Since networks are shared on the server side, the higher the bandwidth, the better individual throughput will be when accessing the server; then the only bottleneck will be how fast the server can service the requests. It will be interesting to see what kind of NICs and motherboards will be created to deal with the higher throughput needs of a faster network.

    2. Giles Jones Gold badge

      Coax

      Not to mention coax with BNC in many cases. Often in a single bus so only one machine could talk at any one time.

  9. Anonymous Coward
    Anonymous Coward

    MLT

    Our butt has been saved by MLT for a while. It's easier to make better use of existing fibre than to invest in newer, faster single GBICs. Digging up the tarmac isn't cheap.

    Most of our links to external buildings are running 2 x 1Gb optics in a MLT (different routes for redundancy) and these are happily supporting offices of around forty-ish people on the end of each MLT.

    In fact, thinking about this, our main office switch stacks have about 120 people/devices on each edge stack and those are linked via the same two 1Gb optics to the core switch. And we have bandwidth to spare. Those lines carry VoIP as well as data.

    For normal office duties, we're perfectly happy at these rates. As far as the servers go, the Citrix farm doesn't strain the network outside itself, so long as it has adequate access to the core storage array.

    If we started dealing in major video enterprises, then I think we might have to look at this again, but other than that; we're sound and not envisaging needing anything more for a good few years in to the future.

    1. Anonymous Coward
      Anonymous Coward

      Hmm...

      MLT? Nah, DWDM - you can stick multiple services of differing types down it, no bother. FC, Backup LAN, Prod LAN, DMZ, iSCSI, FCoIP, all totally separated - not just virtually - down a single dark fibre.

  10. lurker

    The horror

    Brings to mind the mess of BNC network cable, with T-pieces, terminators and coax. Token ring networking! Ethernet and RJ45 were most definitely an improvement :).

    1. Dances With Sheep
      Unhappy

      Network coax?

      Bastard!

      I'd banished the nightmare of coax to the back of my mind..... alongside memories of Highlander II and other things best forgotten about the early 90's

      1. Steve X

        co-ax wasn't so bad

        Until some b*stard dropped some 93 ohm Arcnet co-ax into the bin for the 50-ohm Ethernet. Boy did that take a while to find. Everything worked just fine until you hit some threshold, then collisions went through the roof & the network dropped to 1200bit/s.

        This was mid-80's, not even into the 90's. When you could statmux 4 TTY connections on a 9600 bit/s private wire & think you were getting something good...

        Where's the "I'm feeling old" icon?

        1. Mike Timbers
          Pint

          That happened to me too!

          Yes, I remember early Ethernet, when TCP/IP was an add-on to Xenix and Excelan cards cost £800. Arcnet was great but yes, the terminators being different was a royal PITA.

          IPX was easy until I got a batch of cheap NICs where two had the same MAC address.

          And Thick-net was fine once drop-cables became cheap enough that my boss didn't force me to make them by hand with soldering iron in hand!

    2. Giles Jones Gold badge

      BNC had advantages

      I did run a network lead out into the garage which was some distance away using TV aerial cable and that worked. Probably due to having very tolerant NICs.

      1. Goat Jam
        Mushroom

        OMG

        Fucking coax.

        It wasn't so much the slow 10Mb speeds, not even the ever-present network contention issues; it was the numpty who decided to take their computer home over the weekend and disconnected both sides of the T-piece, thereby bringing the entire segment down.

        Cue the daily tour of offices to determine who was this week's failpoint and consequent laughing stock of the office.

        Don't get me started on the cases where a cable segment was not crimped properly, or had been damaged by a ham fisted user once too often, causing LAN segments to disappear and reappear with no apparent logic until you somehow managed to track it down to the office in marketing with that guy who is always fiddling with his PC.

        Oh, and yes, we do want 100G ethernet. Why the hell not?

        P.S. 10Mbit Internet? I fucking wish. I'm stuck on 1.5Mbit with no relief on the horizon.

        Not to mention no access to naked DSL.

        Apparently I'm not allowed to have it. It's "not supported by my exchange" or some such crap.

        So I'm stuck paying $80 bucks a month for 1.5 Mbit, @ 100+100 plus another $40 bucks a month for fricking "line rental" for a phone line that is not used.

        And that is the cheapest plan I've found.

        Do I sound angry? Damn right I am.

        Quite frankly, I'm sick of hearing about how "most people get 10Mbit internet" because where I live that is a load of crap.

        I'm not in the middle of nowhere, I'm 40 minutes from work and that is in the the city and I don't get a decent option.

        I'm stuck with an expensive yet crappy internet connection that would embarrass a North Korean *

        In fact I don't think a single Australian ISP offers bandwidth at that level to non commercial customers. **

        * Well actually, probably not, they are more concerned with finding their next meal I'm sure but I'm still pretty pissed nonetheless.

        ** except maybe the tiny number of people who have thus far received access to the much maligned NBN trial network.

  11. John Tserkezis

    I remember...

    ...when 10mb coax was all the rage. We had four segments like that because we couldn't meet the 100m length limit otherwise. And some guy would kick the cable under his desk twice a day and bring the network down every bloody time.

    Ahh, for the good ol' days. I'm glad they're over.

    Do we need more speed? Heck yeah. Not for the prices they're asking though. Would stress the budget, and the ROI would be difficult to justify to the beancounters who are going to want to know how much money will be saved now that everyone isn't going to be twiddling their thumbs as long....

    1. Anonymous Coward
      Anonymous Coward

      100m?

      Surely you mean 185m for CO-AX and had you never heard of repeaters?

      Yes, yes, I'm old and sad enough to remember.

  12. Anonymous Coward
    Anonymous Coward

    @The horror

    "Brings to mind the mess of BNC network cable, with T-pieces, terminators and coax"

    Stop muttering about all that new fangled stuff ... as someone else has already said, proper ethernet was yellow cables almost as thick as your arm that you had to drill vampire taps into to connect!

    1. JohnG

      Vampire taps

      "... that you had to drill vampire taps into to connect!"

        In a recent rummage in my cellar, I stumbled across a hand tool for drilling the holes needed for vampire taps, still in its box.

      1. Anonymous Coward
        Anonymous Coward

        In its box?

        Best place for it. Evil fkin things.

  13. SImon Hobson Bronze badge

    Ahh, the memories ...

    Like Andy Fletcher, thanks for reminding me of the passage of time !

    Putting on my pedants hat, wasn't the original Ethernet, as developed at PARC, only 4mbps ? Though I guess that probably never made it into the wild.

    I recall when even thinnet (aka 10base2, that thin coax stuff) was considered hi-tech and high speed - well, it was compared to the 230kbps Apple LocalTalk I was used to. I also recall the fun of troubleshooting a bad connection <somewhere> in the middle of it!

    Like earlier improvements, some will need the speed and be prepared to pay for it, others will wait until prices come down.

  14. /dev/rant
    WTF?

    "Do we really want 100Gig Ethernet?"

    Hell, yeah!

    What sort of silly question is that?

  15. GCom
    Unhappy

    It's all about the latency

    Dealing with a lot of HPC workloads, it's all about the latency for our apps. The business case for 10Gb was easy, as it offered a massive step forward in latency reduction thanks to the increased clock rate. All the implementations of 40 and 100Gb I've seen to date are nothing more than multiple 10Gb channels bonded together, much like DWDM, therefore not achieving any lower latency. While I'm sure there will be latency gains, mainly in the protocol signalling, it certainly won't be the orders-of-magnitude gains we saw from 1->10Gb, so any business case will have to revolve around high-bandwidth areas such as uplinks and storage, not the server level.

    1. Michael H.F. Wilkinson Silver badge

      Spot on!

      Latency kills distributed computing far more than pure bandwidth.

      Still remember coax though, and 300 baud connections to the Cyber 170/760 (AARGH)

      1. Dazed and Confused

        Bandwidth V latency

        Quote from one of the denizens of comp.arch many eons ago.

        Bandwidth is easy, you can always buy more width.

        Latency is the problem, you can't bribe God.

  16. Anonymous Coward
    Anonymous Coward

    GigE may be standard on the desktop...

    ... that doesn't mean many really need it. Plenty of deployed workgroup switches and home router thingies and such aren't GigE and won't be for a while yet. That doesn't mean it's not convenient to have anyway and a boon for those who really do need it, though.

    And, of course, often enough the upgrades are nice for the latency. Though HPC would sensibly use something else, preferably something engineered with fewer leaky buckets and more latency and arrival guarantees.

    There's much more at work than 'need'. There's quite a lot of 'want'* and beyond that there's scale economies, marketing, 'what everyone else does', and suchlike.

    So of course it's going to happen. This isn't universally positive. It's all too easy to forget that an AJAX-heavy website built on a local server won't perform quite so nicely over the proverbial wet-string links -- links that make 10Mbit Ethernet run over barbed wire look fast, loss-free, and generally snappy -- just as browsers get universally slower with every release unless measured on the very latest hardware, which is what happens so speed increases can be chalked up. This reflex of throwing more hardware at the problem is, I would argue, ultimately a detriment to innovation. So some testing on slower hardware is in fact quite a positive thing all around.

    * Had a guy demand I upgrade the entire office network to GigE because his shiny brand new laptop had GigE. This was 2004 or so. He didn't like my reply that I'd love to except that I'd need some 30k-odd EUR budget, not counting the wiring, to make it happen. You know the drill; 'not a team player' and no shiny new laptop-y bribes either. Yes, that was one of the co-founders. Greedy selfish bugger.

  17. Nate Amsden

    all about the back haul

    If you have a 48-port 10GigE switch and want to uplink it to an upstream device like a router, 100GbE can make sense if you're really looking at not oversubscribing. Though the number of folks that will need that is practically zero, I suspect.

    100GbE is, and will be for probably at least 15 years, all about the backhaul, whether it's between data centre switches or between routers in (or between) service providers. Fewer links between devices means less cost, especially when modern standards allow "only" 8 links to be bonded together. (MLAG has a way around this I hear, though I suspect it's not widely deployed in service providers.)
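    The back-haul arithmetic for the 48-port example above can be sketched quickly (all figures assumed, including the 8-link bonding cap mentioned):

```python
import math

# How many uplinks does a fully loaded 48-port 10GbE edge switch need
# to avoid oversubscription? Assumed figures from the example above.

edge_ports = 48
edge_speed_gb = 10
downstream_gb = edge_ports * edge_speed_gb           # 480 Gb/s of edge capacity

uplinks_100g = math.ceil(downstream_gb / 100)        # single 100GbE links
lag_cap_gb = 8 * 10                                  # an 8-link 10GbE LAG tops out at 80 Gb/s
uplinks_10g_lags = math.ceil(downstream_gb / lag_cap_gb)

print(downstream_gb)     # 480
print(uplinks_100g)      # 5 -> five 100GbE links for a 1:1 fabric
print(uplinks_10g_lags)  # 6 -> six full 8-link LAGs, i.e. 48 more 10GbE ports
```

With bonded 10GbE you burn as many uplink ports as the switch has edge ports; hence the appeal of 100GbE for the backhaul.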

    1. Lance 3

      More than just the number being bonded

      Most equipment also has a limit to the number of LAGs you can have. So while you are still limited to 8 links per LAG, you can run out of bonding groups/LAGs before you run out of ports. MLAG doesn't resolve this issue. LAGs don't help single-stream performance either.

  18. Anonymous Coward
    Joke

    <title/>

    Right. I had to get up in the morning at ten o'clock at night half an hour before I went to bed, drink a cup of sulphuric acid, work twenty-nine hours a day down mill, and pay mill owner for permission to come to work, and when we got home, our Dad and our mother would kill us and dance about on our graves singing Hallelujah.

    And you try and tell the young people of today that ..... they won't believe you.

  19. jason 7

    Ahem.......

    ....can they get the 10/100 and 1Gb standards working first?

    1. Anonymous Coward
      Anonymous Coward

      to make the existing standards work...

      stop switching off auto-negotiation unless your hardware is *really old* with duff NICs that can't do it properly.

      Some modern 100Mb/s kit gets confused if it doesn't see the auto-neg signals and gig requires it anyway so leave it alone :)

      1. Fuzzysteve
        Angel

        not confused, exactly...

        It's not getting negotiation, so it defaults to the lowest supported. 10 half. Which will collision like crazy when the other side is set to 100 full.

    2. Anonymous Coward
      FAIL

      They have...

      I assume you haven't though!

  20. Anonymous Coward
    Anonymous Coward

    Not too young to remember.

    I also remember that around the time 10 meg ethernet was available that a 10 megabyte hard disk drive was about all you could get. So, to keep up it isn't 100 gig we want, it's several tera!

    1. Steven Jones

      Slightly misleading...

      If you compare the sequential throughput of a modern 1TB drive with that of a 10MB one, it has only gone up about 300-fold, rather less than Ethernet throughput has in the same period. The reason for this is quite simple: disk capacity goes up as the square of linear bit density (so capacity has gone up 100,000-fold) whilst sequential throughput goes up linearly. In fact it's misleading to claim that Ethernet is only 1,000-fold faster: that 10Mb used to be delivered in a single collision domain (and if there were many hosts it was impossible to get anything near 10Mb). A modern LAN is implemented with non-blocking switches containing many hundreds or thousands of ports. Indeed there are enterprise switches rated at several terabits, just not all down the same wire at the same time. So total network capacity has gone up at a much higher rate than the data rate of a single Ethernet interface.
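      The square-law scaling argument above works out in numbers (drive figures assumed from the comment):

```python
import math

# Disk scaling sketch: capacity grows as the square of linear bit density,
# sequential throughput only linearly with it.

old_capacity_mb = 10           # a 10MB drive from the 10Mb Ethernet era
new_capacity_mb = 1_000_000    # a modern 1TB drive

capacity_growth = new_capacity_mb / old_capacity_mb   # 100,000x
density_growth = math.sqrt(capacity_growth)           # ~316x linear bit density
throughput_growth = density_growth                    # throughput tracks density

print(f"capacity:   {capacity_growth:,.0f}x")
print(f"throughput: ~{throughput_growth:.0f}x")  # in line with the ~300-fold quoted
```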

      1. Anonymous Coward
        Anonymous Coward

        somewhat.

        re: "(and if there were many hosts it was impossible to get anything near 10Mb)"

        Nope, if you ran a modified ethernet driver (which cut the preamble in half) you could ensure that yours was the only traffic on the network*.

        * until you get caught.

  21. Big_Boomer Silver badge
    Thumb Up

    Of course we need 100Gb/s Ethernet

    How else will we be able to download SUHD (Super-Ultra-High-Definition) 3D porn movies to store on our 1000Tb USB12 external drive, to later view on our 200" LED wipe clean wallpaper screens.

    Why only 200" screen you ask? Well, that's the size of the biggest wall in my living room. ;-)

  22. Jeff 11

    Infiniband

    4x link-aggregated Infiniband is effectively 100Gbit already. Yes, it's 4x the ports and cables instead of 1, but when you're talking about backbone infrastructure the cost of doing this is far less than bonding 10GbE links. It's lower latency too, if you sacrifice IP on the backbone - and that could be a very desirable thing for a VM I/O infrastructure.

  23. Dropper
    Go

    Too fast?

    Seems too similar to the statement "No one will ever need more than a megabyte of RAM". The original PC design, which crippled all subsequent PCs until WinNT was released, only had 1 megabyte of RAM available for use, nearly half of which was used by the OS. 99% of mainstream users didn't get the benefit of having more than about 520-540K until Windows 98 was put to bed when XP took over from Windows 2000. All because an idiot at IBM decided no one would ever need more than a megabyte of RAM. The fact that even the oldest 1985 Amigas had a more advanced OS than your average pre-WinXP PC always made me smile.

    So do we need more than 100 Gigabit network speed? How long does it take to download a 1080P movie to your hard disk from any kind of server? If it's over a minute, we need faster networking. The movie thing might not be relevant to most businesses, but there's two points to be made here. First, data transfers of that size are common in the corporate world, even if it isn't always a pirated blu-ray being moved to a netapp. The second is that if the business world doesn't get 1080P-in-a-minute capable networking, then home users are never going to get that speed either. And when we've signed up to watch the next generation of movie broadcast to our 60" TVs at 2-4 times the current HD resolution, at the same time the movie is premiering in cinemas.. things will probably fall a little flat when the word "buffering" appears on our screens every minute or so.

    1. M Gale

      Not 1MB

      640KB. Or KiB, in new money. 10 * 64KiB pages if you like.

      Later hacks both in software and in the CPU allowed you to address memory above 1MiB through the himem.sys driver, and still later hacks such as EMS and XMS (and the lovely DOS4GW used by Doom and others) allowed vastly more RAM; however, the 640KiB limit was still there and any more RAM available was effectively gotten at through hacks.

      Anyway, XP didn't take over from 2000. Both are based on the NT5 kernel, and if I remember rightly 2000 was the "professional" variant used in the office, whereas XP was aimed more at home users. Funny that most of my games ran on 2000 better than on XP though.

      Mucking about with config.sys and autoexec.bat and emm386.exe switches and loadhigh statements to get 570,000 bytes of conventional RAM and 4MiB of ems so I could run Second Reality with a soundblaster... I feel old. Damn you.

      1. Anonymous Coward
        Anonymous Coward

        You made me type this

        2000 Pro was XP's failure to be ready.

        Home users got Windows ME instead.

        On the topic of networking, home users then got XP Home, which, in Microsoft's greatest ironic act, brought NTFS security to the masses while simultaneously removing the ability to password-protect Network shares*.

        * In the by-the-average-user-through-the-GUI sense.

    2. John Angelico
      Boffin

      Historically inaccurate...

      You said:

      "The original PC design, which crippled all subsequent PCs until WinNT was released, only had 1 megabyte of RAM available for use,..."

      which is both hystorically(sic) and hysterically inaccurate.

      You conflate software and hardware, and you forget OS/2, and other OSs which also used a flat memory model (Linux, MacOS on the Motorola 68000 and mainframes too).

      The hardware for a flat memory model in the PC era was available from the 386 chip onwards (realistically) although almost from the 186 in fact. It was only the 8086 which had "the problem".

    3. Pseu Donyme

      Actually

      Windows 3.0 lifted the 1M address-space limit with its support of 80286 protected mode. This, of course, still had the 64k segment-size limit (offsets into segments still being 16 bits); it was also a kludge which relied on switching the processor to real mode for DOS services (disk and network). Windows 95 had the Win32 API and a linear memory space for applications, while keeping the aforementioned kludge. XP and Win2000 were part of the NT lineage, which was always natively 32-bit with a linear memory space. The sad thing was that DOS / Win 3.x / 9x held back progress for no good reason and, in fact, continues to do so through Microsoft's de facto monopoly, which they cemented.

    4. Anonymous Coward
      Flame

      Title

      All because an idiot at IBM decided no one would ever need more than a megabyte of RAM.

      Wut?

      IBM chose the 8088 because they had a history with Intel and had manufacturing rights for 8086-and-derivative chips before they even thought about desktop computing machines. They had the benefit of using commodity parts while not being beholden to a 3rd party supply chain. Intel, in turn, had furnished the 8086 and its derivatives with a 20-bit address bus for no other reason than a bigger bus would have cost more than anyone would have been prepared to pay at the time.

      Assuming you're talking about MS/DOS, it, at least initially, ran in that bottom 640kb. The upper memory area was reserved for firmware and memory-mapped i/o. Whether the decision to trade off really fast i/o for some RAM renders the chap responsible an idiot is a matter of some debate that I rather suspect would go over your head. Of course it turned out that not all of that firmware showed up, so eventually emm386 came along to allow you to shift some of the o/s and supporting programs into that high memory. And then when more than about 5 people could afford more than a megabyte of RAM, software solutions to utilise it started to show up.

      I'm not sure how having paging capabilities that were available as add-ons to MS-DOS at the time the Amiga's first firmware hit the shelves makes AmigaOS 'superior' to (by your reckoning) Win2K. Commodore's approach was more integrated and consistent, I suppose, for the reason that at that time the Amiga was a closed-tight platform. That's also the reason why it's long since a stone-dead platform, so it depends on your criteria for evaluating superiority, I suppose.

      Now.

      Where does this 520-540Kb come from?

      And if XP had replaced 2000, how does XP replacing 2000 put 98 to bed?

      ikr? tl;dr. No matter, that's 10 minutes' work solidly avoided.

  24. me n u

    10G territory?

    "Now we are firmly into 10Gig territory"

    Say what? I don't know what's going on in Europe, but here in the states, 10G is in the labs only! 1G is commonplace now, but 10G is waiting in the wings.

    And I can't ever see a need for 100G, or anything faster than 10G, for the everyday user.

    1. M Gale

      You might not..

      ..however for a service provider, 10G is still quite slow. All it takes is 100 users on 100 megabit broadband all deciding to hit a torrent at once and suddenly you have more bandwidth required than a single 10G wire can provide. Sure you can bundle them together, but you could also bundle 100G wires together.

      Being able to actually use that fat pipe coming into your house without a boatload of contention, throttling and other nastiness? Yes please!

      (or alternatively, having the ISP subscribe ten times more users because they suddenly have 100G ethernet, thus completely negating any benefits.. no thanks!)
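      The contention sum behind the torrent example above is simple enough to write down (figures from the comment):

```python
# 100 subscribers on 100Mbit broadband against a single 10G back-haul link.

subscribers = 100
per_user_mbit = 100
backhaul_gbit = 10

peak_demand_gbit = subscribers * per_user_mbit / 1000  # 10.0 Gb/s
print(peak_demand_gbit >= backhaul_gbit)  # True -- the link is exactly full
# One hundred simultaneous heavy users fill the pipe; the 101st pushes
# it into contention, throttling, and the rest of the nastiness.
```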

    2. Jesper Frimann
      Headmaster

      RE:10G territory?

      We are slowly getting there.

      The standard machines we're deploying today are still 1Gbit-based, but we are going for 10Gbit for our large clients, at their demand btw.

      We still have some 100Mbit connections, but that is normally for appliance type of connections.

      // Jesper

  25. BinaryFu
    Stop

    Please think about the brain cells...

    "The second option is undeniably more future-proof but calls for a much bigger investment in the supporting fibre infrastructure."

    Can we please do away with this nonsensical catchphrase "future-proof"? Unless by "future" one means simple "Next Tuesday."

  26. Charlie Clark Silver badge

    Will no one think of the copper?

    Anything that encourages fibre optic over twisted pair is to be welcomed with open arms.

  27. JC 2
    Go

    (n)GbE On Client PCs

    As more and more client computing devices appear in our daily lives, it's very easy to see the benefit of central data storage rather than redundant client storage, each with its own backup and fault-tolerance burden.

    Gigs of memory for caching data are dirt cheap now, as are TB sized HDDs that exceed 100MB/s per drive. Yesterday, let alone today and tomorrow, 1 Gigabit ethernet has been a bottleneck on client PCs, IF you tried to utilize it.

    What will increasing ethernet bandwidths do for clients? They won't need to have non volatile storage, they can all boot off a network as thin clients and they can become even leaner, your "CPU" can be another kind of "central" processing unit, moved to the server for the most intensive calculations.

    Some might say "we're not there yet", which is true - because we didn't have the ethernet bandwidth yet. at. affordable. prices.

  28. wpk

    Yes

    Yes, we need affordable commodity 100Gb networking products. Their impact on the design of next-gen distributed systems architectures cannot be overstated. Faster, faster, quicker, quicker, cheaper, cheaper.

  29. Anonymous Coward
    Paris Hilton

    Speed it up...

    "Hurry it up, could ya sailor? We're not getting any younger."

    (Paris - Enough said.)

  30. John Angelico

    Strewth, you youngsters haven't seen anything

    10Mb Ethernet? 2Mb coax? 1200 baud?

    Some of us veterans remember 300-baud acoustic couplers and RS-232C comms. They were around when the Dead Sea wasn't even crook, and they went out so far back that Wikipedia may not have an entry.

    1. Steve X
      Coat

      Youngsters?

      Huh. I'll see your 300-baud acoustic coupler and raise you an ASR33 with tape reader (paper tape, that is). On George III.

      1. Mike Pellatt
        Facepalm

        You owe me now.

        Now you've really brought back bad memories. Even worse than the thinnet unreliability.

        OK, it was George IV, not George III.

        1972, first ever programming course (Algol, Kingston Poly, summer hols at end of Lower VIth)

        The ASR33 readers were fucked. Well, poorly maintained really. So if you missed a typo and therefore couldn't backspace and delete the error, and had to "copy" the tape up to the typo, the errors that the reader introduced led to getting a correct tape being a divergent process. Unless you found one of the 2 ASRs that were working OK-ish.

        What with the overnight turnaround, I think all I got working in a week was the prime numbers up to 99......

  31. Anonymous Coward
    FAIL

    @ 10G territory?

    "Say what? I don't know what's going on in Europe, but here in the states, 10G is in the labs only!"

    Wow - you're that far behind current technology in the US?

    I spend many of my working days setting up long haul DWDM systems that do multiple (up to 96) 10G channels on a single fibre pair - that's fairly common and unremarkable.

    Google "1830PSS" and learn - up to 88 x 100G channels per fibre pair at 50GHz channel spacing.
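
    For scale, the aggregate numbers in that claim work out as follows (a quick sketch using the figures quoted above, which are taken from the comment, not verified against the product datasheet):

```python
# Aggregate capacity of the DWDM system described above,
# using the figures quoted in the comment (assumed, not verified).

channels = 88           # wavelengths per fibre pair
per_channel_gbps = 100  # 100G per wavelength
spacing_ghz = 50        # channel grid spacing

print(f"Aggregate: {channels * per_channel_gbps / 1000:.1f} Tb/s per fibre pair")
print(f"Occupied spectrum: {channels * spacing_ghz / 1000:.1f} THz")
```

    That is nearly 9 Tb/s on a single fibre pair, which puts a single 100GigE port in perspective.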

  32. alwarming
    Angel

    Blessing in disguise.

    At the cusp, when data centres start hitting the 90% bandwidth saturation mark on a frequent basis, customers start complaining to vendors... and that's when vendors work on various software and algorithmic solutions to improve utilization of the pipe. Some of these solutions are temporary in nature, but overall they improve the quality of the software stack.

  33. Anonymous Coward
    WTF?

    Very enlightening.

    Some very knowledgeable people in here, and also some woefully badly informed people relying on rumour, misunderstood half-truths and plain old bullshit. If we're all working in IT, no wonder IT often has a bad rep with other workplace staff.

  34. E 2

    Yes

    Yes, I want 100Gb Ethernet. I want to wire my house with it.

  35. Anonymous Coward
    Facepalm

    You are not getting it.

    "having to get all your software from the 'cloud' is going to be tedious with anything slower."

    Doh - well, remember to convince your ISP to upgrade your Internet connection then - most people are on about 5Mbps now, nowhere near 1000Mbps+.

    The iCloud is used to 'sync' with - you do not 'stream' the data from it when you want to listen to some music.

  36. Aitor 1
    Pirate

    100Mb? ok.

    100Gb?

    It really depends on the type of server I am using, workload, etc. I am usually quite happy with a couple of 1Gb connections for each node.

    If you are delivering VoD, on the other hand, you will need 40Gb connectivity per server.

    As for clients, most people won't notice the difference between 100Mb and 100Gb.. for them it is OK, and even for me it's almost the same, as long as the network is correctly set up and I don't have too much "noise".

    A 10Gb Ethernet connection has more bandwidth than an HDD in real life, and if you are using a CLIENT that way, you have a problem: that client is a SERVER, or you like to make network backups of clients. If you do that, why not do it at night?

    In 10 hours, at 80% cap., a 100Mbit port can transmit some 360GB of data, more than enough if you are doing differential backups (as you should), and, for what matters, you should be using a central document repository..
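
    That overnight-window arithmetic can be sketched as follows (assuming a 100 Mbit/s port at 80% sustained utilisation over a 10-hour window, decimal units):

```python
# How much data fits through a port during an overnight backup window?
# Assumptions: 100 Mbit/s port, 80% sustained utilisation, 10-hour window,
# decimal units (1 GB = 1e9 bytes).

port_mbit_s = 100
utilisation = 0.8
window_hours = 10

gigabytes = port_mbit_s * utilisation * window_hours * 3600 / 8 / 1000
print(f"~{gigabytes:.0f} GB per window")
```

    Substituting a gigabit port for port_mbit_s turns the same window into roughly 3.6TB, which is why "do it at night" scales surprisingly far.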

    So no, it is not that I don't want it (of course I do), it is just that it is not really needed in an office.

    Note: at home I use 1Gb and would like to have 10Gb, but only because I have tons of hi-def movies.. and this should not be the norm in an office!!

    Note 2: I do remember 10Mb. The problem wasn't those 10Mb. The problem was (is) that Ethernet is no token ring: you have collisions unless you use structured Ethernet, and for that you need switches (not hubs), and of course get rid of the pesky 10Base2 cables.. and structured 10Mb Ethernet was perfectly OK for client/server software.. more than enough.

    The problems we have now with the network derive from web pages. We have to download the very same content time and time again.. for having less functionality than "heavy" clients. That, and multimedia content.

    Ho!! pass me the rum and 100Gb Ethernet!

This topic is closed for new posts.