Bring back error correction, say Danish 'net boffins

Researchers from Denmark's Aalborg University are claiming that the Internet could move traffic five times faster, or more, than it does today. The downside? Doing away with how TCP/IP currently functions. In this announcement, Aalborg professor Frank Fitzek provides a (somewhat sketchy) outline of what he calls “Random Linear …

  1. Christian Berger

    Well under some circumstances

    Yes, on high-latency connections this could bring a considerable improvement. However, it would require a new protocol, something like a TCPwR (Transmission Control Protocol with Redundancy).

    There are two problems with this:

    1. It won't go through unmodified NAT.

    2. It can be hard to implement.

    The first problem is particularly bad with the "carrier grade NAT" you commonly get on high-latency mobile connections, or mid-latency consumer connections.

    The second one is evident if you look at real-life implementations of TCP/IP stacks. There are still ones, particularly in embedded systems, with severe problems. For example, the Nucleus one just tends to drop connections without telling the application about it. Adding more complexity will cause lots of problems.

    Maybe one sensible way of doing it would be to extend TCP in some way so connections could easily fall back.

  2. Chris Miller

    Endless fun for researchers, but chances of being implemented in the real world roughly nil. If you doubt this, IPv6 offers significant performance enhancements over IPv4 (big packets, better routing, etc.) and many other benefits - how long has it taken us to implement it on 1% of Internet connections (answers in decades, please)?

    1. This post has been deleted by its author

    2. channel extended

      Try again.

      Perhaps this could be considered for IP8.

    3. Trevor_Pott Gold badge

      IPv6 isn't seeing rapid adoption mostly because of the astounding arrogance of the people who created the protocol, which resulted in our getting a protocol that requires tearing up the entire internet to implement it. You need to buy at least one new everything, and to implement it securely you often need to replace "one new everything" with "several". The cost of the transition is enormous for end users and SMBs, and the ongoing costs are higher than IPv4.

      And all because whiny baby developers were so sad about having to add a few extra lines of code to deal with NAT that they turned purging it from IPv6 into a religion. What the world wanted was IPv4 with a larger address space and a few under-the-hood enhancements. What we got was a clusterfuck designed to restrict how we can run our own networks and strip away any hope of privacy from the average Joe by making damned sure that an IP in fact DOES map to a person.

      Grand.

      If a new protocol showed up with concrete benefits that didn't require throwing out the baby with the bathwater it would be taken up in short order. The problem underlying IPv6 is that, ultimately, we don't want to give up the good parts of IPv4 to get at the good parts of IPv6. We're being frogmarched towards it with a gun at our heads, but you can't expect us to be all that happy about it.

      1. Yes Me Silver badge

        @Trevor_Pott

        That's pretty wrong about several aspects of v6, but in particular:

        "strip away any hope of privacy from the average job by making damned sure that an IP in fact DOES map to a person."

        Not so. Firstly, the worst case is that it maps to a MAC address, but even that is going away with the widespread adoption of pseudo-random interface identifiers that change at a reasonable frequency. Secondly, most privacy breaches happen at application level anyway (that's this metadata stuff that Mr Snowden brought to our attention). The IP version is a detail.
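
        (As an aside, a minimal sketch of what those pseudo-random interface identifiers amount to, in the spirit of RFC 4941 temporary addresses: keep the routed /64 prefix, but fill the low 64 bits with random data instead of anything derived from the MAC. The function and prefix below are illustrative only, not any particular stack's behaviour.)

        ```python
        # Hedged sketch of an RFC 4941-style temporary IPv6 address: the /64
        # prefix stays, but the interface identifier is random rather than
        # MAC-derived. Real stacks also manage lifetimes and avoid reserved IIDs.
        import ipaddress
        import secrets

        def temporary_address(prefix: str) -> ipaddress.IPv6Address:
            net = ipaddress.IPv6Network(prefix)
            iid = secrets.randbits(64)      # random 64-bit interface identifier
            return net[iid]                 # prefix + random IID

        print(temporary_address("2001:db8:1:2::/64"))   # documentation prefix
        ```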

        As for the other comments: yeah, we could have done a bit less engineering, but once you change the address length, you're incompatible anyway and most of the resulting transition problems would be just the same. Really.

        1. Trevor_Pott Gold badge

          Re: @Trevor_Pott

          Actually, yes, one address does map to a person since there's nothing like NAT to anonymise users. There aren't a hell of a lot of things NAT's good for, but helping to hide exactly which individual behind the edge router is responsible for posting that dissident comment about the government is one of them.

          I never said IPv4 and NAT guaranteed privacy, just that they offered one layer that IPv6 doesn't.

      2. Tom Samplonius

        "IPv6 ... requires tearing up the entire internet to implement it."

        Provably wrong, since none of the backbone providers have done this. I'm turning up multiple backbone connections for a regional carrier now, all with simultaneous IPv4 + IPv6, and all backbone carriers are basically doing the same thing.

        "You need to buy at least one new everything and to implement it securely you need to often replace "one new everything" with "several"."

        Implement securely? What are you even talking about? Even if your 1999 firewall doesn't support IPv6, your 2014 firewall supports IPv4 and IPv6 simultaneously.

        1. Trevor_Pott Gold badge

          @Tom, the backbone providers did have to tear up all their stuff to be IPv6-ready. But for the whole internet to be ready, everyone has to. And they ask - quite rightly - "why should I?" It doesn't benefit them to do so.

          "Implement securely? What are you even talking about? Even if your 1999 firewall doesn't support IPv6, your 2014 firewall supports IPv4 and IPv6 simultaneously."

          Damned few consumer-grade routers do. Oh, they might pass IPv6 packets, but now you've moved defense of the home network from a single device (the home router) to every device on that network needing to be defended. Unless you have a really good (read: expensive) router/firewall and someone who knows how to use it.

          SMBs and the commercial midmarket are in worse shape: they have more diverse requirements than "open up a port so I can RDP into my home machine" or "push the VPN button so that I can VPN to work." Their costs are proportionately higher, as is the complexity they have to cope with, trying to now defend a network where every single node has a globally addressable IP address.

          I do, however, find it hilarious that you quote the bit where I said "in order to make IPv6 work you have to tear up the internet and replace it" and then go on to say both "that's a lie" and "it works just fine if you buy all new stuff". Great compartmentalization of thought there. Top class.

      3. Trygve Henriksen

        Trevor for the win!

        Yeah, I can't understand why they threw out NAT, either.

        Maybe they thought the only reason people used it was to allow more computers to use the same IP?

        (Which is kind of a neat thing, really)

        But the most important reason is to hide your computers.

        I want as many layers of security between my computers and the big, bad internet as is possible.

        Why couldn't they just have added a couple of octets and said "job's done, now for a pint at the pub"?

        1. Trevor_Pott Gold badge

          Re: Trevor for the win!

          NAT isn't security. But it is obscurity. And for many of us, that's very important.

          1. Trygve Henriksen

            Re: Trevor for the win!

            Sure it's security.

            A NATing firewall won't forward anything that it doesn't have a translation entry for.

            And only machines on the inside of the NATing firewall can set up those entries (done automatically when a machine accesses something on the outside).

            Which means a 'reply' packet won't get through to a machine on the inside unless that machine has actually sent an initial packet outward first.

            It's a pretty effective filtering mechanism, really.

            (Unless the attacker can sniff outgoing packets, of course)
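
            (A minimal sketch of that filtering behaviour, with a made-up table layout rather than any real firewall's: inbound traffic is translated only if an inside host already created the entry by going outbound first; everything else is dropped.)

            ```python
            # Hedged sketch of stateful NAT filtering: forward inbound packets only
            # if an inside host already created a matching entry by sending out first.
            table = {}   # public_port -> (inside address, remote address)

            def outbound(inside, remote, public_port):
                table[public_port] = (inside, remote)    # entry created from inside

            def inbound(public_port, remote):
                entry = table.get(public_port)
                if entry and entry[1] == remote:
                    return entry[0]                      # translate back to inside host
                return None                              # unsolicited 'reply': dropped

            outbound(("10.0.0.5", 4321), ("198.51.100.7", 80), 20001)
            assert inbound(20001, ("198.51.100.7", 80)) == ("10.0.0.5", 4321)
            assert inbound(20002, ("203.0.113.9", 80)) is None   # no entry, so dropped
            ```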

    4. Joseba4242

      Apologies for the late reply but I couldn't let this stand

      "IPv6 offers significant performance enhancements over IPv4 (big packets, better routing, etc.)"

      IPv6 packets aren't bigger, nor is its routing better.

      (Replying to another comment:) backbone providers did not make any equipment investment to enable IPv6. Backbone routers have been IPv6-capable since around 2000.

  3. John Smith 19 Gold badge
    Holmes

    One of those "We can build a new internet that's X times faster if we scrap the old one first"

    proposals.

    But that 1st step is kind of a biggy.

    1. Trevor_Pott Gold badge

      Re: One of those "We can build a new internet that's X times faster if we scrap the old one first"

      Don't see why. The existing internet's pretty shit. Full of monied interests and governments trying to remove civil liberties. Let's get a proper decentralized meshnet going with a brand new protocol and ditch the existing Internet, eh?

      1. Anonymous Coward
        Anonymous Coward

        Re: One of those "We can build a new internet that's X times faster if we scrap the old one first"

        Like Freenet? Ever noticed how slowly and inefficiently it operates? Unfortunately, efficiency and anonymity conflict: focus on one and the other necessarily suffers. You have to decide where on the scale you want to be, and since it has to be uniform throughout, and different parties have different priorities, there will never be a consensus.

  4. Destroy All Monsters Silver badge
    Gimp

    In before "I have thought 10 seconds about it and it won't work because...."

    Oh wait.

  5. spegru

    FEC

    I like the sound of this - in principle. Forward Error Correction is already in use in core networks around the world - but not at Layer 2/3, at Layer 1.

    Whether it could be implemented at the higher layers though, with implications for all sorts of equipment & protocols, does sound like a big question.

    No reason NAT should be a problem though, if it's inside the carrier networks

    1. BlueGreen

      Re: FEC

      I am *really* not a network guy, so this is going to be a most n00bulous question, but I have long wondered if moving stuff into the application as a library, so using UDP + app level error correction + retransmit at the app level for dropped packets, might work better than TCP in some cases.

      Given wired connections are pretty reliable these days (I think) would that work? I guess the answer's no or it would be widespread already but any thoughts welcome.
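
      (For illustration, a toy version of that idea - UDP with acknowledgement and retransmission handled by the application. The address, retry counts and the bare "ACK" reply are all made up; a real design would also need sequencing and congestion control, which is much of what TCP already provides.)

      ```python
      # Hedged sketch of app-level reliability over UDP: a stop-and-wait sender
      # that resends the datagram until the receiver acknowledges it or we give up.
      import socket

      def send_reliable(data: bytes, addr=("127.0.0.1", 9999), retries=5, timeout=0.5):
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.settimeout(timeout)
          for _ in range(retries):
              sock.sendto(data, addr)
              try:
                  ack, _ = sock.recvfrom(16)
                  if ack == b"ACK":
                      return True          # receiver confirmed delivery
              except socket.timeout:
                  continue                 # datagram or ACK lost: retransmit
          return False                     # give up after `retries` attempts
      ```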

      1. justincormack

        Re: FEC

        Lots of people have tried it and it rarely does. TCP is nicely tuned from years of experience.

        However you could use this new process without changing TCP - just change the stack not to ask for retransmissions if it can reconstruct the lost data.

      2. Pet Peeve

        Re: FEC

        Doubtful. UDP has plenty of use cases, but error-free connections aren't one of them. UDP is specifically for situations where packet dropping is desirable, such as in real time speech, where if you've dropped a packet, you have no desire to tie up the stream waiting for a replacement - you just fudge the sound and move on.

        TCP is really a masterpiece - packet dropping isn't an undesirable event; in fact it happens by design so that the transmission speed adapts automatically to the maximum sustainable throughput. If you actually wanted error correction, you'd probably want to do it at the transport layer anyway - a packet garbled in transmission would most likely have a bad TCP wrapper and wouldn't be presented to the stack to get corrected.
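
        (A toy sketch of that adaptive behaviour - not any real TCP variant: the sender grows its window steadily while segments get through, and halves it when a drop signals the path is full, so the rate hunts around the sustainable maximum. The capacity figure is made up.)

        ```python
        # Hedged AIMD sketch: additive increase until the (assumed) path capacity
        # is exceeded and a segment is dropped, then multiplicative decrease.
        capacity = 32.0   # assumed path capacity, in segments per round trip
        cwnd = 1.0        # congestion window
        for _ in range(40):
            if cwnd > capacity:   # queue overflows, a segment is dropped
                cwnd /= 2         # multiplicative decrease
            else:
                cwnd += 1         # additive increase while everything arrives
        print(f"window oscillating around {cwnd:.1f} of {capacity:.0f} segments")
        ```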

        You get these "the net would be better if" things every once in a while, and most of the time, they are about as useful to read as the umpteenth explanation of physics that proves Einstein and/or quantum mechanics wrong.

        Mostly, I just want freaking IPV6.

    2. Christian Berger

      Re: FEC

      Well, the idea is that you use FEC spread across packets so you don't have to re-transmit a lost packet. So you spread the information of 3 packets into 4 and can live with one in 4 being lost.

      Lost packets can still happen on wireless connections even with strong FEC.
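
      (The simplest possible version of that 3-into-4 spreading is a plain XOR parity packet - the article's Random Linear Network Coding is more general, but the recovery idea is the same. The packet contents below are made up and must be equal length.)

      ```python
      # Hedged sketch: XOR three equal-length packets into a fourth parity packet,
      # so any single loss in the group of four can be rebuilt without a resend.
      def make_parity(p1, p2, p3):
          return bytes(a ^ b ^ c for a, b, c in zip(p1, p2, p3))

      def recover_missing(parity, survivors):
          out = parity
          for pkt in survivors:                 # XOR out the packets that arrived
              out = bytes(a ^ b for a, b in zip(out, pkt))
          return out                            # what's left is the lost packet

      p1, p2, p3 = b"AAAA", b"BBBB", b"CCCC"
      parity = make_parity(p1, p2, p3)
      assert recover_missing(parity, [p1, p3]) == p2   # packet 2 lost, rebuilt locally
      ```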

      1. oldcoder

        Re: FEC

        Thus raising the actual traffic by 25%...

        and thus 25% more congestion...

        Which is more than what the current congestion is.

        1. Anonymous Coward
          Anonymous Coward

          Re: FEC

          But the article is claiming that a good chunk of the congestion is all the "Come again?" packets going back AND the retries those packets generate. So say 1 in 3 packets garbles. You have to send a "Come again?" packet and wait for the reply, turning three packets into FIVE. And the more congested it gets, the more likely you get this scenario. Even worse, what if the "Come again?" packet itself gets messed up, meaning either another one is sent or the whole connection falls apart?
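
          (A back-of-envelope comparison of the two overheads, under my own assumptions rather than the article's: each lost data packet costs one "come again?" plus one resend, versus a fixed one-parity-in-four FEC overhead that repairs a single loss per group.)

          ```python
          # Hedged comparison: expected packets on the wire per delivered data packet.
          def retransmit_cost(p):      # loss rate p: original + (NAK + resend) per loss
              return 1 + 2 * p

          def fec_cost():              # 3 data packets always carried in 4
              return 4 / 3

          for p in (0.01, 0.1, 1 / 3):
              print(f"loss {p:.0%}: retransmit ~{retransmit_cost(p):.2f}, FEC {fec_cost():.2f}")
          ```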

  6. Mage Silver badge

    DOCSIS etc

    Just implement FEC on the physical links and then TCP/IP won't resend as it won't get errors.

    Many fixed wireless, coax and satellite links do this already.

    1. Anonymous Coward
      Anonymous Coward

      Re: DOCSIS etc

      " implement FEC on the physical links and then TCP/IP won't resend as it won't get errors. Many fixed wireless, coax and satellite links do this already."

      As does your DSL modem/router when operating in interleaved mode.

      And as any l33t gamer knows, there's a tradeoff (any decent network or system designer knows about tradeoffs between latency and throughput too).

      The tradeoff here would be that adding significant quantities of FEC data and then spreading the FEC data across multiple packets/frames to increase robustness against occasional data loss increases the latency. Not necessarily by a disastrous amount, but some still don't like it.

      [It can only increase protection against *occasional* data loss; if you had to protect against significant continuous data loss you'd end up almost doubling the data volume because of the extra FEC data]
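
      (A rough illustration of that tradeoff with made-up numbers: the deeper the interleaving, the longer the error burst you can ride out, and the longer every frame sits in the interleaver before it can be delivered.)

      ```python
      # Hedged sketch: interleaving depth vs. added one-way latency. Both the frame
      # time and the depths are illustrative, not any particular DSL profile.
      frame_ms = 4.0                    # assumed time per frame on the link
      for depth in (1, 8, 16, 64):      # frames each codeword is spread across
          print(f"depth {depth:3d}: rides out ~{depth}-frame bursts, "
                f"adds ~{depth * frame_ms:.0f} ms latency")
      ```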

  7. hammarbtyp

    (T)rouble, (C)onsistent (P)ain protocol

    When we are asked to design a new industrial protocols we have a simple flowchart.

    Should I use TCP->No->But What if..->O.M.G No

    TCP's benefits decrease as networks get better. As the internet infrastructure has improved, the benefits of TCP have increasingly been outweighed by its disadvantages. For example, in a client-server situation there is no simple way to implement multicast, so as you fan out, the load on your servers becomes greater.

    Another issue with TCP is that the standard has grown to a point where it is very difficult to implement reliably. I would guess 80% of the network issues on our projects are caused by TCP, especially after a connection glitch.

    I also get weary of people telling me that TCP is safer because it guarantees packet delivery, as if it were some sort of magic that quantum-tunnels data over networks. It doesn't, and often its attempts to do so cause more issues than they solve. Generally you are often better off implementing your own bespoke protocol on top of UDP.

    1. Pet Peeve

      Re: (T)rouble, (C)onsistent (P)ain protocol

      What in hades are you talking about? None of that makes any sense at all.

      1. hammarbtyp

        Re: (T)rouble, (C)onsistent (P)ain protocol

        Since my comment has been so thoroughly abused, maybe I need to expand my thoughts. Possibly those who downvoted me have never had to debug a TCP stack at 2 in the morning, or maybe I have just lost it and this is the first sign of oncoming senility. Read on and judge for yourself!

        My main interest is industrial networks. Now this is a bit of a tangent to the main article, but I believe there are parallels to be drawn. One of the more common protocols used is one called Modbus/TCP, which sends real-time data via TCP. In the past we have had many problems with this protocol, generally because of TCP's queuing behaviour. On a network failure, the protocol backs off until reconnection, and then you get a whole mass of data at once. The problem is that the data is real-time, so by the time it is delivered it is already out of date and basically useless. So a lot of effort and resources are spent transmitting data which is no longer required.

        IP already has a solution for that, and that is UDP (and no, it does not stand for Unreliable Data Protocol). If we used UDP, then on connection failure the old data would simply be discarded. So why did the designers of Modbus choose TCP? The truth is TCP has become a blunt instrument for all network problems (making the protocol remarkably complex). If you ask why, people will tell you that it guarantees packet delivery, ignoring the knock-on effects that the reliability mechanism causes.

        TCP is a great method of transferring data in some situations, for example when streaming a film or music, where packet delivery order is critical. But the benefits are far smaller when you are sending data of limited length, and when the data has a limited lifespan (such as AJAX data) it makes no sense at all.

        The original article suggests that TCP/IP is a significant bottleneck in internet efficiency. I agree, though I would add that this is partly due to TCP being misused to send data which does not require its reliability and congestion-control machinery. If we move to the Internet of Things then we will have a large number of devices sending small packets of real-time data. In that situation TCP is a bad choice for transporting the data.

        I will go and take my medication now :)
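
        (A minimal sketch of the UDP alternative described above - timestamped readings as datagrams, with the receiver simply discarding anything older than what it already has instead of replaying a backlog. The field layout, address and port are made up; this is not Modbus.)

        ```python
        # Hedged sketch: real-time readings over UDP, where stale samples are dropped
        # on arrival rather than queued up and delivered late as TCP would do.
        import json
        import socket
        import time

        RECEIVER = ("127.0.0.1", 15020)   # made-up address and port

        def send_reading(value: float):
            msg = json.dumps({"ts": time.time(), "value": value}).encode()
            socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, RECEIVER)

        def handle(datagram: bytes, last_ts: float) -> float:
            reading = json.loads(datagram)
            if reading["ts"] <= last_ts:
                return last_ts                    # late or stale sample: discard it
            print("current value:", reading["value"])
            return reading["ts"]                  # newest timestamp seen so far
        ```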

  8. Warm Braw

    Layers, layers, layers

    TCP(/IP) didn't "do away" with error correction, it makes certain assumptions about the reliability of the lower layers.

    It is a fundamental assumption that the data link layer does not corrupt data. For point-to-point links, protocols such as SDLC, HDLC, DDCMP, etc. achieve this guarantee using CRCs and retransmission (as does Ethernet, though it just drops dud packets), though it would be entirely transparent to TCP if a Reed-Solomon code were used instead, enabling more data to be corrected without retransmission. There will always be the case where data is temporarily uncorrectable, though links are much more reliable these days than they were.

    TCP employs retransmission not because of lower-layer errors, but to deal with congestion which results in entire packets being dropped when they are arriving faster than they can be transmitted. You can think of potentially better feedback mechanisms to control congestion, but if you're dealing with real-time streaming there's no difference between "late" and "lost".

    And the infamous TCP/IP header checksums are there to deal with memory corruption in the routers themselves, not corruption in transit.

    Anyway, as far as I can tell, despite Aalborg's dreadful write-up of their own technology, this isn't about error-correction at all, it's about optimising throughput in networks to avoid congestion...

    See: http://en.wikipedia.org/wiki/Linear_network_coding
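
    (A minimal sketch of that link-layer guarantee in the Ethernet style - append a check value and silently drop any frame that fails it on receipt, so the upper layers never see corrupted data. CRC-32 here stands in for whatever code the real link uses; a Reed-Solomon code would let the receiver repair some of those frames instead of dropping them.)

    ```python
    # Hedged sketch: frame = payload + CRC-32; receivers drop frames whose CRC
    # doesn't match, so corruption becomes loss as far as TCP is concerned.
    import struct
    import zlib

    def frame(payload: bytes) -> bytes:
        return payload + struct.pack("!I", zlib.crc32(payload))

    def receive(wire: bytes):
        payload, (crc,) = wire[:-4], struct.unpack("!I", wire[-4:])
        if zlib.crc32(payload) != crc:
            return None                        # dud frame: silently dropped
        return payload

    good = frame(b"hello")
    assert receive(good) == b"hello"
    bad = b"hellp" + good[5:]                  # one payload byte flipped in transit
    assert receive(bad) is None
    ```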

  9. Tom 38

    PAR2 sets for the wire? Cool beans.

  10. Terry Barnes

    My bold prediction is that we will see some return to circuit switching to run alongside packet switching in our public networks. That would most likely take the form of an overlay network and traffic would be split out at edge routers. For some types of traffic, for some usage and routing scenarios, packet switching is awfully inefficient and difficult. At some point it becomes easier to just circuit switch that traffic than it does to try and engineer an illusion of circuit switching over a packet switched network.

    1. Down not across

      "My bold prediction is that we will see some return to circuit switching to run alongside packet switching in our public networks. That would most likely take the form of an overlay network and traffic would be split out at edge routers. For some types of traffic, for some usage and routing scenarios, packet switching is awfully inefficient and difficult. At some point it becomes easier to just circuit switch that traffic than it does to try and engineer an illusion of circuit switching over a packet switched network."

      You mean like MPLS that many (if not most) carriers and ISPs use these days?

      1. Terry Barnes

        No. MPLS is a method of managing traffic priority across a managed network. It works well where you can make a reasonable prediction about the expected volumes of different traffic types in a private or virtually private network and dimension network elements accordingly. It couldn't work in a public network because users have a strong incentive to game the system by marking all packets as being of the highest priority. You end up back at square one.

        1. Jellied Eel Silver badge

          Beware of heresies

          What's needed is a network that's designed to deal with a mix of voice, video, other real-time apps and some flavors of data. It should be able to offer 'permanent' paths for defined endpoints like Ethernet pseudo-wires or VPNs, and temporary paths for things like bulk data transfers. It should offer some form of congestion control and management and be dynamically reconfigurable. To some, this will sound like SDN. To others, perhaps older and more cynical, ATM. The Internet. Re-inventing the wheel since at least 1876..

        2. Anonymous Coward
          Anonymous Coward

          MPLS

          No, MPLS is label switching. You are describing something more high-level, perhaps MPLS-TE?

  11. Panicnow
    Thumb Down

    Optimise the network, not fiddle with TCP

    It is either the routing protocols, or weak links in the network.

    (Trivia: BTW the P in TCP stands for Program, not Protocol - check the RFC!)

  12. Suricou Raven

    Bah.

    Content-addressable networking, please! That's what we really need. A distributed shared storage system to offload all those bulky images and media files to, with conventional packet switching as a fallback and for low-latency things.

    1. Roo
      Windows

      Re: Bah.

      Splitter !

  13. Scuby

    Silver Peak

    Silver Peak actually have a virtual appliance that does this kind of thing for replication traffic.

  14. Harry Kiri

    And have you met Mr XTP?

    TCP is built on a bunch of assumptions that are not always relevant, but for the most part are. It assumes that packet drops are due to congestion, so it throttles back to avoid pouring petrol on a congestion fire. If you don't have a throttling mechanism it can go faster. But you can also trash the network. So you need a throttling mechanism. And it probably wouldn't end up much different.

    If you want, you could use XTP and set up your own decide-when-retransmission-is-necessary policy from your own link statistics. Then you can decide whether to include FEC on the data and undertake selective retransmission.

    FEC is a waste of time, bw and cost if you have a great SNR.

    FEC can be a waste of time and doesn't fix things if the SNR is higher than expected.

    1. bazza Silver badge

      Re: And have you met Mr XTP?

      @Harry Kiri,

      "FEC is a waste of time, bw and cost if you have a great SNR.

      FEC can be a waste of time and doesn't fix things if the SNR is higher than expected."

      I suggest you learn something about communications theory. You can never, ever, eliminate noise-generated bit errors in a system by increasing SNR. And that clever chap Mandelbrot showed us that it doesn't really make sense to talk about an average bit error rate either.

      No matter how good your SNR is you have to have a way of dealing with error. Parity checking with retransmission is one way, FEC is another, etc. Even then you're only improving the chances of correct operation, not guaranteeing it.
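
      (To put a number on it: for an idealised BPSK link in white noise, the bit error rate is 0.5·erfc(√(Eb/N0)). Better SNR pushes it down fast, but it never reaches zero - which is the point about always needing some way of dealing with errors. The Eb/N0 values are arbitrary examples.)

      ```python
      # Hedged illustration: BER for ideal BPSK in AWGN falls with SNR but never hits zero.
      from math import erfc, sqrt

      for ebn0_db in (0, 6, 10, 14):
          ebn0 = 10 ** (ebn0_db / 10)            # dB to linear
          ber = 0.5 * erfc(sqrt(ebn0))
          print(f"Eb/N0 = {ebn0_db:2d} dB -> BER ~ {ber:.2e}")
      ```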

  15. John Smith 19 Gold badge
    Unhappy

    Oh, and BTW it's patented.

    So fork over the cash first.

  16. Dave Bell

    A Prediction

    You seem to be covering everything.

    My prediction: you'll see this math in a proprietary video streaming service using UDP packets.

    (I don't think it is tech yet. One of those little boxes plugged into your TV, something new, rather than replacing a huge installed base, that's when it will be tech.)

    1. Anonymous Coward
      Anonymous Coward

      Re: A Prediction

      Perhaps on a nVidia Tegra K1 for the first cut?

  17. Tom Samplonius

    Already patented...

    Forward Error Correction (FEC) on Ethernet is already patented:

    http://www.google.com/patents/US7343540

    Implementing FEC in addition to TCP retransmits makes the most sense. TCP retransmits are the hammer approach: if packets are dropped, data transfer is really slow, but it won't fail. Adding FEC at the lower layer just reduces the number of errors appearing at Layer 3. And Layer 3 errors will mostly be gross errors (e.g. someone unplugged the cable for a few seconds), which no FEC algorithm will be able to repair anyway.

  18. Justthefacts Silver badge

    Oh noes, please don't do that

    Packet loss on an IP network is NOT a bug, it is a feature.

    TCP uses it as feedback to prevent network congestion and failure.

    Wide deployment of THIS would be a network catastrophe.

    Look at it this way: packet dropping can still be high enough to overwhelm any FEC, so you'd still want to put TCP on top. Now, what is going to stop this connection winding up and saturating the network, given that it is seeing zero packet loss at the TCP layer?

    A few normal packet drops (like 1%) won't stop it. The routers will still have to drop packets, because their queues will fill; they will drop arbitrarily across millions of IP flows, but this is the only one that won't back off.

    It will fill any input link it is given; all other input links will be full of packets from other flows that will back off at the server end. Fair queuing policy at the router can't save you: the data is network-coded, so it flows across multiple paths simultaneously - flooding root and branch.

    Each node running it gets to use its 10 Mbps / 100 Mbps / T1 connection effectively uncontested, flowing end to end across the internet. Everyone else running POTS TCP gets squeezed into what's left.

    Cue 6 months of chaos, and poor Internet performance for everyone but those in the know. Then, everyone transitions in the arms race. What happens next?

    It still floods packets, up to the point where the router packet loss increases from the 1% sufficient to back off TCP, to 10% or 30% or whatever finally overwhelms this network FEC. Finally, TCP layered over the top backs off and contends/arbitrates as intended. Everyone is back to square one, except that the IP routers of the world spend most of their time dropping packets to drill through the FEC.

    Classic tragedy of the commons
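
    (A toy model of that squeeze, with made-up numbers: two flows share a link; on every overflow the TCP-like flow halves its rate, while the FEC-masked flow never sees a reason to back off and creeps towards the whole link.)

    ```python
    # Hedged sketch of the unfairness described above: one flow backs off on
    # drops, the other repairs its losses with FEC and keeps pushing.
    link = 100.0                # link capacity, arbitrary units
    tcp, greedy = 50.0, 50.0    # both flows start with half the link
    for _ in range(200):
        if tcp + greedy > link:    # router queue overflows: both see drops
            tcp /= 2               # the TCP flow backs off
            # the FEC-masked flow reconstructs its losses and carries on
        else:
            tcp += 1               # both probe for more bandwidth
            greedy += 1
    print(f"after a while: tcp ~{tcp:.0f}, FEC-masked flow ~{greedy:.0f} of {link:.0f}")
    ```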

    1. eldakka

      Re: Oh noes, please don't do that

      warning: IANANE (I Am Not A Network Engineer)

      But do we have to rely on the loss of TCP packets to tell us this at the end-to-end level? Couldn't a router send back to everyone who's swamping it a 'back-off' message from the router? I thought there was already provision for this, but it's rarely, if ever, used?

      In the early days of the internet, when router processor capability was low and bridging was more common than routing (due to not being able to produce sufficiently intelligent silicon for routers at appropriate cost points), TCP retransmission may have made sense as a congestion-management mechanism: the receiver's TCP layer keeps signalling lost packets, implicitly telling the sender to back off.

      However, these days routers have massive processing capability compared to their early predecessors - either what was not so long ago server-grade CPUs or efficient, lightning-fast ASICs - so can't the routers tell senders to back-the-FEC-off, rather than relying on the receiver losing packets and telling the sender? In this case (where routers actually tell senders to shut-the-FEC-up) FEC may make more sense.

      I could see a case for adaptive choice too. At low error rates TCP might make sense, as retransmission packets are rare and the FEC overhead (more data to carry the ECC) might perform worse. At slightly higher error rates, with routers smart enough to tell a sender swamping them to back off, FEC may make more sense: a little more data for the ECC, but less than the extra data TCP retransmissions would cause.

      If there are high error rates, then maybe another change? Maybe neither TCP nor FEC?

      1. Anonymous Coward
        Anonymous Coward

        Re: Oh noes, please don't do that

        HaHa...

        He said "shut the FeC up!!!"

        Sorry. That cracked me up. Thanks eldakka!

        And the spelling reminds me of Father Ted.

      2. James 100

        Re: Oh noes, please don't do that

        "But do we have to rely on the loss of TCP packets to tell us this at the end-to-end level? Couldn't a router send back to everyone who's swamping it a 'back-off' message from the router? I thought there was already provision for this, but it's rarely, if ever, used?"

        There is - called ECN (Explicit Congestion Notification, RFC 3168). When supported, instead of dropping a packet, the router will set a flag in the packet which says "this packet would have been dropped because of congestion" - so TCP implementations supporting ECN know to slow down as if that packet had been lost, but without the need to re-send it.

        IMO, there's enough low-level FEC implemented already, and ECN gives these benefits without the need to replace a fundamental building block.
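
        (For reference, the mechanism lives in the two ECN bits of the IP TOS/traffic-class byte. A sketch of the codepoints from RFC 3168 and the router-side decision - the helper function is mine, purely to show the logic.)

        ```python
        # Hedged sketch of RFC 3168 ECN codepoints and a router's mark-or-drop choice.
        NOT_ECT = 0b00   # sender doesn't support ECN
        ECT_1   = 0b01   # ECN-capable transport
        ECT_0   = 0b10   # ECN-capable transport
        CE      = 0b11   # "congestion experienced": would otherwise have been dropped

        def mark_or_drop(tos: int, congested: bool):
            ecn = tos & 0b11
            if not congested:
                return tos, False                      # forward unchanged
            if ecn in (ECT_0, ECT_1):
                return (tos & ~0b11) | CE, False       # mark instead of dropping
            return tos, True                           # not ECN-capable: drop it
        ```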
