Can a new TCP scheme give wireless a 16-fold boost?

A group of MIT researchers is touting a change to TCP – the transmission control protocol – that it says can yield sixteenfold or better improvements in performance on lossy networks. The claim, made by Muriel Médard’s Network Coding and Reliable Communications group at MIT, has been published in Technology Review. In this …

COMMENTS

This topic is closed for new posts.
  1. Henry Wertz 1 Gold badge

    Competing users....

    The number of competing users is key. If your throughput is dropping to like 1Mbps, and you are the only user, then you have all this other bandwidth sitting there unused. If there's loads of other users on the wifi, then this technique may just give you a higher share of the wifi compared to everyone else.

    Still, though, some improvement would be great. I can assure you that throughput truly turns to hell once packet loss gets above 1%. If something like this can fix that, that is fabulous.

  2. Tim Starling

    MAC retransmission

    Yes, wireless sucks when you have IP packet loss, that's what MAC retransmission is for. The IP packet is only lost after the MAC layer exceeds its retry limit. You could make a test like this look a lot more flattering by disabling MAC retransmission in the baseline case, so that's the methodology data that I would be asking for.

    1. MurielMedard

      Re: MAC retransmission

      Dear Tim,

      you make an excellent point. We have indeed done research by disabling MAC retransmission. We have demonstrated the gains, which are quite significant, over a WiMax base station. We shall present a paper and give a demonstration, remotely using a WiMax base station in NJ, in a couple of weeks at MACOM near Dublin:

      http://www.macom.ws/macom2012_program.html

      Muriel.

  3. Francis Boyle Silver badge

    Ms Médard

    That's Professor Médard. Or indeed, Dr Médard if you don't want to get too academic. Sorry, Richard, you're usually better than this.

    1. Anonymous Coward

      Re: Ms Médard

      Well that sure dispelled the myth of patronising academia.

    2. Michael Wojcik Silver badge

      Re: Ms Médard

      "Dr Médard if you don't want to get too academic"

      "Too academic"? Since "doctor" means "teacher" (from docere), and by extension "scholar", it's as academic a title as you can get.[1]

      That said: Dr Médard holds a PhD and lives in the US, where "Dr" is indeed the preferred[2] title. Some US academics do use "Professor" instead, but it's less common.

      And that said, the Register is edited in the UK, and generally follows the editorial standards there, where academics are often cited as "Mr" or "Ms". So it becomes a question of calling someone by the title she and her immediate associates use, or the one preferred in the publication's (or author's) native country.

      And, of course, the Register uses a decidedly casual and cheeky style, where, among other things, people are referred to by all sorts of titles, nicknames, and the like, often in multiple ways in a single article. The researcher being cited in the Reg had best not be too thin-skinned.

      [1] And that's also why people-plumbers (MDs and the like) have no special claim on the title "doctor". Indeed, they're relative newcomers to it; back when medical types were as likely to hurt you as help you, some of them started calling themselves "doctors" in a bid for a bit of respectability. I have no argument with medical doctors (particularly researchers, but I'll be generous and extend it to non-researching clinicians) calling themselves that, but PhDs have a better claim to it than your neighborhood GP.

      [2] That is, most commonly used in general, and most prevalent among practitioners.

      1. Anonymous Coward

        Re: Ms Médard

        ""Too academic"? Since "doctor" means "teacher" (from docere), and by extension "scholar", it's as academic a title as you can get.[1]"

        Walk us through why "teacher" is synonymous with "scholar", and why someone with a PhD has a claim to be a "teacher"?

        1. MurielMedard

          Re: Ms Médard

          Dear all,

          I was amused by the light-hearted discussion around my title. The usual title I use is Professor. However, the vast majority of people, including the students in my group, simply call me Muriel.

  4. dssf

    Carrier Assent Required?

    Great, but will the carriers allow users to enjoy this without a price tier adjustment?

  5. This post has been deleted by its author

  6. Anonymous Coward

    Y'know, if my memory was at all reliable these days, I'd quote some references to papers on a similar theme from many many years ago.

    Fortunately there are things called "search engines" which sometimes help.

    One found me this:

    "When a wireless link forms a part of a network, the rate of packet loss due to link noise may be considerably higher than observed in a modern terrestrial network. This paper studies TCP performance over a range of link environments and highlights the advantage of recent modifications to TCP (e.g. SACK, New-Reno) for wireless communications. It also identifies two key issues which impact the performance of TCP over error prone links: TCP’s reliance on timers to recover from a failed retransmission cycle, and TCP’s inability to separate congestion packet loss from other types of packet loss. A solution to the first issue is identified and analysed by simulation, and the factors affecting the second issue are outlined."

    [Aberdeen University. 1998, apparently. Read that again: 1998. You know where to find it]

    (reposted without silly doublespacing.)

    1. Michael Wojcik Silver badge

      So perhaps you should look at Médard's papers...

      ... where you might find out how her work here is different from vague generalizations about packet loss reducing throughput.

      Yes, people are all generally aware that packet loss has a disproportionate effect on TCP throughput. That's not what's noteworthy here. The interesting material in the paper is the algebraic encoding scheme which lets the receiver recreate lost packets once sufficient information has successfully been transmitted, rather than using TCP's simplistic up-through-N acknowledgement mechanism. I can't say how original it is, as I don't pay a ton of attention to experimental improvements to TCP these days, but it's far beyond the précis you cite.

      But why bother doing a bit of reading, when in the fine tradition of Reg commentardation you can simply post a flip dismissal?

  7. Sean O'Connor 1

    Maybe I'm missing the point on this, but isn't buffering of small packets already done for TCP with Nagle's algorithm (http://en.wikipedia.org/wiki/Nagle's_algorithm)? Is this just simply about upping the limit on what is deemed a "small packet"?

  8. Eddie Edwards
    Boffin

    I'll attempt to explain this

    "coding schemes propose that the transmitter buffer several packets, encode them, and send them as a single transmission"

    No they don't.

    Network coding schemes are more like a RAID approach where instead of sending (say) three independent packets you send (say) four, such that the three original packets can be reconstructed from any three of the four. This uses linear algebra - each original packet is a vector, and a random linear combination of them is also a vector. If you send aX + bY + cZ *plus* a,b,c, three times, you end up with a 3x3 matrix. You can tell if this matrix has a unique solution - if it does, you find X, Y and Z; if it doesn't, you wait for another row of the matrix to arrive. a,b,c are just bytes, so you only add a few bytes to the packet size.
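
    Here's a toy numpy sketch of that idea (real-valued coefficients purely for readability; actual network coding schemes use a finite field such as GF(256) so a, b and c fit in a byte, and the packet contents below are made up):

      import numpy as np

      rng = np.random.default_rng(0)

      # Three original packets, reduced to 4-byte payloads for the example.
      X = np.array([1.0, 2.0, 3.0, 4.0])
      Y = np.array([5.0, 6.0, 7.0, 8.0])
      Z = np.array([9.0, 10.0, 11.0, 12.0])
      originals = np.stack([X, Y, Z])              # shape (3, 4)

      # Send four coded packets: each carries (a, b, c) plus a*X + b*Y + c*Z.
      coded = []
      for _ in range(4):
          coeffs = rng.random(3)                   # a, b, c
          coded.append((coeffs, coeffs @ originals))

      # Say the second coded packet is lost; any three survivors will do.
      received = [coded[0], coded[2], coded[3]]
      A = np.stack([c for c, _ in received])       # 3x3 matrix of coefficients
      B = np.stack([p for _, p in received])       # the received payloads

      if np.linalg.matrix_rank(A) == 3:            # unique solution exists
          recovered = np.linalg.solve(A, B)        # rows are X, Y, Z again
          print(np.allclose(recovered, originals)) # True
      else:
          print("wait for another linearly independent packet")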

    The MIT innovation applies to TCP/IP in particular. TCP/IP will throttle a link if packets are lost, assuming the link is congested (i.e. packet loss => the router is too busy). But on WiFi, packets can be lost at random owing to interference. So if you have a less-than-perfect WiFi signal, TCP/IP will throttle your link. To avoid this, the MIT guys want to give TCP/IP some indication that progress is still being made, even though packets are being lost from time to time.

    What the MIT guys are doing is saying, when we receive a packet, let's try to solve the matrix and find out how close we are to a solution. The matrix may leave 0 degrees of freedom in the solution (i.e. it's solved), 1 degree of freedom (we need one more linearly-independent combination of the packets), 2 degrees of freedom, etc. When the number of degrees of freedom is reduced, then it sends an acknowledgement that says "I am closer to receiving some information than I was before".
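
    To make the acknowledgement side concrete, here's an equally toy receiver (again just illustrative, with an invented message format, not the real implementation):

      import numpy as np

      class CodedReceiver:
          def __init__(self, generation_size):
              self.n = generation_size    # original packets per coded batch
              self.rows = []              # coefficient vectors held so far

          def on_packet(self, coeffs):
              """Return a progress 'ACK', or None if the packet adds nothing new."""
              candidate = self.rows + [coeffs]
              if np.linalg.matrix_rank(np.stack(candidate)) > len(self.rows):
                  self.rows.append(coeffs)          # new linearly independent row
                  return "ACK: degrees of freedom remaining = %d" % (self.n - len(self.rows))
              return None                           # linearly dependent: no new information

      rx = CodedReceiver(3)
      print(rx.on_packet(np.array([1.0, 0.0, 0.0])))  # ... remaining = 2
      print(rx.on_packet(np.array([2.0, 0.0, 0.0])))  # None (tells us nothing new)
      print(rx.on_packet(np.array([0.0, 1.0, 1.0])))  # ... remaining = 1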

    During congestion, a bunch of packets will go away, and the link will NACK as it did before (maybe a bunch of packets are lost in a row, or fully one half don't arrive => link is congested). But if packets go away at random but not enough to break the RAID recovery process, the receiver can still happily send ACKs indicating that progress is being made, even though packets are being lost.

    And this causes these amazing 10x throughput effects.

    I'm no TCP/IP expert so I don't know exactly how this integrates with the stack, but that's the basic maths behind it anyway.

    1. Yes Me Silver badge
      Boffin

      Re: I'll attempt to explain this

      What they are doing appears to have several components, none of which are new ideas by any means:

      1. At the wireless layer, they suppress retransmission (ARQ). The reason this helps is that the whole process of retransmission on a spotty wireless link multiplies the average time to send a packet by a factor of several, and makes the round-trip time seen by TCP very jittery, which increases the TCP retransmission timeout considerably.

      2. To minimise retransmission at the transport layer, they insert an extra layer (christened layer 2.5) that incorporates forward error correction.

      3. There may also be a TCP "performance enhancing proxy" in there too, but that is hard to tell from the press release style material.

      1. Eddie Edwards

        Re: I'll attempt to explain this

        I presume this wireless-level retransmission is only done for TCP (not UDP)? So you could in theory write a protocol on top of UDP which incorporated this e.g. have a server on a PC and a client on a phone and have them both understand the new protocol, without modifying the underlying OS-level TCP implementation?

      2. P. Lee

        Re: I'll attempt to explain this

        Completely off-topic but perhaps useful for home users:

        A local proxy should increase performance for most internet web traffic because any retransmits come from your local proxy, not over your WAN link. Squid is easy, even on Windows.
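
        For anyone who wants to try it, a bare-bones squid.conf along these lines is enough to get a local cache going (the sizes and the LAN range are just examples, tune to taste):

          # /etc/squid/squid.conf - minimal local caching proxy, illustrative values
          http_port 3128
          cache_mem 256 MB
          cache_dir ufs /var/spool/squid 2048 16 256   # 2 GB on-disk cache
          maximum_object_size 64 MB
          acl localnet src 192.168.0.0/16              # adjust to your LAN
          http_access allow localnet
          http_access deny all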

        Also handy is a gig-Ethernet cable (optical, if you run it next to power lines).

        Note to Business: Proxies are also good at branch sites. Stop centralising them to three expensive bluecoats in the main data centre. These things should be cheap and side-ways scaling at all remote sites.

        /off topic rant

      3. Michael Wojcik Silver badge

        Re: I'll attempt to explain this

        "To minimise retransmission at the transport layer, they insert an extra layer (christened layer 2.5) that incorporates forward error correction."

        Not exactly. As the followup in the article notes, it's forward erasure correction. The scheme, as outlined, applies to lost packets - not to corrupted ones. In that sense it's different from, say, group-coded error correction, where it's possible to correct for corruption that alters M of N bits.

        Note - I've only skimmed the paper, so I may have missed some place where they extend the scheme to cover data errors as well as loss. But using this sort of linear coding, I think - and I haven't tried crunching the numbers - that to get the effect they want, they have to restrict it to cases where they only have data loss, and they know they have data loss (courtesy of TCP sequence numbers). Then, if they assume the packets they did receive are correct, they can solve for the missing one.

        In this scheme, I believe you'd want to leave detecting transmission errors to a higher level, on the assumption that they're significantly less common than transmission losses. If you implemented full forward error correction on all traffic at the TCP level, the overhead would probably mean you had an overall loss of throughput.
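
        To make the erasure-versus-error distinction concrete, a toy example (nothing to do with their actual scheme): if the sequence numbers tell you which packet is missing, a single XOR parity packet rebuilds it; the same trick would happily "recover" garbage if a surviving packet had instead been silently corrupted.

          from functools import reduce

          def xor_bytes(a, b):
              return bytes(x ^ y for x, y in zip(a, b))

          packets = [b"AAAA", b"BBBB", b"CCCC"]
          parity = reduce(xor_bytes, packets)        # sent alongside the data packets

          lost_index = 1                             # known from sequence numbers
          survivors = [p for i, p in enumerate(packets) if i != lost_index]
          rebuilt = reduce(xor_bytes, survivors, parity)
          print(rebuilt == packets[lost_index])      # True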

  9. Why Fly
    Boffin

    Re: I'll attempt to explain this

    Turning off acknowledgements at the WiFi layer actually hurts performance, because this feedback is used to drive the selection of modulation parameters. When most frames make it through, the WiFi driver will choose a higher transmission speed, and when frames fail to be acknowledged, the PHY rate is reduced. A benefit (and a problem) with the lack of ACK feedback is that frames can be sent in a burst, one after each other, without having to perform the costly "listen for a random amount of time" collision avoidance.

    When we tried to use this approach we were finding bursts of lost frames. This meant that the erasure coding had to be over a large enough number of packets to be able to recover from these burst losses. We found that the best performance was achieved with a small number of WiFi retransmissions and when both the good WiFi frames and corrupted WiFi frames were fed into the error correction (because there was still useful information in many of the corrupted frames).

    Alas the IEEE folks in 802.11 didn't see it that way and wouldn't let us add this "layer 2.5" feature to 802.11aa.

  10. Alan Brown Silver badge

    wifi is inherently unreliable

    One way of "solving" the problem is to run a VPN from your wireless to a server on the wireside, then make that the default route to the world.
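
    By way of illustration only, the sort of OpenVPN client config that does this (the server name and certificate files are placeholders; the key line is redirect-gateway, which makes the tunnel the default route):

      # client.ovpn - placeholder values throughout
      client
      dev tun
      proto udp
      remote vpn.example.com 1194
      resolv-retry infinite
      nobind
      persist-key
      persist-tun
      ca ca.crt
      cert client.crt
      key client.key
      redirect-gateway def1    # send all traffic via the tunnel
      verb 3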

    Works for me.

    The nice/fancy scheduling mechanisms work OK, but they generally need to be from wireside to client, and very few hosts around the world implement anything other than Reno.

  11. Smelly Socks
    Facepalm

    what are you going to pessimise?

    For sure, encoding schemes like this work for individual connections, but my oh my they don't scale. If your wireless connection is dropping packets, particularly on 3G, then the underlying layers will be doing a mad scramble to try to deliver the packets reliably. 3G is particularly good (or bad, depending on your perspective) at this and you can often end up with crazyass packet RTTs due to multi-second retransmissions when it goes to massive lengths to get your data through.

    But at what cost? Spectrum bandwidth, that's what. You've already got the lower layers doing fancy retransmission stuff, and if you augment this with upper-layer packet spray, you will end up wasting a huge amount of spectral bandwidth. This sort of nondeterministic thundering-herd problem will cause overall network performance to fall off a cliff, and then everyone will be unhappy because everyone's multi-layer retransmissions will be fighting with everyone else's. IOW, total fail.

    The same happens on wifi, but to a lesser extent because the MAC retransmission will time out sooner.

    Turns out there's no such thing as a free lunch. Who knew?

    -ss

  12. Dolapevich

    Who cares about wireless speed?

    Even though most of the drone marketing speech has been centered on speed, does anybody care?

    Most of the wifi APs I know are idle 99% of the time.

  13. Daniel von Asmuth
    Facepalm

    performance boost

    I just patented a technique that will boost a network connection with 2% packet loss to 20% packet loss, without significant reductions in bandwidth or roundtrip times.

  14. John Sager

    Nice idea, but it won't happen

    Because it's end-to-end, just like TCP is, it requires a large number of endpoints to change just because, probably, one link in the chain has high packet loss. What would be more useful is a network encoding protocol built into the wireless link layer protocol. I know it fits pretty naturally into TCP's windowed protocol, but that's not enough to swing it IMHO.

This topic is closed for new posts.