Boffins strap turbocharger to BitTorrent

Cue a new round of fast-network scare-mongering from the world's content owners: a group of information theorists from the US, France and Finland believe that with a bit of tweaking, P2P networks can become even more efficient. In fact, if their maths is correct – and their ideas could be deployed on a large scale – their …

COMMENTS

  1. Trevor_Pott Gold badge

    So they've discovered Google FS? I thought that was an off-the-shelf thing at this point. The only real difference is that a proper P2P network would have a distributed "name node" structure (think Nutanix here) instead of the single-point-of-failure so common to the earlier implementations of things like Hadoop.

    We're just talking about turning an ISP's last-mile network into a gigantic Hadoop cluster which then connects via a Fat Link to some other gigantic Hadoop cluster on some other ISP's last mile. (Well, not actually Hadoop, but you get the idea.)

    That's (maybe) okay if you are talking about a "mostly isolated network" like xDSL, but this would play merry hob with DOCSIS-based (cable) modems and infrastructure. Google Fibre as the base? Maybe. But when you're at a Google Fibre level of "to the premises," are you really putting much compute/storage/etc in the individual house? If you were sitting on a pipe the size of the Mississippi, then you'd be a perfect candidate for "as a service" streaming and storage of all your data. Once you've got that kind of bandwidth, for $deity's sake, toss your non-unique data into the cloud and let someone else deal with the headache of managing and maintaining it.

    I am not against fundamental research, but this does seem as though it won't be a "peer to peer" network in the traditional sense. Interesting theoreticals on a CDN, though.

  2. Cipher
    FAIL

    WTF?

    "What the paper suggests is that if upload capacity is no longer a constraint,"

    A mighty big IF. And their "solution" is based on this.

    And they just figured out that distance has something to do with speed? They even claim to be the first to realize this? Clearly they have never actually seeded a freakin' thing in their ivory tower lives...

    Nonsense...

    1. Richard Chirgwin (Written by Reg staff)

      Re: WTF?

      Note that they're not talking about distance as in "metres of copper / fibre whatever". Distance in the context of the paper is the topology - how many nodes between you and me, for example.

      1. Cipher

        Re: WTF?

        Richard Chirgwin was helpful thusly:

        "Note that they're not talking about distance as in "metres of copper / fibre whatever". Distance in the context of the paper is the topology - how many nodes between you and I, for example."

        Doesn't more nodes generally equal greater physical distance? I cannot imagine that it doesn't.

        1. Michael Wojcik Silver badge

          Re: WTF?

          > Doesn't more nodes generally equal greater physical distance? I cannot imagine that it doesn't.

          Imagine harder.

          I'm 18 hops from systems in the msu.edu domain, less than 15 miles from me. I'm 6 hops from a machine on our corporate network that's physically 3700 miles (geodesic - a bit shorter if you tunnel through the crust) away.

          I've seen corporate networks where there were more routers between machines in the same building than there were between most of those machines and some Internet sites.

          It's likely that the average geographically distant site will be a handful of hops further away than the average local site. But the relationship is far from linear, and there are a lot of exceptions. So topological distance is not well correlated with physical distance.

  3. Magani
    Joke

    Yippee!!

    NBN + New Improved P2P = High speed Pr0n for all[*]

    [*] Aged over 18, of course.

    1. Anonymous Coward
      Anonymous Coward

      Re: Yippee!!

      At 4K with teledildonics!

  4. jake Silver badge

    Sounds more like a positive displacement blower (Roots, Lysholm, whatever) ...

    ... than a turbocharger.

  5. Anonymous Coward
    Anonymous Coward

    Listen....

    You can hear the media executives' children starving already.

  6. Killraven
    Boffin

    Education Please?

    Okay, this is far from my area of expertise, so I'd appreciate some correction to my confusion.

    As to the stated issue that (currently) the big constraint on P2P speed is people's upload capacity: as I understand it, that's only a BitTorrent issue for torrents with few seeders. Isn't the basic premise of torrenting that the more active users a torrent has (both seeding & leeching), the faster it will perform? After all, even if every single seeder only had a 256Kb/s upload speed, a leecher connecting to 1,000 of them would, theoretically, max out their download speed anyway.

    So, as was my understanding, the only real limit to your torrent download speed is how many peer connections your hardware and 'net connection can handle at one time.

    What am I missing here? The info in the article seems to be a big "so what?" to me.

    1. neek
      Happy

      Re: Education Please?

      > even if every single seeder only had a 256Kb/s upload speed, a leecher connecting to 1,000 of them would, theoretically, max out their download speed anyway

      I'm no expert either, and this paper assumes that upload speed is no longer the primary constraint, so this argument is moot in the context of the paper, but let me point out that your theory only works for one leecher. One leecher connects to 1,000 seeds, they all give him their full 256Kb/s of upload, and he's happy. But with 100 leechers all connecting to those 1,000 seeds, the seeds cannot give every leecher their entire 256Kb/s. Using my Mickey Mouse math, you (one of the 100 leechers) would get 2.56Kb/s (256 / 100) from each seed, so a 2,560Kb/s download rate in total. In real situations I doubt the figures work out as beneficially as you suggest: more leechers or fewer seeds, and your actual download rate won't reach your download capacity.
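
      Spelled out, assuming every seed splits its upload evenly across all connected leechers (a back-of-the-envelope sketch; real swarms are messier):

        seed_upload_kbps = 256      # each seed's upload capacity
        seeds = 1000
        leechers = 100

        # Each seed divides its upload evenly, so one leecher's share of a
        # single seed's capacity is:
        per_seed_share = seed_upload_kbps / leechers   # 2.56 Kb/s

        # Connected to every seed, one leecher's aggregate download rate is:
        aggregate_kbps = per_seed_share * seeds        # 2560 Kb/s (~2.5 Mb/s)

        print(per_seed_share, aggregate_kbps)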

      This is probably why you often see a peer in your connections list giving you very low rates: they're already serving other nodes, or have had their upload rate severely crippled, meaning that your connection to them is achieving very little for you.

      Simply increasing the number of network connections your hardware can handle won't help the fact that the capacity of the nodes you're connecting to is being maxed out.

      The point of this article seems to be that if we remove the fact that nodes have limited upload rates, the next most important thing is to try to talk to nodes near you, so your 1Gb/s download comes from a node 2 hops away rather than 50 hops away on the other side of the planet. As has been said above, this is so obvious it's already well implemented in existing solutions and is nothing new.
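
      For illustration, "talk to nodes near you" boils down to peer selection along these lines (a toy sketch: the addresses and hop counts are made up, and a real client would estimate closeness with latency probes or ISP-supplied hints rather than raw hop counts):

        # Toy sketch: prefer topologically close peers.
        def pick_closest_peers(candidates, n=2):
            # candidates: list of (address, hop_count) pairs
            return [addr for addr, hops in sorted(candidates, key=lambda c: c[1])[:n]]

        peers = pick_closest_peers([("192.0.2.10", 3), ("198.51.100.7", 2),
                                    ("203.0.113.5", 27), ("192.0.2.99", 14)])
        # -> ['198.51.100.7', '192.0.2.10']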

      1. Killraven
        Thumb Up

        Re: Education Please?

        Righto. My way of thinking about it was not grokking that the nodes are getting bogged down due to massive hopping. I was thinking more of user upload rate, not node upload capacity.

        Thankie.

      2. Anonymous Coward
        Anonymous Coward

        Multicast...

        *IF* we could get multicast to work well on the Internet, and *IF* we could get Bittorrent set up to use multicasting efficiently, then we could see this sort of super-linear scaling, because a peer's upstream could be reused across all clients needing that data.

        Of course, we just

        a) need to make multicasting work on the greater Internet, not just local networks. How many ISPs correctly handle the needed messages to allow a customer to join or leave a multicast net?

        b) since multicasting doesn't work well for acknowledged packets (TCP), you need to use UDP, and thus you need to deal with dropped packets more gracefully. You'd need some form of drop-tolerant forward error correction to be applied.

        c) If your download speed is 4Mbit/sec, and the guy feeding the multicast is running 1Gbit/sec, you aren't going to get all the packets - you aren't going to get MOST of the packets. There would have to be a way to allow for that sort of asymmetry in the speeds (e.g. high speed sources would have to put out a large number of low speed multicasts).

        Maybe this could be the "killer app" to make Joe Bloggs give a Murinae rectum about IPv6.
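
        To make (a) and (b) concrete, here's a minimal sketch of plain UDP multicast using Python's standard socket module. No FEC and no rate control, and the group address and port are made up for illustration:

          import socket
          import struct

          MCAST_GRP = "239.192.0.1"   # illustrative multicast group
          MCAST_PORT = 5007           # illustrative port

          def send_chunks(chunks):
              """Blast chunks to whoever has joined the group."""
              sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
              sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
              for seq, chunk in enumerate(chunks):
                  # A sequence number lets receivers spot drops; repairing them
                  # (point b) needs FEC or a retransmission channel on top.
                  sock.sendto(struct.pack("!I", seq) + chunk, (MCAST_GRP, MCAST_PORT))

          def receive_chunks():
              """Join the group and collect whatever arrives."""
              sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
              sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
              sock.bind(("", MCAST_PORT))
              # The group join below is the membership signalling ISPs would have
              # to propagate for point (a) to work beyond the local network.
              mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
              sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
              got = {}
              while True:
                  data, _ = sock.recvfrom(65535)
                  seq = struct.unpack("!I", data[:4])[0]
                  got[seq] = data[4:]   # gaps in the sequence numbers are lost packets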

  7. Sil

    Interesting, but most people have uplinks 20x slower than their downlinks & I don't see any sign that ISPs want to reverse the trend.

    They are perhaps thinking of the awesome free xxGB/s lines that link a few privileged universities.

    1. Khaptain Silver badge

      Most ISPs probably don't want to reverse the trend, but the first company that provides fast symmetric connections at reasonable rates will force all the other ISPs to follow suit. This is where market competition really benefits us.

      1. Colin Miller

        Will it? In my case I'm paying for 16Mb/s:1Mb/s but my ADSL modem reports that it synced at 13:1. I assume that's the limit of my phone line. If I move from ADSL to SDSL, will it be 7:7? And would most customers be happy with this?

  8. Dave 126 Silver badge

    >people have uplinks 20x slower than their downlinks & I don't see any sign that ISPs want to reverse the trend

    They might be under more pressure to change if more consumers start using cloud services and off-site backup. I have a bog-standard domestic connection, and it is a little tedious sending modestly-sized files of my own creation (images, a few animations) to clients and collaborators.

This topic is closed for new posts.