Meltdown ahoy!: Net king returns to save the interwebs

When you need to save the internet, who ya gonna call? Van Jacobson. Long before Facebook snared 500 million users, before Gmail revolutionized web email by stuffing inboxes with free storage, and long before Jim Clark and Marc Andreessen developed Netscape as the first commercial browser, the web couldn't cope. It was 1986 …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Anonymous Coward

    A teeny tiny little conundrum.

    It's basically DRM writ large. That's one.

    For another, though I haven't read up on the details yet, it would be all too easy for various governments to start demanding you sign every single packet with the legal identity your government so graciously bestows upon you. Possibly using the RFID-enabled ID card the law already requires you to have and carry with you everywhere in many a country, such as China, Russia, Germany, France, the Netherlands, Belgium, and so on. Even the US is making headway on this. So it's averted for now in the UK, but will that last?

    The problem, of course, is that people have multiple identities, or "faces". Your identity changes according to your environment; your relationship with your parents is different from the one with your friends or your colleagues, and much of the ruckus with Facebook and the like --like when people get fired for blogging about a private pursuit-- comes when work and play intermingle. With no way to separate those faces, the collisions are inevitable.

    And, this being the internet, I'm not sure the DRM will last. Even when undeniably useful to everyone and not just the big media companies. And once broken all you have left to protect you is a lot of very expensive snake oil.

    I think it's very clear we increasingly need something. I'm not sure this fits that something. I'm even less sure the underlying assumptions are reasonable, desirable or tenable. There don't seem to be many people willing or able to think seriously about the issues in a large enough context, though. This, too, seems a largely technology-centered approach.

    Essentially they're "smarting up" the network so that it knows about chunks of content instead of passing packets (ip, udp, etc.) or streams (tcp, sctp) of bytes (excuse me, "octets", and yes I know why) back and forth, so that you can address those "objects". That's nice because it ought to give you greater control over the individual chunks, but that doesn't address the identity conundrum.
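    To make the "addressing objects" idea concrete, here's a toy sketch (my own illustration in Python, not anything from the actual CCN spec): name the chunk by a digest of its bytes, so any node holding a copy can satisfy a request for it, and the requester can verify what it got.

        import hashlib

        store = {}  # a node's local cache: name -> chunk

        def content_name(chunk):
            # The name is derived from the data itself, not from where it lives.
            return hashlib.sha256(chunk).hexdigest()

        def publish(chunk):
            name = content_name(chunk)
            store[name] = chunk
            return name

        def fetch(name):
            # Any copy with a matching digest will do; verification is built in.
            chunk = store.get(name)
            if chunk is not None and content_name(chunk) == name:
                return chunk
            return None

        name = publish(b"one chunk of that video")
        assert fetch(name) == b"one chunk of that video"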

    Smarter networking might make the identity conundrum more visible, so in another 25 years maybe it'll have grown large and unavoidable enough that we'll have to try and finally get it licked. Though I doubt the grey beards in this article will be around then. Who're we gonna call?

    Personally I'd just as soon address it right now, for I can see it coming already.

  2. phen

    As awesome as it is impractical

    How do they propose to solve the PKI problem? Sure, you and I can run our own mini CAs, but my parents wouldn't have a clue.

    On top of that, my cellco charges by the meg. Am I going to be able to tell this protocol I can't afford to host youtube's most popular videos?

    TCP/IP works because almost everyone (techs included) can ignore its existence. This seems a little more intrusive.

  3. Blofeld's Cat
    Thumb Up

    Faceook?

    As in "Next page: The terror of Faceook"

    I like that description - I shall use it whenever possible.

    It conjures up an image of an orang-utan hunched over the keyboard and baring his teeth.

  4. TeeCee Gold badge
    WTF?

    "....but it will finally start becoming reality in 2011."

    Oh yeah? Pull the other one, it's got bells on.

    Just upgrading the "IP" part of TCP/IP to v6, an essential change to stop the whole shebang falling on its arse, has been dragging on interminably for years now. It still hasn't happened on any serious scale either.

    But uprooting the entire internet for a fundamental protocol change that should give better data security will start showing results in 2011? Let me know how that's going in a limited pilot.........some time around 2020 sounds about right if everyone takes it *really* seriously......

  5. Anonymous Coward
    Stop

    reinventing wheels

    So, we will have routers caching content locally, passing it only to those who have requested it, without needing to resend it from the source server each time.

    Very clever. Does "Multicast" ring any bells?

    It's been in the standards for 25 years (RFC 966), and if the ISPs had actually implemented it we might not have these problems.
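    For anyone who's forgotten how simple the receiving side is, here's a rough Python sketch (the group address and port are made up for illustration). The sender transmits one copy; the network fans it out to every member that has joined:

        import socket
        import struct

        GROUP = "239.1.1.1"  # made-up administratively-scoped multicast group
        PORT = 5004

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))

        # Ask the network for this group's traffic (an IGMP join, in effect).
        mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

        data, sender = sock.recvfrom(65535)  # one transmission, many receivers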

  6. Gordan

    DHT?

    This sounds suspiciously like a bare-metal implementation of a Distributed Hash Table network. The concept is hardly new, and there are a number of interesting implementations of it already, e.g. Entropy and Freenet. DHT is also increasingly used in more conventional P2P networks, such as BitTorrent and eDonkey/eMule.
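    The core trick, for anyone unfamiliar, is tiny: keys and node IDs share one ID space, and a key is stored on whichever nodes are "closest" to it by XOR distance (the Kademlia flavour that BitTorrent's DHT uses). A toy Python sketch with invented node IDs:

        import hashlib

        def key(name):
            # Content name -> 160-bit key, in the same space as node IDs.
            return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

        # Toy network: eight made-up node IDs.
        nodes = [key("node-%d" % i) for i in range(8)]

        def closest_nodes(k, n=2):
            # The n nodes responsible for storing (and finding) key k.
            return sorted(nodes, key=lambda node: node ^ k)[:n]

        # Any node can compute where a piece of content should live:
        print(closest_nodes(key("some-video.mkv")))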

    1. Woodgar

      re: DHT

      Agreed. Freenet was the first thing that came into my head as I was reading this.

  7. Peter Galbavy
    Thumb Up

    someone worth listening to

    I'd much rather listen to what VJ has to say than to self-promoting marketing folk like Cerf. The real 'net was built by a small handful of people and this guy is one of them. Notice how quiet most real engineers and inventors are? It's a pleasant surprise to see him pop up again quite so publicly.

  8. Anonymous Coward
    Pint

    Title

    Get that man a pint

  9. Jason Bloomberg Silver badge
    FAIL

    I don't get it

    Whenever anyone starts talking about improving the internet they seem to be pointing to multiple and distinct problems which appear to have little relationship with one another ...

    "Everybody wanted the same video, but the NBC router had no idea. It thought it was handling 6,000 different conversations not 6,000 requests for exactly the same piece of content"

    It was handling 6,000 different conversations, admittedly all the same. The solution was caching, which didn't change the issue, just pushed the overload problem away from the central server. There were still 6,000 conversations all asking for the same thing. You can distribute the video far and wide, but if 6,000 people want it you'll still have 6,000 conversations asking for it.

    The only thing that changes is where you tell them to get it from and who delivers it. That can all be handled by TCP/IP without any change. TCP/IP is simply one layer of the equation and higher level protocols deal with what's wanted, where it is and arranging for delivery.

    To say TCP/IP is fundamentally broken and that we need to rip it up and do something different seems entirely flawed to me, and it will be a tough sell.

    1. Rich 27

      think you nailed it

      I was thinking the exact same thing.

      1. Al Jones

        When all you have is a hammer everything looks like a nail

        Van Jacobson is trying to get beyond "I can make TCP/IP do that" and asking "if you were starting from scratch, knowing what you know after 40 years of TCP/IP, what would you do differently?".

        He's saying that there are better ways to solve these types of problems than just bolting yet another tweak onto TCP/IP.

        Just because your Swiss Army knife has a dozen functions doesn't mean you can't do the job better with a tool that was designed for it.

  10. Lui-g

    That's just not going to work...

    "That download could be on a local router or the smartphone of the guy sitting next to you on a plane."

    I agree with the concept, but only down to an ISP level - not a device level.

    Otherwise we'd all become content peers for anything we've seen... and I can see a number of problems with that:

    Bandwidth (your device becomes a server for all the other people around you who want the same content)

    Privacy - since it will be designed to be shared out, it will undoubtedly be possible to see what content you've viewed - more easily and discreetly than ever before (since the network activity would not look suspicious in a peer environment).

    We definitely need a solution, but it should be far easier for ISPs to cache popular content (which they must do already) than to change every device on the planet so they can become servers too.

    One day, when devices have battery life measured in months or years and bandwidth isn't an issue, then sure. But I think it may be a while yet. Maybe I'm not looking far enough into the big picture...

  11. Anonymous Coward
    Stop

    Web, Internet

    The web is not the same as the internet, you can't use the terms interchangeably!

    Internet: A set of interconnected computer networks

    Web: An application that runs on top of the internet, using HTTP

    1. Anonymous Coward
      Anonymous Coward

      re: Web, Internet

      This is the Reg, so I think most of us are aware of the difference. While the problem is one of Internet bandwidth, it's being driven by a proliferation of rich Web content, so in this context it might be acceptable to use the terms interchangeably.

      1. Olof P
        Dead Vulture

        But saying that in 1986 the Web was caving in under its own weight is plainly false

        Since the Web wasn't invented until circa 1990

  12. Anonymous Coward
    Anonymous Coward

    800 billion nodes?

    Erm, that'll be 800 million methinks.

    1. Tony S

      ditto

      I was about to make the same comment. The chart used doesn't show a scale multiplier, so that should mean it is just as it appears - 800,000,000 = eight hundred million.

  13. Anonymous Coward
    Coat

    Dumping costs onto individuals?

    There will have to be some fundamental changes to data charging in the UK before this is workable - especially for those captive to BTw-based services, where the slowest speeds and lowest caps are matched by the highest prices. If we are in effect serving content, that will come out of our data allowance, which will price usage out of a lot of people's pockets in Market 1 areas, where there will be much less control over bandwidth usage under this model, as it looks from my quick skim.

    All too often now we see policy dictated by the assumption that people have generous data allowances and decent speeds - a dangerous idea in the UK's fractured and cherry-picked market, where the biggest brake on progress is BTw acting like a robber baron of old, charging the most for the least in a complete reversal of the normal order.

    Coat - that's BTw coming back for even more money for the last-gen connection (x2 in the case of FTTx)

    1. Al Jones

      That's exactly why you can't fix this by tweaking TCP/IP

      If your ISP doesn't have to carry 6,000 copies of that video, because the copies that are already in its network can be transparently redistributed within the network, its infrastructure costs can go down, so it doesn't need to charge you for your part in distributing the data.

      If you insist on trying to translate CCN into your own TCP/IP view of the world, you're missing the point.

  14. Anonymous Coward
    FAIL

    Transparent Proxies, Free Enterprise Fix ??

    So this guy thinks the intertubes are overloaded by people downloading YouTube stuff and Facebook images of popstars?

    The generic fix is to raise traffic costs. People would start to think about whether they *really* need that YouTube vid, just as they think about whether they need the light on all night in the bathroom. Here in Germany we pay 20 cents/kWh and turn off the light when it's not required.

    The second fix is to have transparent HTTP proxies at the DSL exchange and the national exchange levels. So if 100,000 people from Germany want a specific YouTube video of their kiddie-star, a server at DECIX in Frankfurt would serve it. No need to download it from California. URLs are exactly Data Identifiers (keys for the cache), as requested in the article. In major cities like Berlin, Muenchen and Duesseldorf, the YouTube vid would sit in the DSL exchanges.

    A *single* fiber can nowadays easily carry 200Gbit (200*10^9 !!) PER SECOND. I can't see that the end of the nets is anywhere close. I can only see people doing stupid business models ("all you can eat"), and that could create a financial problem. But certainly NOT a technology problem.

    1. Gordan

      Re: Transparent Proxies

      Transparent proxies are a good idea in theory, but the problem is that if you stick 100% to the spec, the amount of content that is cacheable is relatively small. I run a number of squid proxies, and for general-purpose browsing, due to the way most websites are put together (including this one!), the amount of properly cacheable content is typically less than 10%. Some large ISPs (<cough>Virgin Media</cough>) work around this by forcing a minimum caching period on content to try to reduce external bandwidth usage, but this ends up breaking things. Not to mention that a lot of sites are moving toward being fully SSL-ed, which is implicitly uncacheable (unless you perform a man-in-the-middle attack, which requires all the client machines to trust a CA that issues fake certs to the proxy for whatever site is being accessed, in near-real-time).

      Caching proxies are a good idea, but they are not as effective as you might hope. :-(
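      As a rough illustration of why that cacheable fraction is so small, this is the kind of test a proxy has to apply to every response before it dares cache it (a much simplified Python sketch; squid's real rules are far longer):

          def is_cacheable(status, headers):
              # A simplified subset of the HTTP caching rules a proxy applies.
              cc = headers.get("cache-control", "").lower()
              if status != 200:
                  return False
              if "no-store" in cc or "no-cache" in cc or "private" in cc:
                  return False
              if "set-cookie" in headers:  # usually personalised content
                  return False
              if headers.get("vary") == "*":
                  return False
              # No explicit lifetime usually means it isn't safely cacheable.
              return "max-age" in cc or "expires" in headers

          print(is_cacheable(200, {"cache-control": "private, no-cache"}))      # False
          print(is_cacheable(200, {"cache-control": "public, max-age=86400"}))  # True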

  15. pip25
    Pirate

    Already exists

    It's called BitTorrent. Admittedly it's without the privacy and DRM stuff, but it could be extended if someone's really interested...

  16. JimC

    I love all these experts here...

    I wish I was so smart that I could rubbish the work of someone of this calibre in a 30 second comment without making myself look foolish...

    1. Anonymous Coward
      Anonymous Coward

      Was thinking the same thing

      ...especially regarding the "[TCP/IP already does this]" comments. Just call it a hunch, but maybe... just maybe... this guy already knows what TCP/IP does and doesn't do seeing as he was one of its creators.

      I for one am interested in what he has to say.

  17. Steven Bloomfield
    Thumb Up

    This is the future

    I really got excited reading this article; CCN seems very sensible and a logical progression for the future of the Internet & WWW.

    I am pleased to see that the first implementations are through Android. Again, a very intelligent way to implement CCN - on the devices whose adoption is growing the fastest. PCs & servers will then have to adopt CCN once the de facto way of accessing ‘The Net’ is over CCN. Actually I don’t mean PCs, I mean Workstations.

    The term PC seems somewhat outdated now. At home I use a laptop, my Android phone, xbox. Maybe Santa will get me a slate for Christmas!

    In this article I do think some of the examples were poor; as people have pointed out, multicasting / DHT already deals with same-content bottlenecks.

    I was also just reading the article "Berners-Lee: Facebook 'threatens' web future"

    http://www.theregister.co.uk/2010/11/20/berners_lee_says_facebook_a_thret_to_web/

    That article touches on the same underlying issues that CCN is addressing.

    Instead of entering your information repeatedly, you control which sites can see which information.

    Just like installing an application on Android, you allow on a per website basis what information they can access from YOUR database.

    No need to re-enter the same information again and again. You can see who has access to your information and change access rights.

    I think this would be a great way to utilise CCN.

  18. Dan 55 Silver badge

    Tough sell

    Too many big companies monopolising people's data won't want to give it up. Youtube and Facebook simply wouldn't exist because each of their users would own their own videos or profiles. The best service they could offer would be a search engine which would be more or less like their competitors' video and social networking sites and have all the same videos.

    It rolls back the consumer part of the Internet: users become producers and would need the tools to publish from their ISP, who traditionally aren't keen on letting them do that because it would mean investing in their network and employing tech-savvy people.

    And there'll be a fight over the bandwidth and storage bills. The bean counters would ask why one ISP should cache data which originated on another.

    That's a lot of interests to topple.

  19. Dan Atkinson
    FAIL

    Web in 1986?

    C'mon, a technology news site talking about the web in 1986? It should be a basic fact taught in schools that the web is not the Internet, and the two terms should not be interchangeable.

  20. dogged

    The mentality issue

    I think Van still instinctively thinks in terms of hubs and terminals. Now, with the growth in home NAS - effectively servers - this becomes more feasible as an outlook again.

    I've seen a few comments about becoming a pay-per-meg host and those are relevant, but consider for a second:

    If everyone hosts on their own "home server", with devices publishing content to that server by default, then infrastructure costs can be subsumed into broadband subs. Admittedly, this would require ISPs not to be jackasses (and their record on IPv6 is not encouraging in this regard) but I think it can work. Distributed hosting as the future - why not?

  21. chris lively
    FAIL

    Not even close and certainly No cigar..

    "That download could be on a local router or the smartphone of the guy sitting next to you on a plane"

    Okay, so in essence hackers, foreign governments, etc., could reshape traffic themselves by requesting the initial page/video/whatever from the original server and then telling that server "Oh don't worry, we'll send that out for you!"

    Great. Man in the middle anyone?

    Somehow I fail to see how this provides more security. Sure, it provides (or rather forces down everyone's throat) the idea of data sharing. But I don't think most people ever confused peer-to-peer with increased security.

    He had a great idea that worked out pretty well and for that I'm very thankful. But I think he may be a one hit wonder.

    1. Anonymous Coward
      Anonymous Coward

      did you

      Did you read the spec or are you just rambling?

  22. Peter Gathercole Silver badge

    He outlines the problem

    and the general solution, but not something that could work at the detail level, IMHO.

    The drawback of his solution is in how you actually address the content that you want. If you look at the way P2P networks work (they have been mentioned several times in the comments), you need some form of global directory service that can index content in some way so that it can be found.

    Just how on earth do you do this? Google is currently the best general way of identifying named content (for, say, torrents) and just look at how big the infrastructure for this is.

    If you are trying to de-dup the index (to use an in-vogue term), you are going to need some form of hash, together with a fuzzy index of the human-readable information to allow the content to be found. I'm sure it sounds interesting, but I cannot see it happening. To illustrate what I mean by a hash plus a fuzzy index, see the toy sketch at the end of this comment.

    Anyway, this could be added as a content-rich application enhancement over IP, using something like multicast, especially if IPv6 is adopted.
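    Here's that toy sketch (Python, names invented by me for illustration): identical content is stored once under its digest, and the human-readable descriptions feed a keyword index that resolves back to digests.

        import hashlib
        from collections import defaultdict

        by_hash = {}                      # digest -> content, de-duplicated store
        keyword_index = defaultdict(set)  # word -> digests, the "fuzzy" lookup side

        def add(content, description):
            digest = hashlib.sha256(content).hexdigest()
            by_hash[digest] = content     # identical bytes stored exactly once
            for word in description.lower().split():
                keyword_index[word].add(digest)
            return digest

        def search(term):
            # Human-readable search resolves to exact content hashes.
            return keyword_index.get(term.lower(), set())

        add(b"<video bytes>", "cute cat video")
        add(b"<video bytes>", "funny cat clip")  # same bytes, indexed twice, stored once
        print(search("cat"))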

  23. Jon Green
    Coat

    I won't believe it...

    ...until I hear it from the true inventor of the Internet, Al Gore.

  24. Daniel B.

    For an expert in "TCP/IP", he seems to forget stuff

    TCP/IP is usually known as "the Internet protocol", but there are a lot of things in the TCP/IP stack that are able to handle some of this stuff.

    Multicast, anyone? ISPs seem to filter these things out, even though a large chunk of IPv4 address space has been assigned to it. It was designed exactly for one-to-many transmissions!

    BitTorrent has also mostly solved the central-vs-distributed situation; most games actually use the protocol for updates these days.

  25. Kevin McMurtrie Silver badge
    FAIL

    I'm CEO, bitch

    Major web sites will never adopt this. Maybe Van Jacobson hasn't worked with a dot-com. They're all a bunch of data-hoarding control freaks, offering "personalization" to differentiate themselves from all the others. Any attempt to shield your personal data will simply result in refusal of service. Premium content is encrypted and watermarked per viewer, so it's not cacheable. Web pages are personalized and varied for marketing testing, so they're not cacheable. The 150 or so bandwidth-sucking animations and images on each web page are for customer tracking (the payload is the request), so they're not cacheable. What remains for caching is a few static images and low-value videos. It's not a lot, and it's easily covered by existing edge cache services.

    The protocol might be useful for personal servers to replace clumsy Torrents. Locating data will still be a problem, though. Who will build an indexing service? Speaking of Torrents, Hollywood should be demanding tracing and takedown controls in the CCNx protocol soon.

  26. Anonymous Coward
    Anonymous Coward

    This is what happens when you privatize

    the companies in charge of the Internet, as has happened in the country in which I live. Any profit goes to paying huge salaries/bonus/stock incentive/golden parachutes for the upper management and a few bones for the stock holders. Very little goes back into the infrastructure and with the expected growth of usage the infrastructure, in many areas, will simply not be able to cope.

    Anyway, just another example of politicians selling out the people they should have been serving and corporate greed. Of course, there is also the issue of many not being able to afford to pay for Internet connectivity once these corporations have outsourced our jobs, possibly allowing us to keep our jobs if we concede to the 40 to 60 percent pay cuts that the same work pays at the outsourced companies.

  27. json

    TCP/IP succeeded because it's relatively simple

    and requires relatively low-spec devices like routers and managed switches. Things like caching and distributing content (i.e. pushing content to the edges) are already being implemented via CDNs, e.g. Akamai and AWS.

  28. Anonymous Coward
    Thumb Down

    Start by getting the history correct.

    The problem described at the beginning of the article (the Internet congestion collapse) had been a known problem for a decade or more when it occurred in 1986. People had been working on it for that long. There had even been conferences on the topic as far back as 1979. But the Internet gurus didn't believe it and were caught flat-footed. The "fledgling TCP/IP protocol" referred to in the article had been under development for 12 years at that point. It was far from "fledgling." In fact, at the time, the same Internet gurus called it the most tested protocol around! (What were they testing? Apparently not what mattered.)

    The scheme that Jacobson developed, described here, was a known stop-gap at the time. TCP was never intended to do congestion control and shouldn't. Any book on control theory will tell you to put this kind of thing as close to the problem as you can get it. Putting it in TCP puts it as far from the problem as you can get. (Thus creating the greatest thesis generator ever: by putting it there, there is no real solution, so one can always come up with some tweak that is a little bit better in particular circumstances, write a thesis on it and get a degree, but never solve the problem, thereby allowing someone else to come up with another thesis topic.)

    It is a long story, but this scheme is fundamentally flawed, and flawed in such a perverse way that replacing it is virtually impossible. So in a real sense the Internet is screwed. But that is only one of the fundamental flaws in the current architecture.

    As to the CCN proposal, one comment here already alluded to the problem when they said it was already done by BitTorrent. This is *not* a network architecture. It might be an application architecture, but it isn't a network architecture.

    Some years ago, Jacobson gave a talk on this at Google (it is online). At the beginning of the lecture he calls for a "Copernican Revolution" in networking. At the end of the talk he says to leave TCP/IP and below alone and paper over it. Some revolution!
