Microsoft stamps on HTTP 2.0's pedal, races to mobileville

The HTTP protocol - one of the web's foundation specifications - is getting a speed and security revamp. The Internet Engineering Task Force (IETF) is this week holding meetings on what it's calling HTTP 2.0, "a major new version" of the ubiquitous data transfer protocol. The changes apply to HTTPbis, the core of the …

COMMENTS

This topic is closed for new posts.
  1. Cave Dweller
    Trollface

    HTTP Speed+Mobility

    HTTP S&M? The internet wants it faster.

  2. Anonymous Coward
    Boffin

    How about containers?

    One way to speed up web site loading is to load as few files as possible - so what about adding an extension to HTTP and HTML to allow a web site to specify files within a container (e.g. a tarball, or a ZIP), and to allow the browser to fetch the whole container?

    So even on a site like the Reg you could have a container with all the static stuff (images like the boffin graphic on this post), a container with all the JS and CSS (since that might change), and then the content. Like any other file, your browser would only re-fetch the image and scripting containers if they'd been modified, so they're fetched once and then the content can work with them.
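
    The "only if modified" half of that already exists in HTTP/1.1; what's missing is the container itself. A minimal sketch of the idea, assuming a hypothetical bundle URL and using only Python's standard library:

    ```python
    import urllib.error
    import urllib.request

    # Hypothetical bundle of a site's static assets; the URL is made up for illustration.
    BUNDLE_URL = "https://www.example.com/static-bundle.zip"

    # Timestamp remembered from the previous fetch, in HTTP-date format.
    last_fetched = "Mon, 26 Mar 2012 09:00:00 GMT"

    req = urllib.request.Request(BUNDLE_URL, headers={"If-Modified-Since": last_fetched})
    try:
        with urllib.request.urlopen(req) as resp:
            bundle = resp.read()  # 200 OK: the container changed, download it again
    except urllib.error.HTTPError as err:
        if err.code == 304:
            bundle = None         # 304 Not Modified: keep using the cached copy
        else:
            raise
    ```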

    1. Thomas 18
      Stop

      Re: How about containers?

      A lot of modern data formats are already compressed about as much as is possible, so zipping wouldn't save much space. Zipping text is great, but how much bandwidth is text compared to images/video?

      As for downloading all the images a site is likely to give you so you don't have to make repeat requests - that seems like a lot of wasted bandwidth, particularly on mobile, where I want just the bare minimum to get the page to display (given the slow speed of my provider, for one).

      1. Graham Bartlett

        Re: How about containers?

        I think he meant using zip/tar simply as a method for glomming a bunch of files together. Every request for a new file adds some additional traffic to manage that request, which isn't helping put your webpage up. If you can get a webpage with one request for a large-ish file, instead of a thousand requests for small files, you've saved yourself a decent chunk of data. Each request also takes some time to hit the server and come back, so it would speed things up too.
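
        A rough back-of-the-envelope illustration of the round-trip cost alone (the round-trip time and connection limit are assumed figures, not measurements):

        ```python
        # Assumed figures: 100 ms round-trip time, 6 parallel connections per host
        # (a common browser limit), no pipelining.
        rtt = 0.100        # seconds per round trip
        parallel = 6
        requests = 1000

        latency_many = requests / parallel * rtt   # ~16.7 s spent just waiting on round trips
        latency_one = 1 * rtt                      # ~0.1 s for a single large-ish fetch

        print(f"{requests} small requests: ~{latency_many:.1f} s of round-trip waiting")
        print(f"1 bundled request: ~{latency_one:.1f} s of round-trip waiting")
        ```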

        1. Stuart Halliday
          Thumb Up

          Re: How about containers?

          But if you zip up a number of compressed files, then you can't really display that web page progressively as it builds up. The browser would have to wait until the last byte had been downloaded before it could display the page.

          New protocol is the way to go.

    2. David Dawson
      Happy

      Re: How about containers?

      This is not too dissimilar to what actually happens, in effect.

      With HTTP pipelining and transparent compression, you end up with a single content stream that has been compressed.

      This is totally transparent to the user, the website and client-side developer tooling.

      Plus, it has the added advantage that files are available to render as soon as they have been downloaded.
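
      A minimal sketch of the "one connection, transparent compression" part using Python's standard library (http.client reuses a single keep-alive connection for back-to-back requests, though it doesn't do true pipelining; the host and paths are just examples):

      ```python
      import gzip
      import http.client

      conn = http.client.HTTPSConnection("www.example.com")   # one persistent TCP connection

      for path in ("/style.css", "/app.js"):                   # example resources
          conn.request("GET", path, headers={"Accept-Encoding": "gzip"})
          resp = conn.getresponse()
          body = resp.read()
          if resp.getheader("Content-Encoding") == "gzip":
              body = gzip.decompress(body)                     # compression is invisible above this point
          print(path, resp.status, len(body), "bytes")

      conn.close()
      ```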

    3. Ken Hagan Gold badge

      Re: How about containers?

      I think the low-hanging fruit are gone.

      HTTP is already designed to permit static content to be cached, so the "container" you are looking for already exists and is called a "file". If the prospect of lots of little files annoys you, there's another existing standard (MHTML) for storing multiple elements in a single file, which is supported to some degree by most browsers.
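
      For instance, a conditional GET with an ETag already lets a client revalidate one of those "files" for the price of a header exchange; a sketch, with the URL and tag value invented for illustration:

      ```python
      import urllib.error
      import urllib.request

      url = "https://www.example.com/logo.png"   # some piece of static content
      cached_etag = '"abc123"'                   # ETag remembered from an earlier response

      req = urllib.request.Request(url, headers={"If-None-Match": cached_etag})
      try:
          with urllib.request.urlopen(req) as resp:
              body = resp.read()                 # 200 OK: content changed, refresh the cache
      except urllib.error.HTTPError as err:
          if err.code == 304:
              body = None                        # 304 Not Modified: serve from the local cache
          else:
              raise
      ```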

      I also think the high-level fruit are likely to remain unreachable.

      Sites that insist on base64-encoding the client's autobiography in the URL, or putting time-dependent trivia on every page, or offering personalised content to each visitor, are broken at the application level (authorship) and cannot be fixed by changing the transport protocol.

      1. sabroni Silver badge

        Re: How about containers?

        Sites that offer personalised content to each visitor are broken at the application level? That's a pretty large chunk of the web that's broken....

        1. Ken Hagan Gold badge

          Re: How about containers?

          Less than you might think. Try surfing with cookies set to "bog off". You'll find that the non-personalised versions of most sites are fairly usable.

  3. Anonymous Coward
    FAIL

    Jean Paoli

    What the hell does someone who co-invented XML know about speed, efficiency or security? Good god almighty!

    1. Ken Hagan Gold badge

      Re: Jean Paoli

      If we are talking about XML's inventors, I think it is only fair to consider the context of the work.

      Compared to HTML, XML is a thing of beauty. Its regular structure makes it easier to parse. Its extensible structure means that this easiness ought to persist over a few generations. The fact that it has been abused more than Jimi Hendrix's guitar is no reflection on its original design.

  4. Ralph B
    Trollface

    New Request Methods?

    In addition to HEAD, GET, POST, PUT, etc., I guess we'll also have EMBRACE, EXTEND and EXTINGUISH.

  5. Vic

    Cue submarine patent wars in

    5... 4... 3... 2... 1...

    Vic.

  6. Gary F
    Megaphone

    About time. Now how about...

    SMTP is way overdue an overhaul. It's 30 years old and, despite a few additions to the protocol made 4 years ago, is completely open to abuse, relying on after-thoughts such as DomainKeys and SPF in an attempt to check sender authenticity. Those and the SMTP extensions aren't mandatory for most servers to send, relay or receive email. It needs a complete overhaul to build sender authentication and other security/encryption features into the protocol itself, with a kill date for SMTP v1.
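
    To the "after-thoughts" point: SPF is literally just a TXT record that a receiving server looks up and interprets, with nothing in SMTP itself requiring it. A sketch of that lookup, assuming the third-party dnspython package (the domain is an example):

    ```python
    import dns.resolver  # third-party: pip install dnspython

    def spf_record(domain):
        """Return the domain's published SPF policy, or None if there isn't one."""
        for rr in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(rr.strings).decode()
            if txt.startswith("v=spf1"):
                return txt
        return None

    print(spf_record("example.com"))  # e.g. "v=spf1 -all" for a domain that sends no mail
    ```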

    1. JanMeijer

      Re: About time. Now how about...

      Within the current SMTP protocol framework there are no obstacles to deploying a global X.509 certificate-authenticated MTA infrastructure using TLS. It's implemented by most popular MTAs, and it would allow only legit (white-listed) MTAs into the infrastructure.

      There's the little practical detail of the current problems with SSL CAs, and the slightly bigger problem of large-scale SSL certificate deployments being stubbornly difficult (or of the PKI-understanding mutation spreading through the human population more slowly than expected), but other than that, it's a piece of cake.
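
      The client half of that is already expressible today; a Python sketch of the idea, refusing to hand mail over unless the peer presents a certificate the local trust store accepts (the host and addresses are examples):

      ```python
      import smtplib
      import ssl

      ctx = ssl.create_default_context()   # verifies the certificate chain and host name by default

      with smtplib.SMTP("mx.example.net", 25) as smtp:
          smtp.starttls(context=ctx)       # raises an SSL error if the peer's certificate doesn't verify
          smtp.sendmail("alice@example.org", ["bob@example.net"],
                        "Subject: hello\r\n\r\nDelivered over an authenticated channel.")
      ```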

      What it'll give you, though, until large chunks of the Internet start refusing mail from other large chunks, is authenticated spam. Happy happy happy, joy joy joy. Maybe if harbouring spam senders earned a country sanctions or an invasion it might help, but even then I have my doubts.

      1. Ken Hagan Gold badge

        Re: Authenticated spam

        Divide and conquer.

        If I send all my authenticated mail through a relay (to whom I and lots of others pay a small fee, so it's a viable business model) and countersign it, *recipients* only have to whitelist the relay. (Recipients can complain to the relay people in the reasonable expectation that the relay people will pursue the matter rather than see their whitelisting threatened by a rogue customer.)

        For further simplification, relays can aggregate with other relays. Also, I (and they) may have deals with other relays at the same level of the hierarchy, to avoid a single point of failure.
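
        A toy sketch of how small the recipient's policy becomes under that scheme (the relay names and the notion of a verified countersigning domain are hypothetical):

        ```python
        # Hypothetical: domains of paid relays whose countersignatures we trust.
        TRUSTED_RELAYS = {"relay-a.example", "relay-b.example"}

        def accept(countersigner):
            """Accept mail only if its verified countersignature comes from a whitelisted relay."""
            return countersigner in TRUSTED_RELAYS

        print(accept("relay-a.example"))      # True
        print(accept("spamcannon.example"))   # False
        ```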

