HTTP/2 spec gets green light: Faster web or needless complexity?

The Internet Engineering Steering Group (IESG) has approved the specifications for HTTP/2 and HPACK (a compression format for HTTP/2 header fields), says working group chair Mark Nottingham. HTTP/2 is based on SPDY, a Google-developed protocol designed to speed up web page loading by compressing the content and reducing the …

  1. GreggS

    yeah but

    Does it load my grumpy cat videos any quicker?

    1. Anonymous Coward
      Anonymous Coward

      Re: yeah but

      So is that the new "does it run Doom/Linux" question? What a dreadful time we live in.

      1. Anonymous Coward
        Anonymous Coward

        Re: yeah but

        Oh please. Everyone knows that Crysis is the benchmark....

        1. Destroy All Monsters Silver badge

          Re: yeah but

          Quantitative Easing is the answer to everything. Even to bandwidth problems.

  2. Nate Amsden

    i can see myself

    using http/1.1 for the next decade. The bottleneck in my experience is in the apps, not in the network or protocols.

    I'm sure for super-optimized people like Google etc. that is not the case, but it seems to be for most of the folks out there.

    1. bigtimehustler

      Re: i can see myself

      Well, as an end user you won't have the choice or know either way. If you're using a modern browser, once a web server supports it, you get your content delivered the new way automatically.

      1. Anonymous Coward
        Anonymous Coward

        Re: i can see myself

        You may not even have a choice as a webmaster either. If your host decides to update their HTTP server to something that supports HTTP/2.0, it may happen without you even noticing.

    2. Roo
      Windows

      Re: i can see myself

      "The bottleneck in my experience is in the apps, not in the network or protocols."

      My experience differs - but my perspective is that of someone who has been paid money to write REST services. Here are some things where HTTP 2.x is likely to help me write more efficient REST-based systems:

      1) It's not uncommon for REST services to run out of file descriptors before they run out of compute & memory. HTTP 2.x will definitely help with this (see the sketch after this list)...

      2) Where compute power is at a premium you can burn hundreds of thousands of cycles processing the headers alone, which really kicks you in the nuts when you are trying to address thousands of small objects (this is the majority of cases in my experience)... HTTP 2.x's use of a binary format will improve this a little.

      3) head-of-line blocking really hurts performance in a lot of REST apps. The classic work-around is to address multiple addressable objects with one request, and then overload the HTTP response codes and bodies in weird and wonderful ways - which breaks the REST model. This very day some poor sod was forced to explain why they had decided to return a 200 when their now-non-REST request partially completed... HTTP 2.x's multiplexed connection will bin the head-of-line crap and allow us to spam the server and let it sort out the best order in which to process the requests - rather than slavishly processing them in FIFO order. People reckoned that out-of-order execution was a good thing for processors & hard drives - it's a win for REST services too.

      4) header compression. K, you don't give a stuff about network bandwidth because you can throw wider pipes at it as time goes by, but the thing is wider pipes don't automatically reduce latency. Reducing the amount of data you have to transfer & the amount of effort to parse that data *does* reduce latency. Another win for HTTP 2.x.
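
      To put points 1 and 3 in concrete terms, here's a rough sketch (Go, purely illustrative - the host and path are made up) of the pattern that gets cheaper: hundreds of small GETs in flight at once which, against an HTTP/2 server, ride as streams over what is typically a single multiplexed TLS connection instead of one socket apiece:

        package main

        import (
            "fmt"
            "net/http"
            "sync"
            "sync/atomic"
        )

        func main() {
            // net/http negotiates HTTP/2 via ALPN when talking TLS to a capable
            // server, so these concurrent GETs are carried as separate streams
            // over (typically) one multiplexed connection instead of each burning
            // its own socket and file descriptor - and the server is free to
            // answer them in whatever order suits it, not strict FIFO.
            client := &http.Client{}
            var wg sync.WaitGroup
            var okCount int64
            for i := 0; i < 200; i++ {
                wg.Add(1)
                go func(i int) {
                    defer wg.Done()
                    resp, err := client.Get(fmt.Sprintf("https://api.example.com/objects/%d", i))
                    if err != nil {
                        return
                    }
                    resp.Body.Close()
                    if resp.StatusCode == http.StatusOK {
                        atomic.AddInt64(&okCount, 1)
                    }
                }(i)
            }
            wg.Wait()
            fmt.Println(okCount, "OK responses")
        }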

      I won't begrudge you looking at kitty pictures with HTTP/1.x, by the same token it would be nice if the wannabe luddites out there spared a thought for the poor sods (like myself) who are forced to use HTTP in high performance distributed apps. ;)

      Anyway, we'll see how HTTP 2.x works out; hopefully my lack of cynicism will be rewarded just this once...

    3. streaky

      Re: i can see myself

      It's in the protocol more than the apps. It doesn't take long staring at the output of firebug to see the major speed issue with most sites - optimised or otherwise - is protocol related. Also not for nothing but you'll use it when your HTTPd supports it; the end, no questions. And that will be soon, very soon.

      1. Destroy All Monsters Silver badge

        Re: i can see myself

        @Roo : The real question is: Are you using Erlang?

        1. Roo
          Windows

          Re: i can see myself

          "@Roo : The real question is: Are you using Erlang?"

          No I am not using Erlang... It wouldn't make any difference if it were the right tool for the job because the people with the gold specify what tools their minions can/can't use - despite having zero knowledge, experience or interest in the mechanics of distributed apps. I offer my advice, and then I do what I'm asked to do. I write more efficient code more quickly than the Java & thread junkies, and I get paid for doing it, it keeps the family clothed and fed and it helps minimise the amount of time I waste giving unwanted advice on parallel/distributed apps. :)

    4. Charlie Clark Silver badge

      Re: i can see myself

      using http/1.1 for the next decade.

      Which is to suggest that there is nothing wrong with HTTP 1.1.

      This is quite simply not true. However, as long as it was good enough nobody could be bothered touching it. Google worked on SPDY, submitted it to the IETF and agreed to changes even though SPDY was already getting adoption as a proprietary protocol. I'm pretty sure the work isn't finished with 2.0, but it is important that standards bodies are open to suggestions from outside.

      Kamp is not the only one involved: Mark Nottingham has been working with HTTP for a long time as have others on the committee. The best way to deal with any concerns is to do what Google did and come up with some working code and a relevant specification.

  3. Ian Michael Gumby

    This says it all...

    "The reason HTTP/2.0 does not improve privacy is that the big corporate backers have built their business model on top of the lack of privacy. They are very upset about NSA spying on just about everybody in the entire world, but they do not want to do anything that prevents them from doing the same thing. The proponents of HTTP/2.0 are also trying to use it as a lever for the "SSL anywhere" agenda, despite the fact that many HTTP applications have no need for, no desire for, or may even be legally banned from using encryption."

    So true.

    And if we did remove the cookies and put the onus of privacy back on the user, it would force companies to have an opt-in model. The larger issue is that the sucking sound would be all those digital marketing companies losing the bulk of their business.

    1. Anonymous Coward
      Anonymous Coward

      Re: This says it all...

      Look at the world wide web rampages companies go on to fight for copyrights to music and rounded corners. Now look for the same scale rampages from any company to fight for privacy or ANY other ethical concerns......doesn't exist. How about HTTPrivacy/2.no

      We need more Snow.

      1. Destroy All Monsters Silver badge

        Re: This says it all...

        WHERE ARE THE SNOWDENS OF YESTERYEAR?

        1. phil dude
          Black Helicopters

          Re: This says it all...

          An unmarked grave in the desert?

          P.

          1. Ian Michael Gumby

            Re: This says it all...

            "An unmarked grave in the desert?"

            No, if the rumors my friend heard were true, Snowden was FSB so that graveyard would be in Siberia, Not a salt flat. (Fixed the desert part for you.)

        2. Donkey Molestor X

          Re: This says it all...

          It doesn't matter where the Snowdens are. You saw what everybody did in response. They made some brief, loud disapproving noises and then slumped back into their Barca-Loungers and La-Z-Boys and shoved more episodes of Amurican Idull and Big Brother into their flaccid brain-holes.

          1. Ian Michael Gumby
            Big Brother

            Re: This says it all...

            "It doesn't matter where the Snowdens are. You saw what everybody did in response. They made some brief, loud disapproving noises and then slumped back into their Barca-Loungers and La-Z-Boys and shoved more episodes of Amurican Idull and Big Brother into their flaccid brain-holes."

            The irony is that Google, FB and others are doing what you have accused the US Government of doing. But you seem ok with it.

            So who really has the flaccid brain?

    2. Anonymous Coward
      Anonymous Coward

      Re: This says it all...

      So, how do you propose HTTP/2 should improve privacy?

    3. streaky

      Re: This says it all...

      Cookies aren't actually the big privacy issue, if this isn't clear to people by now we have serious problems. Not for nothing but removing cookies would put us back to mud huts regardless.

  4. ppawel
    Facepalm

    Yeah, let's switch to binary protocol and compress those headers...

    ...all for 15% gain in page load speed by Google web browser from Google servers.

    Meanwhile the real problems with HTTP will persist for decades to come thanks to this lame attempt at a major revision of the standard, and they will probably be much harder to tackle in the future because of the added complexity.

    1. Anonymous Coward
      Anonymous Coward

      Re: Yeah, let's switch to binary protocol and compress those headers...

      No. Adoption is already happening: https://github.com/http2/http2-spec/wiki/Implementations.

  5. Anonymous Coward
    Anonymous Coward

    Debugging

    It sounds like it will be a nightmare to tease out the reasons for a page loading slowly. Network protocol analysis has to handle out-of-sequence packets and various other real world distortions of the traffic stream content. If the data starts to become too structured then it will be hard to fill in the gaps. Too many protocol analysers in the past have relied on perfect laboratory conditions where they saw everything from the start of a connection - and in the right order.

    Multiple parallel connections, efficiently re-used for the next GET request, tended to overcome network latency problems. Pipe-lining on one TCP connection is unlikely to overcome large round-trip latency.

    1. Anonymous Bullard

      Re: Debugging

      Pipe-lining on one TCP connection is unlikely to overcome large round-trip latency.

      Yes it is (and HTTP/2 is multiplexed rather than pipe-lined - requests can be in parallel)

      With HTTP/2 the server can pre-empt the next requests and push them.. "Here's index.html, and I know what you'll ask for next, so don't bother: here's style.css, favicon.ico, and funny-cat.jpg."

      Also, there is one TCP handshake (per server), and the headers are binary and compressed.
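
      A minimal sketch of the push side of that, for the curious (Go's net/http used purely as an illustration; the file names and cert paths here are made up):

        package main

        import (
            "log"
            "net/http"
        )

        func pushThenServe(w http.ResponseWriter, r *http.Request) {
            if r.URL.Path == "/" {
                // Over HTTP/2 the ResponseWriter also implements http.Pusher,
                // so the server can start sending the assets the page will need
                // before the browser has asked for them.
                if pusher, ok := w.(http.Pusher); ok {
                    for _, res := range []string{"/style.css", "/funny-cat.jpg"} {
                        if err := pusher.Push(res, nil); err != nil {
                            log.Printf("push %s: %v", res, err)
                        }
                    }
                }
            }
            // Both the original request and the synthetic pushed requests end
            // up here and are served from the local directory.
            http.FileServer(http.Dir(".")).ServeHTTP(w, r)
        }

        func main() {
            http.HandleFunc("/", pushThenServe)
            // net/http negotiates HTTP/2 automatically when TLS is in use.
            log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
        }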

      1. Anonymous Coward
        Anonymous Coward

        Re: Debugging

        God help us if browsers are still speculatively requesting /favicon.ico because of some inherited braindead implementation of bookmark icons by Internet Exploder in the late '90s. For -every- -single- -request-. I'm looking at you, Firefox. Why, when I tell you that /favicon.ico does not exist, and will never exist, do you ask again a few seconds later, directly ignoring the 410 Gone status I returned? What do you think 410 is supposed to be used for, if not for a purpose exactly like that?
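
        For anyone else fighting the same battle, a minimal sketch of the server side of it (Go, purely as an illustration; the port is arbitrary) - a permanent 410 with a long cache lifetime, for whatever good the browser decides to let either do:

          package main

          import (
              "log"
              "net/http"
          )

          func main() {
              mux := http.NewServeMux()
              mux.HandleFunc("/favicon.ico", func(w http.ResponseWriter, r *http.Request) {
                  // 410 Gone plus a year-long cache lifetime: "it isn't here and
                  // it isn't coming back, please stop asking".
                  w.Header().Set("Cache-Control", "public, max-age=31536000")
                  http.Error(w, "gone", http.StatusGone)
              })
              mux.Handle("/", http.FileServer(http.Dir(".")))
              log.Fatal(http.ListenAndServe(":8080", mux))
          }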

        1. Anonymous Coward
          Anonymous Coward

          Re: Debugging

          What about just returning a 200 with zero content? (an empty file)

          1. Anonymous Coward
            Anonymous Coward

            Re: Debugging

            "What about just returning a 200 with zero content?"

            The client would not be able to distinguish that from a file whose new version was now zero length.

            A 304 "Not Modified" would seem less ambiguous.

        2. streaky

          Re: Debugging

          "God help us if browsers are still speculatively requesting /favicon.ico"

          They are.

      2. P. Lee

        Re: Debugging

        >"Here's index.html, and I know what you'll ask for next, so don't bother: here's style.css, favicon.ico, and funny-cat.jpg."

        The server could read all the content tags in a page and serve them up from a single page request, but I don't really want that. That means my browser doesn't get the chance to filter out things it doesn't want, such as flash, and save a bit of bandwidth by not asking for it at all. I might have style.css already cached too.

        I don't want everything encrypted either. It isn't required. I like text rather than binary formats so I can see what's going on. I like it for the same reason I prefer text config files to the Windows registry.

      3. Anonymous Coward
        Anonymous Coward

        Re: Debugging

        "HTTP/2 is multiplexed rather than pipe-lined"

        HTTP/1.1 allowed the client to send several GET requests on the same TCP connection without waiting for their respective data to be returned. It was the client's responsibility to then work out where each request's data started and ended. That was called "pipe-lining".

        Has HTTP/2 introduced a different mechanism so that the returning stream has labelled fragments of data for different requests intermingled - rather than complete files nose-to-tail?

        1. Anonymous Bullard

          Re: Debugging

          Yes, it transfers in frames.
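
          Specifically, every frame starts with a fixed nine-octet header (per RFC 7540): a 24-bit payload length, a frame type, flags, and a 31-bit stream identifier saying which request/response the bytes belong to - which is what lets the two sides interleave responses rather than send complete files nose-to-tail. A toy decoder, as an illustrative sketch (Go; the sample bytes are made up):

            package main

            import (
                "encoding/binary"
                "fmt"
            )

            // frameHeader mirrors the 9-octet HTTP/2 frame header from RFC 7540:
            // 24-bit payload length, 8-bit type, 8-bit flags, 31-bit stream ID.
            type frameHeader struct {
                Length   uint32
                Type     uint8
                Flags    uint8
                StreamID uint32
            }

            func parseFrameHeader(b [9]byte) frameHeader {
                return frameHeader{
                    Length: uint32(b[0])<<16 | uint32(b[1])<<8 | uint32(b[2]),
                    Type:   b[3],
                    Flags:  b[4],
                    // The most significant bit of the stream ID is reserved.
                    StreamID: binary.BigEndian.Uint32(b[5:9]) & 0x7fffffff,
                }
            }

            func main() {
                // A 16-byte DATA frame (type 0x0) on stream 3 with END_STREAM (0x1) set.
                raw := [9]byte{0x00, 0x00, 0x10, 0x00, 0x01, 0x00, 0x00, 0x00, 0x03}
                fmt.Printf("%+v\n", parseFrameHeader(raw))
            }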

  6. Donkey Molestor X

    You guys are smoking moldy banana peels if you take anything Poul-Henning Kamp says seriously. Guy thinks he's the Yngwee Malmsteen of data structures just because he found (or rediscovered) a novel tree structure to use in Varnish to reduce CPU cache thrashing in ten year old architectures. He's just upset because nginx slaps varnish's balls around like a rabid howler monkey and he won't do anything about it. (Doubly hilarious because nginx isn't a biiig evul corporashun for sheeple either. They're just so much better than him at everything he's ever wanted to do.)

    To wit: https://bjornjohansen.no/caching-varnish-or-nginx

    From the donkey's mouth: https://www.varnish-cache.org/docs/trunk/phk/ssl.html

    1. Christian Berger

      "From the donkey's mouth: https://www.varnish-cache.org/docs/trunk/phk/ssl.html"

      Well actually what he was saying there in 2011 is that TLS libraries are in a really bad state. Heartbleed has shown the world that this is actually the case. TLS libraries _are_ in fact in a very sorry state. So at least in this case he said something relatively insightful.

      Of course that doesn't say anything about his opinion on HTTP/2. You may not agree with it, but if HTTP/2 is indeed more complex than HTTP/1.1, then I have to agree with him, as complexity is a _serious_ problem in IT, and it _must_ be avoided wherever possible.

      1. Anonymous Coward
        Anonymous Coward

        But there's such a thing as necessary complexity. Like with prioritization systems versus a simple FIFO queue. IOW, sometimes it can't be avoided, because the status quo complicates itself so that you're damned either way. That's what I seem to be reading about right now: that HTTP/1.1 is getting long in the tooth for today's web use.

    2. Jamie Jones Silver badge

      "They're just so much better than him at everything he's ever wanted to do.)

      To wit: https://bjornjohansen.no/caching-varnish-or-nginx

      Hmmmm:

      502 Bad Gateway

      nginx/1.7.9

  7. Anonymous Coward
    Anonymous Coward

    What Kamp?

    As someone who's implemented SPDY and HTTP/2 drafts in custom web servers since the early days, I can say that HTTP/2 (and SPDY before it) seriously isn't that complicated. It brings a very much welcomed speed boost for anybody developing websites by reducing TCP connection overhead via multiplexing; that in itself would be a welcome addition even if none of the other features were included.

    I'm not sure what Kamp's proposal (or beef) is, but from the linked text, Kamp appears to be doing little more than complaining without giving any concrete suggestions for solving the privacy problem. I question whether he even knows what he's talking about because:

    1) Cookies are simply HTTP header additions - not actually part of the core specification - designed to implement state in a stateless protocol (see the sketch after this list).

    2) "client controlled session identifier" - means absolutely nothing. You have the option to remove/ignore cookies on the client end as it is. If he meant incorporating a state into the stateless protocol by allowing clients to initiate session id, then that's basically (a) not a good idea (why force a stateless protocol to become stateful?) and (b) not doable because of session id collision issues and it does little to solve "privacy" anyway, you still have states.

    So yeah, wtf Kamp?

    1. Christian Berger

      Re: What Kamp?

      Well, one could argue that HTTP should never have allowed header additions like cookies, and no, controlling cookies on the client side doesn't work, as it's too hard for the end user to do.

      Plus cookies have proven to be a bad way to introduce state to HTTP. It seems like most web developers get it wrong when they try.

      And that's one of the main problems with HTTP.

      On the one hand, we want it as a stateless "document browsing" protocol, where each web page load is separate and has nothing in common with the other ones.

      On the other hand, we want web applications where we want to keep state persistent over a user session.

      Those are completely different use cases and I'd personally prefer if we split HTTP into 2 protocols, one for "object retrieval" and one for "web applications". Each one perhaps even with a different set of protocols on top of it. Websocket actually is a nice idea in that regard.

      1. Anonymous Coward
        Anonymous Coward

        Re: What Kamp?

        IOW, you'd rather we move all web apps into a dedicated protocol, sort of like a graphical terminal: VNC or something of the like.

        1. Christian Berger

          Re: What Kamp?

          "IOW, you'd rather we move all web apps into a dedicated protocol, sort of like a graphical terminal: VNC or something of the like."

          That would indeed be a very sensible thing to do, as it would also create a boundary between authentication, session handling and the rest of the application. Plus you could do amazing stuff with a minimal amount of code, and since "remote frame buffer" is a fairly well defined set of requirements, we could even support multiple protocols rather easily.

          1. Destroy All Monsters Silver badge
            Holmes

            The Nietzschean eternal rebirth

            That would indeed be a very sensible thing to do, as it would also create a boundary between authentication, session handling and the rest of the application. Plus you could do amazing stuff with a minimal amount of code, and since "remote frame buffer" is a fairly well defined set of requirements, we could even support multiple protocols rather easily.

            Will "X Terminals" be back? Yes. Yes, they will. Already, we have "remote desktops in the cloud".

            We should have stayed with the NeWS of the early 90s. Imagine all that JavaScript horror which could have been avoided (we are now at the maximum entropy NodeJS stage of course, which sounds like someone is implementing a real-world Monty Python sketch for confused people just out of uni, but so be it!)

      2. streaky

        Re: What Kamp?

        "proven to be a bad way to introduce state to HTTP"

        Cookies shouldn't be used to introduce state, they should be used to reference state - that's not the same thing. Because somebody uses something in a nefarious manner isn't automatic cause for a ban - if its *only* purpose were nefarious activity I'm sure you'd get agreement, but it isn't, so you won't from anybody sensible. Until somebody comes up with a better idea than cookies (and they won't, because the only reliable alternative is some sort of unique identifier like a cert) they're staying, the end.
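
        To make that distinction concrete, a minimal sketch (Go, purely illustrative; the in-memory map is obviously not production session storage) where the cookie carries nothing but an opaque ID and the actual state stays on the server:

          package main

          import (
              "crypto/rand"
              "encoding/hex"
              "fmt"
              "log"
              "net/http"
              "sync"
          )

          var (
              mu       sync.Mutex
              sessions = map[string]int{} // opaque ID -> server-side state (a visit counter here)
          )

          func newID() string {
              b := make([]byte, 16)
              if _, err := rand.Read(b); err != nil {
                  panic(err)
              }
              return hex.EncodeToString(b)
          }

          func handler(w http.ResponseWriter, r *http.Request) {
              c, err := r.Cookie("sid")
              if err != nil {
                  // No cookie yet: hand the client an opaque reference, nothing more.
                  c = &http.Cookie{Name: "sid", Value: newID(), HttpOnly: true}
                  http.SetCookie(w, c)
              }
              mu.Lock()
              sessions[c.Value]++
              visits := sessions[c.Value]
              mu.Unlock()
              fmt.Fprintf(w, "visit number %d for this session\n", visits)
          }

          func main() {
              http.HandleFunc("/", handler)
              log.Fatal(http.ListenAndServe(":8080", nil))
          }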

      3. ppawel

        Re: What Kamp?

        And this is exactly one of the problems they should have tackled with HTTP/2, instead of focusing on the wire protocol, which is a minor improvement at a major (partly future) cost.

  8. Christian Berger

    Complexity leads to bugs which lead to security problems

    Seriously, we already had HTTP-related bugs in both clients and servers which were security critical. And I'm not talking about PHP bugs or cross-site scripting or anything application-layer - actual HTTP bugs in software like curl.

    We live in an age where our computers rival living beings in complexity. We need to finally stop adding more and more complexity, particularly here, where we replace a decent and already too complex protocol with one that's even more complex... for no sensible reason.

    I wonder if they at least fixed the problem of high latency connections hurting performance. That's a serious issue for the future. While we may get Gigabit connections to our homes, the world isn't going to shrink below 62 ms... even if packets would go the most direct surface route.

    1. Anonymous Coward
      Anonymous Coward

      Re: Complexity leads to bugs which lead to security problems

      neutrino routers

    2. Destroy All Monsters Silver badge
      Holmes

      Re: Complexity leads to bugs which lead to security problems

      We live in an age where our computers rival living beings in complexity.

      Not really.

      It's not like protocol handlers cannot be written with the proof they have been implemented according to specification (of course the spec may be wrong, but that's the way it goes). Protocols are never really complex in any meaningful sense.

      It's people faffing around with C and handwritten manglers like medieval hunchbacked differently-abled persons who are the real problem.

      Coding in Mercury? Hell yeah.

  9. DanceMan

    So this protocol aggregates all the connection requests together into one. Sounds like an end run around adblockers. Promoted by a company that lives on ad revenues.

    1. Anonymous Coward
      Anonymous Coward

      Wrong. It's one connection per server. Ads are normally served from a different server, and the ad blockers will still be blocking those requests.

      1. Anonymous Coward
        Anonymous Coward

        Not necessarily. One way to get around an ad blocker is to serve the ads from the same server as the content, making them part and parcel and forcing you into a take it or leave it situation.

        1. Anonymous Coward
          Anonymous Coward

          serve the ads from the same server as the content

          So why aren't they doing that already?

          1. Anonymous Coward
            Anonymous Coward

            Bit of an edge case since that means the webmaster lets the ad firm upload stuff to their server. Trust and security implications.

            1. beast666

              Not in my lifetime!

            2. Anonymous Coward
              Anonymous Coward

              Bit of an edge case since that means the webmaster lets the ad firm upload stuff to their server. Trust and security implications.

              Then have the webmaster proxy the data from the ad-firm's site?

              Have the webmaster download a package of ads to serve?

              There are technical solutions to this which do not compromise security.

          2. Charlie Clark Silver badge

            serve the ads from the same server as the content

            So why aren't they doing that already?

            Two reasons: data aggregation and latency. Data aggregation: if Doubleclick can track users across websites with its own cookies, it's arguably in a better position to serve up better-targeted adverts, which helps drive up their price. Latency: Doubleclick will hold a realtime auction for the ads based on the personal data it thinks it has; this will be faster if it already has a dedicated slot in the page and can serve directly from its servers, otherwise it has to arrange for the advert to be proxied by the original website.

            1. Charles 9

              But like I said, the proxy offers the big advantage (especially these days) that it's practically unblockable (block the ad, block the content). I mean, how many of us keep Doubleclick on a NoScript Untrusted list or the like? I would think Doubleclick would take a delay if it meant actually getting through. I'm surprised there hasn't been this kind of proxy arranged already on a "must-provide" basis if the webmaster expects any kind of compensation.

        2. phuzz Silver badge
          Stop

          I'm not sure what adblockers you've all been using but AdBlock Plus is quite happy to block all content from website.com/ads while allowing content from website.com.

          Of course, they can mix up the adverts with legitimate images, but at that point why are you even visiting that website?

          1. Charles 9

            "Of course, they can mix up the adverts with legitimate images, but at that point why are you even visiting that website?"

            Because it's a niche website that serves exclusive content like old/obscure device drivers from companies that no longer exist or specific genres of media that are off the mainstream. This happens a lot more often than you think. Either that or the Ad-Blocker-Blockers that detect ABP and deny you access until you turn it off, TIOLI.

      2. Vector
        Joke

        Yeah, one connection per server. So that'll cut down on connection requests in modern monetized web pages by maybe 10%?

  10. Anonymous Coward
    Anonymous Coward

    Another trendy solution

    in need of a problem.

  11. Douchus McBagg

    trendy's...

    fired up the old SGI O2 and Amiga3000 last night to make sure they could boot, connect, and were happy on the network. least I'll have something running when the revolution comes.

    first against the wall etc. etc.

  12. Henry Wertz 1 Gold badge

    No reason for policy changes.

    I have to disagree with Poul-Henning Kamp to some extent. I can't argue with his statements about HTTP/2 breaking layering and so on -- I don't know if it does or not. I would think it'd certainly be harder to accelerate HTTP/2 than HTTP/1.1 due to the multiplexing and all that.

    But, as HTTP is a *transport* protocol, I think it is quite out of scope for HTTP/2 to require encryption, and also out of scope for it to drop support for cookies in favor of something else. I do favor an increase in the use of encryption, but as an updated transport protocol I simply don't see it as HTTP/2's place to force policy changes, and if it had, I think it would have limited its adoption considerably (after all, who wants to rewrite some third-party code that uses a cookie or two?)
