I understand the need for revising HTTP, but I'm going to miss the plain text protocol.
Tiny steps: HTTP 2.0 WG looks for consensus
The long-awaited HTTP 2.0 protocol has inched a step further towards completion, with the IETF issuing a last call on the two key documents of the spec, with comments to end on 1 September. The two drafts in question are the core HTTP 2.0 document, and the HPACK header compression format. It's been a long road to get to HTTP …
COMMENTS
Tuesday 5th August 2014 09:50 GMT A Known Coward
Re: Mandatory encryption?
It's going to be interesting to see how that encryption works. TLS requires the use of trusted certificates, certificates that cost a hefty amount per year for an individual running a small two-page website.
If HTTP 2.0 isn't going to create a two-tier internet, one for the masses which provides no default protection against snooping (HTTP 1.0) and another for corporations which does (HTTP 2.0), then they'll also need to rethink the certificate system, at the very least by making cheap ($1) certificates possible. Perhaps certificates should be issued along with domain names as a complete package: your domain registrar issues a basic cert; they have all your details anyway and know you are the registered owner of the domain.
Tuesday 5th August 2014 13:13 GMT Anonymous Coward
Re: Mandatory encryption?
Damn. Thank your friend for bottling it then. Mandatory encryption was part of the original proposal; it should have remained mandatory.
Still, it seems some working group members have decided to ignore those working against the public interest: "However, some implementations have stated that they will only support HTTP/2 when it is used over an encrypted connection."
Tuesday 5th August 2014 10:26 GMT batfastad
Self-signed
The requirement of SSL/TLS on HTTP 2 connections will certainly mean a lot of people won't bother adopting it because of cert costs, as you'd basically need a cert per vhost.
But the main benefits of HTTP 2 are performance and security. If you need the performance benefits of HTTP 2 then chances are you can afford to spring for a £7/year RapidSSL cert. Lil Bobby Website running his own little Wordpress, well, he can carry on as he is. Let's face it, HTTP 1.1 won't be going away for a long time.
From the ops side, SSL/TLS theoretically requires a dedicated IP address per certificate unless you use SNI. Since most browsers have supported SNI for years (FF since v3 or v4), with IE on WinXP being the biggest group that doesn't, this is no big deal. All that happens anyway is the user gets a cert warning, and if they're using IE on Win XP they'll see a lot of those and plenty of other warnings/broken sites around the place anyway. So that's pretty much a non-issue, and using SNI removes a lot of complexity.
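For the curious, here's a minimal Python sketch of the client side of SNI (the hostname is just a placeholder): the requested name travels in the ClientHello, which is what lets a server on a single IP pick the right certificate per vhost.

    import socket
    import ssl

    hostname = "example.com"                    # placeholder vhost name
    ctx = ssl.create_default_context()          # verifies the chain and the hostname

    with socket.create_connection((hostname, 443)) as sock:
        # server_hostname puts the name in the ClientHello (SNI), so the server
        # can choose the matching certificate before the handshake completes.
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            print(tls.version(), tls.getpeercert()["subject"])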
Tuesday 5th August 2014 20:06 GMT Anonymous Coward
Re: Self-signed
Browsers are not really the issue - the whole point of a browser is that there's usually someone there to respond to a warning if it's a problem (whether they understand the warning is another issue).
But HTTP is now part of the plumbing for all sorts of inter-process communications. They may well benefit from various aspects of HTTP 2.0, but they'll also go wrong in ever harder-to-trace ways when encryption is added to the mix.
Wednesday 6th August 2014 20:29 GMT Michael Wojcik
Re: Self-signed
Let's face it, HTTP 1.1 won't be going away for a long time.
Hell, HTTP/1.0 isn't going away anytime soon. Huge installed base (including a ton of embedded applications), non-proprietary, relatively well understood ... there's little incentive for most people to move away from 1.0 and 1.1.
And how many HTTP servers and user agents use all the performance features of 1.1? Last I checked, many implementations didn't pipeline requests, or use 100-Continue (at least not in an intelligent fashion), or If-Modified-Since, or sensible cache controls, &c.
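A rough Python sketch of the conditional-GET part, for what it's worth (hostname, path and date are placeholders; a real client would replay the Last-Modified value from an earlier response):

    import http.client

    conn = http.client.HTTPConnection("example.com")   # placeholder host
    conn.request("GET", "/index.html", headers={
        "If-Modified-Since": "Tue, 05 Aug 2014 00:00:00 GMT",
        "Connection": "keep-alive",                     # reuse the TCP connection
    })
    resp = conn.getresponse()
    if resp.status == 304:
        print("Not modified - serve from the local cache, no body transferred")
    else:
        print(resp.status, len(resp.read()), "bytes,", resp.getheader("Last-Modified"))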
I use HTTP all the time. I've written multiple HTTP implementations, both client- and server-side. I don't find HTTP/2.0 interesting at this point, and I don't expect to hear requests from my customers for it anytime soon. There are some big industry players who want it, and some techies who want it for ideological reasons. That's fine, and more power to 'em. But I'm not clambering aboard this particular bandwagon.
Tuesday 5th August 2014 15:33 GMT Christian Berger
Encryption with SSL is problematic
We all know that SSL is broken in so many ways that we actually should just abandon it and replace it with something more like SSH. Mandating SSL will only slow down that process, plus it'll cause lots of problems.
I do not see the point of compressing headers. The web isn't slow because we use a text-based protocol that's uncompressed. The web is slow because idiotic web designers spread their content across dozens of domains (causing DNS queries) and bloat the headers with cookies.
Wednesday 6th August 2014 07:37 GMT Christian Berger
Re: Encryption with SSL is problematic
I don't see how that would work. TCP is rather good at streaming data over long-latency connections. You just push in your data and it'll come out with the latency of the line. Having a bit more or less data wouldn't change the latency... Besides, there are WebSockets for that kind of thing.
Wednesday 6th August 2014 19:58 GMT Roo
Re: Encryption with SSL is problematic @ CB
"You just push in your data and it'll come out with the latency of the line."
Encoding and decoding the message is > 0 cost, and I was careful to specify "local" as well. A reduction in codec cost would yield benefits in power consumption AND latency, so there would be more cases where you can provide a ubiquitous web API instead of something more specialised and prone to misunderstanding + failure. That's all speculation and dreams until it hits the metal though. :)
Wednesday 6th August 2014 20:22 GMT Michael Wojcik
Re: Encryption with SSL is problematic
Compressed headers may reduce the round-trip latency for local REST services
I think that's dubious. It would only help where you have many requests over persistent connections; where the size of the HTTP header is significant compared to the size of the message-body; and where transmission time is significant in relation both to total turn-around time and to encoding and parsing time.
Amdahl's Law says you're not going to get much performance benefit by reducing header transmission time unless it's a big chunk of overall time.
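A quick back-of-envelope in Python, with numbers that are made up purely for illustration:

    def overall_speedup(header_fraction, header_speedup):
        # Amdahl's Law: only the header-transfer share of total time gets faster.
        return 1.0 / ((1.0 - header_fraction) + header_fraction / header_speedup)

    # Headers take 10% of turn-around time and compression shrinks them 5x:
    print(overall_speedup(0.10, 5.0))   # ~1.09, i.e. under 9% faster overall
    # Even infinitely good compression is capped at 1 / (1 - 0.10):
    print(1.0 / (1.0 - 0.10))           # ~1.11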
Even with existing cases where header transmission time is significant, you're likely to lose most or all of the savings with encoding overhead. Particularly if you're using a high-level language that wants to do a bunch of allocation and other housekeeping under the covers as you compress and expand those headers.
And, frankly, if you're going from unencrypted requests (because these are local, yeah?) to encrypted ones (because the implementation you're using requires it, as some apparently intend to do), you've lost any benefit you may have had. Data copying alone is going to steal any savings from compressing headers.
Thursday 7th August 2014 13:54 GMT Roo
Re: Encryption with SSL is problematic
"I think that's dubious."
I can live with that. :)
"It would only help where you have many requests over persistent connections; where the size of the HTTP header is significant compared to the size of the message-body;"
In my experience that is not as rare as you may think with in-house REST services. 'Real-time' sensor data can generate a lot of header and not much data, and it's something we'll get more of with toasters acquiring internet connections.
"and where transmission time is significant in relation both to total turn-around time and to encoding and parsing time."
There are benefits to be had in terms of less traffic on slow main memory & I/O busses, as well as reduced cache pressure. Not that many people seem to care about that icky hardware stuff anymore... Can't entirely blame them if they're running code on a JVM that is running under a VM...
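For a sense of scale, here's a contrived Python sketch (every name and value in it is invented) of the kind of header-heavy request I mean:

    body = b'{"sensor":"42","temp":21.5}'       # the entire payload: 27 bytes

    header = (
        b"POST /readings HTTP/1.1\r\n"
        b"Host: sensors.internal.example\r\n"
        b"User-Agent: toaster-fw/1.0\r\n"
        b"Content-Type: application/json\r\n"
        b"Accept: application/json\r\n"
        b"Authorization: Bearer 0123456789abcdef0123456789abcdef\r\n"
        b"Content-Length: " + str(len(body)).encode() + b"\r\n"
        b"\r\n"
    )

    # The plain-text header is several times the size of the payload it carries.
    print(len(header), "header bytes for", len(body), "body bytes",
          "(%.1fx)" % (len(header) / len(body)))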
Wednesday 6th August 2014 20:15 GMT Michael Wojcik
Re: Encryption with SSL is problematic
we actually should just abandon it and replace it with something more like SSH
Equally broken, just in different ways.
The root problems are complexity (particularly due to cipher-suite explosion and backward compatibility) and PKI. We have better cipher-suite choices now than we did in the early days of SSL, but there's still no one size that fits all, and switching certainly doesn't help with backward compatibility. And SSH has never had anything useful to contribute to the PKI problem. ("Hey, I don't recognize this key. Do you think it looks good?")
There are basically three common approaches to PKI: throwing your hands up and pretending the problem doesn't exist (SSH in its default "I dunno" mode); X.509 certificate hierarchies (SSL/TLS), which are dreadfully complicated, hard to get right, incomprehensible to users, and a swell way for companies to make money without offering anything of value; and ad-hocracies like PGP's Web of Trust, which don't scale and aren't friendly to newcomers.
If there's a proposal for a sensible, comprehensive, usable PKI that could replace the dire mess we have now, I haven't seen it.