Chiseled in stone
A certain party's election pledges were also chiselled in stone. Doesn't mean we're ever going to see them.
HTTP/2 was signed off back in February, but the spec took its final step towards becoming a standard on Thursday US time with the publication of RFC 7540. RFCs (Requests for Comments) are counterintuitively named because, as the Internet Engineering Task Force explains, “Once an RFC is published, it is never revised. If the …
HTTP/1.1 was standardized back in 1997 and today is... well, never mind.
Ironically, those involved in setting this up were complaining that "the standard was being prepared on an unrealistically short schedule", which meant some new tech didn't make it in, some opportunities were missed, and generally there was a lot of complaining and grumbling from various parties.
To be fair, work on HTTP/2 started in 2012, so that's only 15 years after HTTP/1.1 was released.
I can't believe how, after so many years, and for something as important as HTTP, they still can't get it right.
Poul-Henning Kamp has written a nice article on the politics involved and why some stuff is as it is today -- http://queue.acm.org/detail.cfm?id=2716278
But you can keep revising a spec forever: as you tinker and add another feature, along come new ideas or new tech that you fold in too. In the end you just get feature bloat and a never-ending specification document.
Sometimes it is better to set a stricter brief, complete it, get it out there and in use, and then start drawing up the features for the next .1 release. As long as there is backward compatibility, it doesn't really matter.
Not to overlook HTTP/1.1 revision 1 (RFC 2068), revision 2 (RFC 2616), and revision 3 (RFCs 7230-7235).
This is a standards body that spent 7+ years on the last one(s) and still could not gather the agreement to bump the version number from 1.1 to 1.2.
So yeah, 24 months ... Nov 28 2012 (first draft) to Nov 29 2014 (last draft that anybody implemented and tested) to write, review, test and roll out a major redesign of the world's third most popular protocol. That's fast.
The grumbling about time was that only a few browsers and web servers have had time to implement much of it yet. The larger group of vendors - of IoT devices, printers, firewalls, load balancers, routers, AV products, etc. - had barely had a chance to read and think about it before the word came down from on high (early-to-mid 2014) that the browsers were happy, so no more changes would be allowed, and if it didn't already suit anyone else's needs, tough luck.
>What, you don't remember HTTP/0.9 and HTTP/1.0 ?
Comin' over here with yer fancy pipe-lining!
Gerrof my lawn!
HTTP/2? I hope it says, "This protocol MUST only be used for hypertext." Yer can take yer fancy exchange and database protocols and shove 'em where json and the argonauts never went.
So you'd rather stay with a protocol that confuses server load balancers and thereby breaks transactions because it uses multiple independent TCP streams?
Yes. If your distributed transaction monitor can't handle that case, then get a better one, or don't extend the unit of work past a single request.
HTTP/2 does a fair bit wrong (as the aforementioned Kamp article[1] notes), in its attempts to please too many masters and preempt Google, and not a lot right. There are incremental improvements for use cases that have little to do with HTTP's original purpose, so basically they're stovepiping the protocol rather than forcing people to use something more suitable for the task.
I'm in no hurry to implement it, and I bet it'll be a long time before I hear much demand from my customers for it.
[1] I know PHK is rather the curmudgeon; he seems to be vying with Erik Meijer for the title of Grumpiest Computer Scientist, lately held by Dijkstra. But in my book PHK makes more sense, even when I don't entirely agree, and I'm a bit more impressed with his contributions than Meijer's. I mean, I love me some programming language design, but the world runs on NTP.
It has a few useful features:
- Server push. That means lower load times on complex websites, which are fairly common these days. Rather than the browser and server playing tennis, the server can send the required resources in anticipation of the browser's need.
- Multiplexing of responses. It doesn't matter much for static content, but it is very useful for dynamically generated content.
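The multiplexing point can be sketched in a few lines. Unlike HTTP/1.1, where responses on a connection must arrive strictly in order, HTTP/2 tags every frame with a stream ID, so responses can interleave on one TCP connection and the client reassembles them per stream. This is a toy demultiplexer, not the real framing layer; the frames and stream IDs below are invented for illustration:

```python
# Illustrative sketch of HTTP/2-style multiplexing: frames from different
# responses arrive interleaved on one connection, each tagged with a
# stream ID. (Frame contents here are made up for the example.)
frames = [
    (1, b"<html>"),   # part of an HTML response on stream 1
    (3, b"body{"),    # part of a CSS response on stream 3
    (1, b"</html>"),  # remainder of stream 1
    (3, b"}"),        # remainder of stream 3
]

# Reassemble each response by appending chunks to its stream's buffer.
streams = {}
for stream_id, chunk in frames:
    streams.setdefault(stream_id, bytearray()).extend(chunk)

print(bytes(streams[1]))  # b'<html></html>'
print(bytes(streams[3]))  # b'body{}'
```

The point is that a slow stream 1 no longer blocks stream 3 the way a slow response blocks everything behind it on an HTTP/1.1 connection.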