site: "this image is my hero image, make sure it's a high priority image,"
Ah. Will there also be a way for the user to ensure anything labelled "hero image" to be totally ignored? :-)
Cloudflare figures it has fixed the web, at least insofar as speedy page loading on its network is concerned. The content delivery biz on Tuesday revealed changes to its HTTP/2 Prioritization implementation that make websites load page resources – images, scripts, text and the like – more efficiently. "It's rare to have the …
When I visit a cloudflare site, I waste many seconds on a "checking your browser" page proving I'm not a DDoS bot before actually getting to the content I requested.
Not much point shaving a few milliseconds off page load when an issue of elephant-in-the-room proportions dominates the overall time to load.
... and it presents a total dead end for people like me who don't or can't enable javascript.
Particularly annoying when I use "w3m" on the command line, which is quite often.
cloudflare have come up with some nifty things - they know their stuff - but in this case their decision sucks monkey's balls, and if people knew what was going on, fewer of them would enable it.
Perhaps oddly, the only site I have noticed that problem with is actually The Register (I think it happens when I switch my mobile between mobile data and WiFi; I'm guessing some cookie gets set, and the site then takes a huff when my IP address changes).
Yes, but no. It's progressive JPEG, but for multiple progressive JPEGs at once.
Having 10 progressive JPEGs on your site isn't much use if the first one has to load fully before the next one starts.
Cloudflare's technique allows all 10 to progressively load at the same time.
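A toy model of why that interleaving helps (this is a hand-rolled simulation, not Cloudflare's actual scheduler): treat each progressive JPEG as a list of scans, then compare strict sequential delivery against round-robin delivery and see when each image first becomes viewable.

```python
# Hypothetical simulation: each "image" is a number of progressive scans.
# Sequential delivery finishes image 1 before touching image 2; round-robin
# delivery gets the first coarse scan of every image out almost immediately.
from collections import deque

def sequential(images):
    """Yield (image_index, scan_index) in strict image order."""
    for i, scans in enumerate(images):
        for s in range(scans):
            yield (i, s)

def round_robin(images):
    """Yield (image_index, scan_index), one scan per image per turn."""
    queue = deque((i, 0, scans) for i, scans in enumerate(images))
    while queue:
        i, s, total = queue.popleft()
        yield (i, s)
        if s + 1 < total:
            queue.append((i, s + 1, total))

def first_scan_times(schedule, n_images):
    """Time step (1-based) at which each image's first scan arrives."""
    times = {}
    for t, (i, s) in enumerate(schedule, start=1):
        if s == 0:
            times[i] = t
    return [times[i] for i in range(n_images)]

images = [4] * 10  # ten progressive JPEGs, four scans each

print(first_scan_times(sequential(images), 10))   # first scans at 1, 5, 9, ... 37
print(first_scan_times(round_robin(images), 10))  # first scans at 1, 2, 3, ... 10
```

Same total bytes either way; the round-robin schedule just means every image shows a rough preview by time step 10 instead of the last one waiting until step 37.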
I actually thought about GIFLink, an external X/Y/ZModem protocol software that showed GIF pictures as you downloaded them from BBS's, although interlaced GIF images were not that common, IIRC. It also allowed you to abort the currently transferred image if the image didn't fill your needs...
HTTP has always been quite inefficient: as soon as we got past "I just want a single plain HTML page from a single-IP domain name with no security", it became a mess.
We added gzipped content, we added TLS and SNI, we added cached content, cache control and all kinds of X-headers, we added multiple request streams which overwhelmed things so we added pipelining multiple requests in a single stream, etc.
It needed a reboot.
I quite understand the problem - watch the GIF. Edge just waits until it knows where everything goes. The earlier versions of modern browsers just splatted to the screen and moved things later (which creates a mess of movement and wrong-clicks). The new versions now prototype page layout based on available information, request resources in the background, and fill in the gaps as they come in.
There's nothing in HTTP2 that's ground-breaking. We've just gone from a single human-readable conversation to an encrypted, shared-pipeline channel with all kinds of content typing and prioritisation.
What gets me is that with all that reduction in latency, compression, etc. we still don't have anything approaching a website that actually loads fast. If I made an HTML table with a couple of optimised JPGs for the page in that GIF, I could splat that on screen over even the slowest connection way before those browsers manage to render it (I'm guessing that demo is exaggerated by using a very low connection speed, because modern browsers aren't THAT slow), just by making sure that I send the least amount of data I can in the simplest format.
All that CSS, JS, etc. nonsense results in megabytes of load for a simple page, plus conditional display and execution based on running that code, which has to be done after downloading from half-a-dozen different places.
Though it's a step forward, we're still just making unnecessarily bloated sites.
Even this page - a list of forum comments, a handful of links to other articles, and an ad or two - is currently running 30MB of JS virtual machine while I type this comment, not to mention downloading dozens upon dozens of images and JS files.
One day we're gonna hit a physical limit, and then people will have to learn to optimise again.
Where once they could pop ten ads in a split second, now they will be able to pop fifteen and a half, maybe even more if they improve the technique further - they're working on it.
That's really great!
In theory, your site over unencrypted HTTP could be altered by a MitM attacker, causing your viewers' browsers to load resources that you did not expect them to load. Those resources could include malware, or ads for companies you don't like or don't wish to be associated with your site. The very content of the page could be altered to say things you find abhorrent.
In practice, you probably don't need to worry if your site has no javascript or paid advertising. The risks are mostly to your viewers, not to you, anyway.
That said, the cost these days of enabling SSL for a small site is minimal - and I don't just mean that LetsEncrypt offers free certificates. Modern processors (Atom processors included) often contain hardware acceleration for many of the cryptographic functions used by SSL, and if you take the time to set up an ACME client, your certificate can be renewed automatically with no further effort on your part.