Strangeloop – a Vancouver-based outfit offering an online service for accelerating website load times – has embraced Google's SPDY project, a new application-layer protocol designed to significantly improve the speed of good ol' HTTP. The Canadian company claims that it's the first outfit to offer a commercial product to let you …
This all sounds interesting, but Bixby loses credibility when he says things like: "On the other side, you look at the evolution of TCP and HTTP over the last twenty years, and it's tough to argue that things are progressing. There has been almost no significant changes to those protocols to help with the bottlenecks we're facing".
Just off the top of my head, the RFC for HTTP 1.1 was published in 1999, i.e. 12 years ago. If he's unaware of that, and of the performance benefits it offered over HTTP 1.0, then I can't see how he's in any way qualified to comment on what improvements have been made to TCP or HTTP in the last 20 years to combat bottlenecks.
10 - 20% meh.
Goodness, what will I do with all that time.
Fail...they should just make browsers play relaxing elevator music while pages load.
Re: 10 - 20% meh.
This isn't really for the benefit of the end users (although they might like to say that); it's for the people running the servers. Why else do you think Google developed it?
Deploying smooth jazz
in 3... 2... 1....
This would be interesting if there were plug-ins for the various proxies, e.g. Squid. Then a site (e.g. an office, not a web site) could deploy the plug-in on a proxy at the firewall, and accelerate the connections over the (relatively slower) WAN connection, while allowing the clients on the (much faster) LAN side to remain unchanged.
AND, were the plug-in also able to operate in accelerator mode, an existing web site could apply the acceleration on the accelerator proxy and not need to modify the existing server.
Never mind Social Media plugins...
I run my own DNS, and so can block the unwanted advertisement and tracking servers at the nameserver level.
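For anyone wanting to try the same, here is a minimal sketch of nameserver-level blocking, assuming the resolver is dnsmasq (the domain below is a made-up placeholder, not a real ad network): the `address` directive answers queries for a domain and everything under it with a sinkhole address, so the browser never even connects to the ad server.

```
# /etc/dnsmasq.conf -- resolve an ad/tracking domain (and all of its
# subdomains) to a sinkhole address instead of its real records.
# "ads.example.com" is a placeholder for illustration only.
address=/ads.example.com/0.0.0.0
```

One line per unwanted domain is enough; the browser's request fails instantly instead of waiting on a slow ad server.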
Other people's wi-fi connections always seem slow by comparison .....
Welcome to the Google's Web
You know, the one with protocols that aren't actually properly documented - no, you can't call a document riddled with TODO notes proper documentation.
Oh and it's great for ads because no matter what your ad blocker says, the server will push them through anyway, to "optimise performance".
This is the thing
SPDY probably has a pile of merits. But it needs a proper review and standardization process to make sure the protocol is sane and well specified. Just like WebM, WebP, NaCl etc. etc.
It will be other browsers that suffer if they struggle to implement half baked badly documented specs that Google can change on a whim.
I also suspect that SPDY / WebP is the precursor to Google going after Opera's remote proxy business.
NaCl is a standard?
How do I check whether mine is compliant? Is there a standard chip to test it against?
What about CH3COOH?
Google would like it to be one
Google are promoting NaCl and PNaCl with the aim of making them standards and have open-sourced the efforts. I think NaCl is a hack because it uses native instructions, rendering it useless for cross-architecture apps. But PNaCl is LLVM-based and has long-term promise.
But it needs to be formally defined and subject to review. The APIs that apps can see, the security model, the permissions, the multithreading, the interaction with the DOM, storage etc.
Just shoving out some open source reference implementation, or "trusting" Google with the standard, is not acceptable for something which other browsers would need to implement. It virtually guarantees Microsoft and Apple won't play ball. Mozilla turned down a similarly half-baked spec in the recent WebP image format.
Sounds poisonous to me. I want Phosphorus-free salt, thankyouverymuch!
The one with the periodic table in the pocket, thanks
"And any trick in the book we can use to make the HTML faster, we use it."
Hmm. A good trick, provided the resulting code still validates and still behaves properly in all target browsers (which probably includes IE6). That's hard enough to do without optimization, so I'd be interested to see what this does to those carefully-designed pages. And what about all those webpages out there that invoke quirks mode in the browsers (i.e. they contain invalid code)? Will SPDY sort these out, or will it fall back to taking no action?
I first wrote in HTML when nearly everyone was connected through a 56k modem, and I used a rule of thumb that a page shouldn't be larger than 20kB, including any resources (images etc) that it loaded. I never found this much of a limitation, although I wasn't, of course, displaying Flash animations and video advertisements. I don't know what I'd use as a rule of thumb today, but it can't be difficult to work out the maximum size a page can be before your target customers start losing interest while it's loading.
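That old rule of thumb is easy to sanity-check with some back-of-the-envelope arithmetic; the figures below are illustrative assumptions, not measurements.

```python
# Rough transfer time: page size in bits divided by the raw line rate.
# Ignores latency, TCP slow start and protocol overhead, so real
# load times would be somewhat worse.

def load_time_seconds(page_bytes: int, line_bits_per_sec: int) -> float:
    """Seconds to move page_bytes over a link of the given raw rate."""
    return page_bytes * 8 / line_bits_per_sec

# The 20 kB budget on a 56k modem: just under three seconds.
modem_wait = load_time_seconds(20_000, 56_000)      # ~2.86 s

# The same wait on an 8 Mbit/s ADSL line buys roughly 2.9 MB,
# which suggests why page budgets have ballooned since then.
adsl_budget_bytes = modem_wait * 8_000_000 / 8
```

The same calculation, run against your target customers' typical line speed and patience, gives a modern page-size budget directly.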
It seems to me that technology like this is simply encouraging bloated website design.
I agree somewhat..
56k dictated the page size, the high latency ensured that size was chopped down even more, and as a result the smaller the page generally the better the experience.
I don't have figures to compare latency on 56k connections with ADSL or cable connections these days in the UK, but I guess Google is trying to hide that latency by rewriting the protocol.
If people stuck to writing efficient code, this wouldn't be so much of a problem...
If people stuck to writing efficient code...
The worst problem I've seen was a homepage which contained a few paragraphs about the author, and a picture of him. The image size was constrained by the layout to little more than a thumbnail, but the source of the image was a 10MB bitmap, which the browser had to download and then render into the <img> block, throwing away 97% of the information downloaded. I fixed this by converting the 10 MB BMP into a 50 kB JPG. You won't be surprised to hear that the client objected, saying he couldn't see any difference...
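The arithmetic behind that fix is worth spelling out, since it shows why the browser was throwing so much away. The dimensions below are illustrative guesses, not taken from the actual file.

```python
# An uncompressed 24-bit bitmap costs three bytes per pixel, paid in
# full on download even if the layout then shrinks it to a thumbnail.

def raw_bitmap_bytes(width: int, height: int, bytes_per_pixel: int = 3) -> int:
    """Approximate payload of an uncompressed bitmap (headers ignored)."""
    return width * height * bytes_per_pixel

# A plausible ~10 MB source image...
full_size = raw_bitmap_bytes(2100, 1600)     # 10,080,000 bytes

# ...versus the ~50 kB JPEG that replaced it: roughly a 200x saving.
saving_factor = full_size / 50_000
```

No protocol trickery comes close to a saving like that; it's pure content discipline.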
"A good trick, provided the resulting code still validates and still behaves properly in all target browsers (which probably includes IE6). "
Since IE6 doesn't support the new protocol, data for IE6 would still be transmitted 'the old way'.
Re: HTML optimisation.
"Will SPDY sort these out..."
You didn't read the article, then? It said that Strangeloop uses a variety of optimisation techniques, of which SPDY is one, and one specific to Chrome at that. So no, by definition it won't sort out anything at all related to browser compatibility, since no browser but Chrome will ever see it used.
"the bottlenecks we're facing"
The bottlenecks are simply unnecessary page bloat plus shedloads of ads. Scrub the ads = instant turbo. For free. Scrub JavaScript as I do and you've closed many security holes too (not least because half the pages won't behave correctly, idiots...)
Some knowledgeable ones on Slashdot got stuck into the details recently and it wasn't pretty. My own question is: much of SPDY looks like a well-established protocol called BEEP, so why not use that?
BEEP is an application protocol framework for connection-oriented, asynchronous, request-response interactions. This particular subset supports a large class of Internet applications, and provides solutions to common design issues for those applications, including: framing, segmentation, structuring, and multiplexing of messages, along with authentication and privacy.
Hello google architect, why not BEEP?
"Scrub the ads = instant turbo. For free."
You do know why the ads are there, right? To earn money. Taking them away is not "free".
Let me guess
Works best if you use chrome.
"At the moment, Google Chrome is the only browser that uses SPDY on the front-end."
You really do get a lot of information if you read the article.
I can download quicker!
- Doctor Strangeloop
Please tell me I didn't just see that word used in a NON-ironic manner. I weep for IT journalism.
HTTP compression came along years ago, and if anything would have been of more use then. But how many sites enable it?
Going the way of thrice?
Whatever happened to "twice as fast"? It's got three fewer characters to type than "two times faster".