Google is developing a new application layer protocol designed to speed the movement of stuff across the web. It's called SPDY, pronounced, yes, speedy. Unveiled Thursday with a post to the Google Research blog, this "early-stage" research project is specifically designed to reduce latency via things like multiplexed streams, …
Sounds interesting but...
... given IE's market share you can't really build any apps that rely on this until MS decide to support it. Plus, with shit like IE6 and IE7 still hanging around, that could be a long time coming.
Also you can bet Microsoft's idea of supporting this will be to release their own version that's not compatible with anything else.
Get support added to Apache and it'll become a standard protocol supported everywhere: as sites gradually update they'll gain support, then browsers will add it, then other web servers will follow. Don't get it into standard Apache, enabled by default, and it's probably not going to make it very far.
So, Apache guys, a question for you: will this protocol become a standard or will it fail? The decision is yours...
Hands up everyone who knows what comes after 'extend'.
And remember... you know how you laugh at all those meek people who use the term 'google' when they mean 'the internet' or even 'the web'? Well guess who's going to get the last laugh...
About time, too!
I think the idea is not for apps to rely on it, but for it to be entirely invisible - the browser will ask the server if it's supported, if the server responds it's used and otherwise plain HTTP is used. Kind of like url fopen or disk caching, it's not application-level. It's just an architectural change.
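The negotiation described above can be sketched in a few lines. This is a hypothetical illustration, not SPDY's actual handshake: the function name and the advertised-protocol set are made up for the example. The point is that the upgrade is invisible to the application layer.

```python
# Hypothetical sketch of transparent protocol fallback: the browser only
# upgrades to SPDY when the server advertises support; otherwise it
# carries on with plain HTTP and the application never notices.

def pick_protocol(server_advertised):
    """Return the protocol the browser should speak.

    server_advertised: set of protocol names the server claims to
    support, e.g. learned during connection setup.
    """
    if "spdy" in server_advertised:
        return "spdy"       # both ends support it: upgrade transparently
    return "http/1.1"       # otherwise fall back to plain HTTP

# The page-level request/response looks the same either way:
assert pick_protocol({"spdy", "http/1.1"}) == "spdy"
assert pick_protocol({"http/1.1"}) == "http/1.1"
```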
Hey, more speed
Forking the Web? Great idea, Google. Now fork off and die.
Sounds good - ish
Looks like they've twigged to how wasteful HTTP actually is. Multiple requests/streams per web page, not to mention that HTTP requests are effectively in English, and largely the same information.
The pedant in me, though, says that 55% faster isn't a 2x web - it's a 1.55x web :-)
I really don't know why but I still don't feel concerned about Google getting into every crevice of the internet. I guess part of it is that they do generally do it with open standards and that it is very much in their interest to make it accessible (and not evil). I seem to have a cynicism blindspot for them.
Request prioritization is a good thing. I can't count how many times I've been frustrated because a web site gets stuck on loading some stupid 200kb image, or some element located on a different server that happens to be overloaded, when all I'm interested in is the bleepin' text and I *know* it could be loaded in a tenth of a second.
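The frustration above is exactly what prioritization addresses: small, critical resources go first, big decorative ones last. Here is a minimal sketch using a plain priority queue; the priority numbers and file names are invented for illustration and have nothing to do with SPDY's wire format.

```python
import heapq

# Illustrative sketch of request prioritization: each pending resource
# gets a priority (lower = more urgent), so the text the reader actually
# wants is served before the 200kb image on the overloaded server.
requests = [
    (2, "hero-image.jpg"),   # low priority: the big stupid image
    (0, "page.html"),        # highest priority: the bleepin' text
    (1, "style.css"),
]
heapq.heapify(requests)

order = [heapq.heappop(requests)[1] for _ in range(3)]
# The text comes out first, the big image last:
assert order == ["page.html", "style.css", "hero-image.jpg"]
```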
Just one question
What are the vulnerabilities of this new protocol thingy that sits between two layers ? How can it be bent to some nefarious will ? Are evil haxxors going to get ideas and add yet another set of problems to our Internets ?
What about IETF and the RFC process
Why on earth, if they want to introduce a new standard, don't they use the IETF and RFC process that have proven to work and deliver true standards, agreed upon by a large panel of experts and stakeholders?
I find that 'look, this is how we're going to do it' attitude extremely pretentious at least and potentially dangerous...
Re: Si 1
Amusing. You slag off Microsoft because they may, at some point, ruin Google's plans by....doing exactly what Google has just done. Although, of course, that's fine because everyone knows MS are evil and Google are as pure as the driven snow.
Someone invents a network protocol and everyone is up in arms.
Jesus, let's go back to pen and paper, shall we?
Oh wait, Gutenberg "embraced and extended" that, didn't he!
Bad Gutenberg. Don't you know you should "do no evil"?
re: "early-stage" research
So Google have finished their work and are shipping it Monday?
@What about IETF and RFC process
I actually have sympathy with Microsoft, Google and Sun (Java), because the standards processes are just an excuse for company retards to have a few meetings to discuss politics and get an expenses-paid vacation, elongating the process to ensure that next year's vacation is sorted. By the time the "community" finish with this protocol it will be ruined. Sometimes you need a big company to come in and just produce a protocol - then things will start to happen.
Google here are doing the right thing, and it is in everybody's interest if there are no patent issues - and hopefully it will be adopted, because HTTP is a bit old fashioned now and there is scope for improvement, especially now that HTTP is being used for Web 2.0 communication. If, like AC mentioned, Apache and Mozilla adopt it then it will become standard.
But this is Google
This will not be good. Great, it may speed up the net, but if Google have anything to do with it they will be able to spy on whatever website you visit.
At the moment I can at least stop cookies, web bugs and other ways of spying, but if Google does this, how will we stop them?
It'd be just like Phorm, which wanted to grab our traffic in the UK.
Google have got too big and too much in people's faces.
Sounds good to me
Anything that 'invisibly' extends HTTP for the better can only be a good thing. How old is HTTP 1.1 now? Is there a HTTP 2.0 on the horizon? No. So good on Google for taking this forward at last.
Opera is also helping out in its own way...
I think this is an area that should be optimised. And let's not forget Opera are trying to help out too: http://www.opera.com/business/solutions/turbo/
I'm perfectly happy for one or more solutions, once proven, to then be rubber-stamped as a standard.
As the icon says... Go!
This is good - for Google, not for us
The German security hacker http://blog.fefe.de/?ts=b402b9c9 gives a good analysis: the basic features don't buy you much. The advanced feature, server push, will reduce perceived latency significantly, especially for first-time visits. The price is shoving everything down to the client every time - easily duplicating network load for all the objects which are currently cached on the client side.
The good thing for Google is that ad-blockers will no longer give you any performance gain. And selling ads is Google's core business.
If this is put into your local proxy server, do you need it in the browser? Maybe?
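The caching point made above can be put in rough numbers. This is back-of-the-envelope arithmetic with made-up sizes, not a measurement: it just shows why pushing every sub-resource on every visit wastes the client's cache.

```python
# Rough illustration: if the server naively pushes every sub-resource on
# every visit, objects already cached from a prior visit get re-sent.
# All sizes (in KB) are invented for the sake of the arithmetic.
page = {"index.html": 20, "app.js": 150, "logo.png": 80}
cached = {"app.js", "logo.png"}  # already on the client from last time

# Classic HTTP + cache: only fetch what isn't cached.
pull_bytes = sum(size for name, size in page.items() if name not in cached)

# Naive server push: everything goes down the wire again.
push_bytes = sum(page.values())

assert pull_bytes == 20    # just the HTML
assert push_bytes == 250   # the cache bought you nothing
```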
Let IE stay in the slow lane while everyone else benefits from SPDY.
If servers fall back to HTTP for IE then IE will have a slow web experience until Microsoft supports it.
Of course with their usual EEE strategy they would release MSSPDY which will work well with IE and have quirks with anything else.
Re: This is good - for Google, not for us
“The price is shoving everything every time down to the client…”
Definitely not good for us. That would eat into download allocation and would appear to make local caches & proxies pointless (at least where this protocol is supported).