Google Chrome web protocol seeks 2x download speeds

Google is developing a new application layer protocol designed to speed the movement of stuff across the web. It's called SPDY, pronounced, yes, speedy. Unveiled Thursday with a post to the Google Research blog, this "early-stage" research project is specifically designed to reduce latency via things like multiplexed streams, …


This topic is closed for new posts.
  1. Si 1

    Sounds interesting but...

    ... given IE's market share you can't really build any apps that rely on this until MS decide to support it. Plus, with shit like IE6 and IE7 still hanging around, that could be a long time coming.

    Also you can bet Microsoft's idea of supporting this will be to release their own version that's not compatible with anything else.

  2. Anonymous Coward


    Get support added to Apache and it'll become a standard protocol supported everywhere: as sites gradually update they'll gain support, then browsers will add support, and then other web servers will follow. If it isn't added to standard Apache and enabled by default, it's probably not going to make it very far.

    So, Apache guys, a question for you: will this protocol become a standard, or will it fail? The decision is yours...

  3. Anonymous Coward
    Big Brother


    OK kids,

    Hands up everyone who knows what comes after 'extend'.

    And remember... you know how you laugh at all those meek people who use the term 'google' when they mean 'the internet' or even 'the web'? Well guess who's going to get the last laugh...

  4. Anonymous Coward
    Thumb Up

    bring it

    About time, too!

  5. foxyshadis

    @Si 1

    I think the idea is not for apps to rely on it, but for it to be entirely invisible - the browser asks the server whether it's supported; if the server says yes, SPDY is used, otherwise plain HTTP. Kind of like URL fopen or disk caching - it's not application-level, just an architectural change.
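
    A minimal sketch of that transparent fallback, with made-up names (the comment doesn't describe SPDY's actual negotiation mechanism, so this is purely illustrative):

```python
def fetch(url, server_supports_spdy):
    """Pick the transport invisibly: use the faster protocol when the
    server advertises it, otherwise fall back to plain HTTP."""
    if server_supports_spdy:
        return ("spdy", url)   # multiplexed, compressed transport
    return ("http", url)       # unchanged legacy path

# The page and its scripts never see the difference:
print(fetch("example.com/index.html", True)[0])   # spdy
print(fetch("example.com/index.html", False)[0])  # http
```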

  6. Winkypop Silver badge
    Thumb Up

    Hey, more speed!

  7. Anonymous Coward

    Forking hell

    Forking the Web? Great idea, Google. Now fork off and die.

  8. Annihilator
    Thumb Up

    Sounds good - ish

    Looks like they've twigged to how wasteful HTTP actually is: multiple requests/streams per web page, not to mention that HTTP requests are effectively plain English, and largely the same information every time.

    The pedant in me, though, says that 55% faster isn't a 2x web - it's a 1.55x web :-)
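
    To illustrate the "largely the same information" point: a hypothetical set of plain-text request headers, repeated once per resource on a page, compresses dramatically - the kind of saving header compression chases (the header text here is made up, not taken from SPDY's spec):

```python
import zlib

# Hypothetical plain-text request headers, near-identical across the
# dozens of requests a single page triggers.
headers = (
    "GET /style.css HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "User-Agent: Mozilla/5.0 (Windows; U; MSIE 8.0)\r\n"
    "Accept: text/html,application/xhtml+xml\r\n"
    "Accept-Encoding: gzip,deflate\r\n\r\n"
)
raw = (headers * 20).encode()      # 20 requests for one page
compressed = zlib.compress(raw)
print(len(raw), len(compressed))   # the repetition compresses away
```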

  9. Andres
    Thumb Up

    Still relaxed

    I really don't know why, but I still don't feel concerned about Google getting into every crevice of the internet. I guess part of it is that they generally do it with open standards, and that it's very much in their interest to make it accessible (and not evil). I seem to have a cynicism blind spot for them.

  10. Filippo

    Sounds nice

    Request prioritization is a good thing. I can't count how many times I've been frustrated because a web site gets stuck on loading some stupid 200kb image, or some element located on a different server that happens to be overloaded, when all I'm interested in is the bleepin' text and I *know* it could be loaded in a tenth of a second.
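
    The prioritisation being asked for can be sketched with a simple priority queue (the priorities and filenames are invented for illustration; SPDY's actual scheme isn't described in the comment):

```python
import heapq

# Lower number = fetch first; the text gets priority over heavy assets.
requests = [
    (2, "banner-200kb.jpg"),
    (0, "article.html"),
    (1, "style.css"),
    (2, "ad-from-slow-server.js"),
]
heapq.heapify(requests)
order = [heapq.heappop(requests)[1] for _ in range(4)]
print(order)  # the text and stylesheet come out before the heavy images
```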

  11. Pascal Monett Silver badge

    Just one question

    What are the vulnerabilities of this new protocol thingy that sits between two layers? How can it be bent to some nefarious will? Are evil haxxors going to get ideas and add yet another set of problems to our Internets?

  12. Anonymous Coward
    Big Brother

    What about IETF and the RFC process

    Why on earth, if they want to introduce a new standard, don't they use the IETF and RFC process that have proven to work and deliver true standards, agreed upon by a large panel of experts and stakeholders?

    I find that 'look, this is how we're going to do it' attitude pretentious to say the least, and potentially dangerous...

  13. Bassey

    Re: Si 1

    Amusing. You slag off Microsoft because they may, at some point, ruin Google's plans by... doing exactly what Google has just done. Although, of course, that's fine, because everyone knows MS are evil and Google are as pure as the driven snow.

  14. Eddie Edwards


    Someone invents a network protocol and everyone is up in arms.

    Jesus, let's go back to pen and paper, shall we?

    Oh wait, Gutenberg "embraced and extended" that, didn't he!

    Bad Gutenberg. Don't you know you should "do no evil"?

  15. Craig 12

    re: "early-stage" research

    So Google have finished their work and are shipping it Monday?

  16. Anonymous Coward
    Thumb Up

    @What about IETF and RFC process

    The trouble with IETF, RFC, W3C, ECMA etc is that they really know how to bodge a standard - and in the process take years to do it. Look at the shitty "ratified" standards out there for POP3, SMTP, HTTP, HTML, JavaScript etc - they are all pants. What is the use of all these committees to check standards if they let all sorts of proprietary rubbish seep in and then keep coming up with the rubbish that they do?

    I actually have sympathy with Microsoft, Google and Sun (Java), because the standards processes are just an excuse for company hangers-on to have a few meetings to discuss politics and get an expenses-paid vacation, elongating the process to ensure that next year's vacation is sorted. By the time the "community" finish with this protocol it will be ruined. Sometimes you need a big company to come in and just produce a protocol - then things will start to happen.

    Google here are doing the right thing, and it's in everybody's interest if there are no patent issues - and hopefully it will be adopted, because HTTP is a bit old-fashioned now and there is scope for improvement, especially now that HTTP is being used for Web 2.0 communication. If, like the AC mentioned, Apache and Mozilla adopt it, then it will become standard.

  17. ad47uk
    Thumb Down

    But this is Google

    This will not be good. Great, it may speed up the net, but if Google have anything to do with it they will be able to spy on whatever website you visit.

    At least at the moment I can stop cookies, web bugs and other ways of spying, but if Google does this, how will we stop them?

    It'd be just like Phorm, which wanted to grab our traffic in the UK.

    Google have got too big and too much in people's faces.

  18. IndianaJ
    Thumb Up

    Sounds good to me

    Anything that 'invisibly' extends HTTP for the better can only be a good thing. How old is HTTP 1.1 now? Is there an HTTP 2.0 on the horizon? No. So good on Google for taking this forward at last.

  19. Jason Alcock

    Opera is also helping out in its own way...

    I think this is an area that should be optimised. And let's not forget Opera are trying to help out too:

    I'm perfectly happy for one or more solutions, once proven, then to be rubber stamped as a standard.

    As the icon says... Go!

  20. pklausner

    This is good - for Google, not for us

    The German security hacker gives a good analysis: the basic features don't buy you much. The advanced "server push" feature will reduce perceived latency significantly, especially on first visits. The price is shoving everything down to the client every time - easily doubling network load for all the objects that are currently cached on the client side.

    The good thing for Google is that ad-blockers will no longer give you any performance gain. And selling ads is Google's core business.

  21. Robert Carnegie Silver badge

    Proxy server?

    If this is put into your local proxy server, do you need it in the browser? Maybe?

  22. Giles Jones Gold badge


    Let IE stay in the slow lane while everyone else benefits from SPDY.

    If servers fall back to HTTP for IE then IE will have a slow web experience until Microsoft supports it.

    Of course with their usual EEE strategy they would release MSSPDY which will work well with IE and have quirks with anything else.

  23. Quirkafleeg

    Re: This is good - for Google, not for us

    “The price is shoving everything every time down to the client…”

    Definitely not good for us. That would eat into download allocation and would appear to make local caches & proxies pointless (at least where this protocol is supported).


Biting the hand that feeds IT © 1998–2019