HTTPS bent into the next super-cookies by researcher

A UK consultant has demonstrated how a feature of the secure Web protocol HTTPS can be turned into a tracking feature that is, in the case of some browsers, ineradicable. HTTP Strict Transport Security (HSTS), described in RFC 6797 (here), is a mechanism that helps sites redirect users from the insecure HTTP version to the …

  1. DryBones

    Without having a firm understanding of the protocol (not a network guy, not going to be arsed to go read it), I think my first question would be whether the includeSubDomains flag actually did get put in as Mikhail said or not.

    Seems like another of those things that would stand out like a beacon via Ghostery/NoScript/ScriptSafe.

  2. MrT

    So...

    ...would a solution be to turn off FF or Chrome secure browsing and use something rules-based like the EFF's HTTPS Everywhere?

    1. Anonymous Coward
      Trollface

      Re: So...

      Or use IE

      1. Anonymous Coward
        Anonymous Coward

        Re: So...

        Or use IE

        heh yeh - and still be susceptible to simple MITM attacks.

    2. Anonymous Coward
      Anonymous Coward

      Re: So...

      More like:

      If you want this "super cookie" to be removed, then clear your cookies - and use a different browser for incognito/private browsing.

    3. Don Dumb

      Re: So...

      @MrT - But HTTPS Everywhere doesn't cover every site, mostly just the major ones as I seem to remember. So if I understand this correctly, all you would need is to visit a site that isn't covered by HTTPS Everywhere, which then redirects your browser to the secure version, and you have the super-cookie on your browser.

      Perhaps this needs a Firefox extension of its own?

      1. MrT

        Re: So...

        There was talk of importing the inbuilt HSTS tables from FF and Chrome into HTTPS Everywhere as rules. The problem is that they both include HSTS by default, so folk are using this already. Clearing the cache/history on exit doesn't clear this tag, but using something like CCleaner might.

        Either way, it's a workaround until the problem can be addressed properly...

  3. Anonymous Coward
    Anonymous Coward

    301 redirect...a little scruffy, but it works.

    1. Anonymous Bullard

      Yes, 301 is the tried and tested way - it's worked for years, and still works.

      However, HSTS not only redirects but it also says "and don't ever try http again on this domain [or subdomains]" (even if "http://" is typed in).

      You can add domains to the browser's pre-load list, so that every modern browser (though not IE) will know never to try HTTP on your domains.

    2. vagabondo

      301 redirect...

      But: that is a server-side "solution". It does not protect the client from a malicious web site. This "super cookie" problem requires a client-side solution.

      If this was a cookie, it should only be readable by the server that set it. However this flag seems to be readable by any contacted server. This looks like a flaw in either the protocol or its implementation.

      1. sabroni Silver badge

        Re: If this was a cookie, it should only be readable by the server that set it.

        Exactly! Isn't the issue here about boundaries not protocols? Why can other sites see a domain specific secret?

        1. Anonymous Bullard

          Re: If this was a cookie, it should only be readable by the server that set it.

          Isn't the issue here about boundaries not protocols? Why can other sites see a domain specific secret?

          No - the issue is about being able to use it as part of a "fingerprint", used to identify and track visitors (over multiple cooperative domains, and regardless of normal/private mode).

          If a site wants to store legitimate domain-specific secrets (like an authentication id), then they'd be using cookies like the rest of the world.

          1. sabroni Silver badge

            Re: the issue is about being able to use it as part of a "fingerprint",

            Sorry, that explanation doesn't really help.

            What is the "it" you're referring to?

            The article says:

            >> His point is that an HSTS “pin” is set for each HTTPS-redirected site you use, it's unique to user and site, and it's readable from your browser settings by any site <<

            That looks like a domain issue, specifically "it's readable from your browser settings by any site". Is the article wrong? What am I missing?

            1. Anonymous Coward
              Anonymous Coward

              Re: the issue is about being able to use it as part of a "fingerprint",

              Is the article wrong? What am I missing?

              Perhaps you should read the original reports

              1. Michael Wojcik Silver badge

                Re: the issue is about being able to use it as part of a "fingerprint",

                Perhaps you should read the original reports

                Indeed. For those with time to complain about the problem, but not the time to read about it, the process of determining whether a given URL has a corresponding HSTS tag in the UA goes like this:

                1. Make a Javascript request (e.g. by inserting an IMG element into the DOM with non-display styling) for the URL, using the "http" scheme.

                2. Query the DOM to see whether the response used the "http" or "https" scheme. If it's the latter, then HSTS applied.

                The Javascript in question can be used in pages originating from an unrelated site. SOP doesn't restrict the URLs of IMG or other remote-resource-fetching elements created by scripts.
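
                A minimal sketch of that probe, assuming (hypothetically) a cooperating server that serves /hsts-check.png over https but errors over plain http, so load success reveals which scheme was actually used. The names probeUrl and probeHsts, and the resource path, are illustrative, not code from the reports:

                ```javascript
                // Illustrative probe for one host. If the UA holds an HSTS entry
                // for `host`, the http:// request below is silently upgraded to
                // https:// and the image loads; otherwise it stays on http and errors.
                function probeUrl(host) {
                  return 'http://' + host + '/hsts-check.png'; // hypothetical path
                }

                function probeHsts(host, onResult) {
                  const img = document.createElement('img');
                  img.style.display = 'none';           // step 1: non-display styling
                  img.onload  = () => onResult(true);   // loaded  => upgraded => HSTS set
                  img.onerror = () => onResult(false);  // errored => stayed on http
                  img.src = probeUrl(host);
                  document.body.appendChild(img);       // step 1: insert into the DOM
                }
                ```

                Run from any page, this yields one boolean per probed host - which is all the fingerprinting attack needs.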

    3. Michael Wojcik Silver badge

      301 redirect...a little scruffy, but it works.

      It's not as comprehensive as HSTS. As others have noted, HSTS tells the UA to redirect requests to the secure site, rather than having the server do it.

      As I understand it, there are two motives for HSTS. One is to reduce load on the server. Yes, it's trivial to have an HTTP server answer any request received over an insecure channel with a 301 (Moved Permanently) redirection to the secure site. But that does require processing on the server, for every request, until the UA stops requesting new resources.

      The other is to prevent the UA from ever sending another insecure request to the site, and that's critical because the UA will send any cookies set by the server over those insecure requests. So if server-side redirection is used instead of HSTS, an attacker who can passively (eg by capturing packets) or actively (eg by DNS poisoning) intercept the request can trick a user into making a request on the insecure channel and steal that user's session cookie. (Getting a user to make a request to the insecure URL is trivial if HSTS or some equivalent, such as SSL Everywhere, is not being used; a web bug does the job.)

      From the article: "which seems odd to El Reg since IE manages to navigate the Web without supporting HSTS at all."

      El Reg needs to understand the latter issue with redirects, and why HSTS is important. It's not a question of "navigat[ing] the Web". I can "navigate the web" with netcat and hand-written HTTP requests; that's utterly irrelevant to this issue.

      A fingerprinting vulnerability is bad; a session-theft vulnerability is worse.

  4. Dan 55 Silver badge
    WTF?

    Eh?

    Why is there no same domain policy, like cookies?

    1. Anonymous Coward
      Anonymous Coward

      Re: Eh?

      These "cookies" don't have much data in them; they're just flags (two possible values each, though you can combine them to store more data), and their names would have to be guessed. At best you'll be storing a unique identifier, which means the domains will have to be in co-operation with each other anyway. You're not going to get any private information from it (normal cookies are used for that).

      1. Anonymous Coward
        Anonymous Coward

        Re: Eh?

        The article implies there is a browser specific secret number that can be used by other sites to track the browser. This is supposed to be some kind of super-cookie, which means an id that the user can't clear.

        1. Anonymous Coward
          Anonymous Coward

          Re: Eh?

          "some kind of super-cookie, which means an id that the user can't clear."

          From the article: "some browsers allow the HSTS flags to be cleared, so that in Chrome, Firefox and Opera"

          However: "When using Safari on an Apple device there appears to be no way that HSTS flags can be cleared by the user"

          And they're flags - each has two values (it would take 8 of these "cookies" to store a single character, or 16 for an identifier covering 65,536 users), and their names are unknown to anyone except the one who set them.

          1. Anonymous Coward
            Anonymous Coward

            Re: Eh?

            From the article:

            His point is that an HSTS “pin” is set for each HTTPS-redirected site you use, it's unique to user and site.

            You say:

            And they're flags - it has two values

            One of you is wrong as there are more than two user agents!

          2. wdmot

            Re: Eh?

            @AC

            I think you have it right, basically. But it doesn't take a bunch of different websites cooperating -- all it takes is one site with many virtual addresses. Look at the source code at RadicalResearch's blog on the issue where they demo the ability to create this "super cookie". Can't say I understand it completely, but it appears to use 32 different virtual servers "0-hsts-lab.radicalresearch.co.uk" through "1f-hsts-lab.radicalresearch.co.uk" to create a unique 32-bit identifier which the main radicalresearch.co.uk site could then use for tracking you.

            Maybe someone savvy can take a look at that and explain it to us? ^_^
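
            Not savvy enough to vouch for their exact code, but the subdomain scheme can be sketched roughly like this (an illustrative reconstruction, not RadicalResearch's source; the hostname pattern just mimics the 0- through 1f- labels mentioned above). Each set bit of a 32-bit id selects one subdomain, and loading any https URL on that subdomain plants an HSTS entry there:

            ```javascript
            // Illustrative encoder: map a 32-bit id to the subdomains whose
            // HSTS flags should be set. Bit i corresponds to the hex label i,
            // so bit 0 is "0-<base>" and bit 31 is "1f-<base>".
            function subdomainsForId(id, base) {
              const hosts = [];
              for (let bit = 0; bit < 32; bit++) {
                if ((id >>> bit) & 1) {
                  hosts.push(bit.toString(16) + '-' + base);
                }
              }
              return hosts; // fetch an https URL on each host to set its flag
            }
            ```

            Reading the id back is then the probe described elsewhere in the thread: try each of the 32 subdomains over plain http and record which requests get upgraded.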

          3. Michael Wojcik Silver badge

            Re: Eh?

            it will take 8 of these "cookies" to store a single character

            I'm afraid not. The vulnerability is indeed a binary channel, so one bit per HSTS flag, but 8 are required for a single character only if you need to encode 256 different characters. Much more compact encodings are possible, since the attacker controls both sides of the channel, has a very large symbol set to choose from, and can indulge in a great many decoding trials.

            Davidov's original discussion (link in the article) points out that the vulnerability basically gives the attacker a Bloom filter. Combine that with a predictive decoder along the lines of PPMD (or, more generally, all sorts of possible Markov-process approaches), and probably a dictionary compressor like star coding, and a relatively specific domain of messages to encode, and an attacker could feasibly store quite a bit of data using the HSTS channel.

            Implementation is left as an exercise for the reader.
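
            For anyone wanting a starting point for that exercise, here is a minimal, generic Bloom filter sketch (illustrative only; it is not tied to HSTS, but each boolean slot plays the same role an HSTS flag does in the attack - membership tests with no false negatives, but possible false positives):

            ```javascript
            // Minimal Bloom filter: m one-bit slots, k cheap hash probes per
            // item. Lookups never give false negatives but may give false
            // positives - the trade-off alluded to above.
            class Bloom {
              constructor(m = 64, k = 3) {
                this.bits = new Array(m).fill(false);
                this.m = m;
                this.k = k;
              }
              positions(item) {
                const out = [];
                for (let seed = 1; seed <= this.k; seed++) {
                  let h = seed;
                  for (const ch of item) h = (h * 31 + ch.charCodeAt(0)) % this.m;
                  out.push(h);
                }
                return out;
              }
              add(item) { for (const p of this.positions(item)) this.bits[p] = true; }
              has(item) { return this.positions(item).every(p => this.bits[p]); }
            }
            ```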

    2. Michael Wojcik Silver badge

      Re: Eh?

      Why is there no same domain policy, like cookies?

      I've just answered this in another thread above, but: The Same-Origin Policy doesn't apply because the malicious script doesn't examine the HSTS data directly. It makes requests for URLs using the "http" scheme, and then sees whether the responses use the "https" scheme.

      You can do this manually if your browser lets you see where page resources come from. Create a local HTML file with an IMG element for an image hosted on an HSTS site that you've visited in the past; make sure the src attribute value uses the "http" scheme. Open the page in your browser, then check to see where the image actually came from, and you should see that it was actually fetched using an https-scheme URL.

      A script can do the same.

      So the HSTS tag is never directly visible to the script. Instead, the script probes to see whether a given URL is subject to HSTS transformation by the UA (browser). And so SOP doesn't apply.

      By itself, this could be used for simple history tracking, similarly to what people have done with style-snooping (scripts that check the color of links to determine whether you've visited them recently). The actual fingerprinting attack described by Davidov and then Greenhalgh involves a malicious site that uses HSTS, has a wildcard certificate, and does not set the includeSubDomains flag on its HSTS tags. That lets the site set multiple HSTS tags, and so it can encode arbitrary data by setting some subset of a large set of possible tags.

  5. Anonymous Coward
    Anonymous Coward

    Erm...

    "Nor do “private” or “incognito” browsing modes help."

    I'm using Iron. I got one code from the demo site in "normal" mode, and a different one when using Incognito mode. Both were persistent, although a test with Sandboxie kills it, as any changed files which contain the HSTS info are zapped.

    1. Michael Wojcik Silver badge

      Re: Erm...

      Greenhalgh recently updated his post to say that Firefox 34.0 no longer shares its HSTS database between normal and private-mode windows, so for FF 34 private-mode does "help", in the sense that it prevents this tracking vulnerability - but potentially allows the sort of exploits that HSTS was introduced to prevent.

      On the other hand, since cookies aren't shared between normal-mode and private-mode windows either, you couldn't expose a normal-mode session cookie by forcing a downgrade in a private-mode window (taking advantage of the HSTS split). I don't immediately see any new vulnerability introduced by this change.

      On the third hand, apparently the Chromium developers have had several stabs at reducing the attack surface and reverted each one, so there seem to be subtleties to any obvious remediation.

  6. sabroni Silver badge

    Investigation helps not!

    So reading up on this on Wikipedia and elsewhere, I see that HSTS is effectively an https-only header that tells a browser to i) communicate with the domain using only https for a specific time and ii) interpret any secure transport errors as meaning it should stop communication immediately. The header is ignored if received over plain http, and shouldn't be sent over http in the first place.
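
    For reference, the header itself looks something like this (the max-age value here, one year in seconds, is just an illustrative choice):

    ```http
    Strict-Transport-Security: max-age=31536000; includeSubDomains
    ```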

    So in normal use there is a potential for MITM attacks during your first contact with an HSTS site as the redirect to https happens with the usual 301. Once you connect with https you get the special header and your browser knows to always communicate to the domain using https, making further MITM attacks very difficult.

    Nowhere in that is there any requirement for this information to be shared with any other domain, or any advantage to doing so. I don't see anywhere in there a requirement for a magic number between the domain and the browser. What this generates is a private list of sites and durations that the individual browser uses to force https on certain sites.

    So how did this become a tracking issue?

    1. sabroni Silver badge

      Re: Ahh, but reading the original article does

      Ok, it's not very clear from the article but as far as I can tell it's to do with using the fact that a single site is HSTS enabled as a bit and storing an identifier by hitting lots of sites.

      So (I think) the idea is you set up 8 domains, for example (to hold a byte). Hit each in turn with a URL containing a flag asking the server to respond with "HSTS enabled" to store a 1 or "HSTS disabled" to store a 0. Then later the code attempts to read those sites again, without the flag and using http. The server responds indicating whether the connection was https or not, and you can reconstruct your byte from that information!
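
      The read-back step can be sketched like this (illustrative; it assumes the http probes described above have already yielded one boolean per domain):

      ```javascript
      // Illustrative decoder: probes[i] is true when domain i's http
      // request was upgraded to https (HSTS flag present), i.e. the
      // stored bit i was 1.
      function byteFromProbes(probes) {
        let value = 0;
        for (let i = 0; i < probes.length; i++) {
          if (probes[i]) value |= 1 << i;
        }
        return value;
      }
      ```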

      Yeah, as Google responded "defeating such fingerprinting is likely not practical without fundamental changes to how the Web works". For once I agree with Google. Gah!!!

      1. Graham Cobb Silver badge

        Re: Ahh, but reading the original article does

        I don't think defeating such fingerprinting is that hard. HSTS seems to be mainly an optimisation (although it has some security benefit if users are in the habit of typing in URLs with http: prefixes). Four things that I think should happen:

        1) Browsers should only retain HSTS info for relatively short times (I would choose less than 1 day, others might choose several days, a browser might default to a longer period -- say 1 month). This is a bit harder to make work than you might think because you need to prevent the tracker from just "refreshing" the setting each time you connect to the site.

        2) HSTS should be ignored for access from javascript.

        3) Users should be able to turn HSTS off completely.

        4) Plugins like HTTPS Everywhere should turn HSTS off completely and rely on their own capabilities.

        That would mitigate the issue for normal users and allow the most privacy conscious to eliminate it completely.

        1. Michael Wojcik Silver badge

          Re: Ahh, but reading the original article does

          it has some security benefit if users are in the habit of typing in URLs with http: prefixes

          Sigh. Not all URLs come from the user. Davidov's original piece - link's in the article - has a link to an explanation of why HSTS closes a vulnerability.

    2. Michael Wojcik Silver badge

      Re: Investigation helps not!

      Nowhere in that is there any requirement for this information to be shared with any other domain

      And it isn't. See my explanatory posts above.

  7. cantankerous swineherd

    short story: the internet is a snake pit and shouldn't be used for anything other than cat videos.

  8. fearnothing

    So, trying to make sure I've understood this correctly - for this to uniquely identify a user, you'd have to have set up, and be redirecting users to, a sufficient quantity of domains to uniquely identify them through different combinations of set/unset HSTS flags? Thus giving you 2^n possible identities, where n is the number of unique domains you have?

    1. sabroni Silver badge

      Yes, but the example code just uses a load of different prefixes on the same domain. Then you just use script to access each subdomain in turn using http, and if the browser uses https instead then you know that bit is set.
