How to (slowly) steal secrets over the network from chip security holes: NetSpectre summoned

Computer security researchers have devised a way to exploit the speculative-execution design flaws in modern processor chips over a network connection – a possibility that sounds rather more serious but may be something less than that. Until now, Spectre attacks have required malicious code to be running on a vulnerable …

  1. NoneSuch Silver badge
    FAIL

    Yup

    No nation state would ever take advantage of a flaw that took days. Ultra slow speed to lose the tiny packets in the mass of other traffic. Makes no sense at all.

    1. Nate Amsden

      Re: Yup

      If a nation state is after you, this is the least of your worries.

    2. DrBed

      Re: Yup

      "No nation state would ever take advantage of a flaw that took days. Ultra slow speed to lose the tiny packets in the mass of other traffic. Makes no sense at all."

      The same thing could have been said about Stuxnet - before awareness of the damage at the Natanz nuclear facility (Iran, 2010; Stuxnet had been in development since at least 2005).

      https://en.wikipedia.org/wiki/Zero_Days

      e.g. the NSA, with a little help from some neural-network AI, could work around that "slow speed/tiny packets" limitation.

      from Ars Technica: "These data rates are far too slow to extract any significant amount of data; even the fastest side channel (AVX2 over the local network) would take about 15 years to read 1MB of data."

      Indeed, but a single recovered password - simple, yet crucial - is sufficient for penetration, at least in theory; e.g. 1KB instead of 1MB, a ratio of 1:1000. A well-trained AI could reconstruct it from bits of slurped garbage far faster and more successfully than one might imagine.

      "The AVX2 side channel is much faster—one byte every eight minutes—but still very slow."

      hmm... 1B @ 8min == 1,000B @ 8,000min ==> 1KB in roughly 5d-13h-20min ==> worth trying for the right thing, imho.
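
      A quick back-of-the-envelope check in C, using the 1 byte / 8 minutes rate from the Ars quote above (the 1KB target is just an assumed example of a small-but-crucial secret):

      /* Exfiltration time at the quoted AVX2 rate. The rate comes from the
       * Ars quote above; the 1 KB target is an assumed example. */
      #include <stdio.h>

      int main(void) {
          const double mins_per_byte = 8.0;                 /* ~1 byte every 8 minutes */
          const double target_bytes  = 1000.0;              /* assumed 1 KB secret */
          double total_min = mins_per_byte * target_bytes;  /* 8,000 minutes */

          int days  = (int)(total_min / (24 * 60));
          int hours = (int)(total_min / 60) % 24;
          int mins  = (int)total_min % 60;

          printf("1 KB at 1 B per 8 min: %dd %dh %dm\n", days, hours, mins);
          return 0;
      }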

    3. Pascal Monett Silver badge
      Big Brother

      Re: Yup

      And that is why you are not the Director of the NSA.

      1. Anonymous Coward
        Anonymous Coward

        Re: Yup

        Are you really unable to pick up on the sarcasm in their post?

        1. Michael Wojcik Silver badge

          Re: Yup

          Are you really unable to pick up on the sarcasm in their post?

          Poe's Law. Sarcasm needs to be a hell of a lot clearer than in the original post, if the original was meant sarcastically at all - it's not at all clear to me that it was.

          Write for your audience if you want to be understood.

    4. druck Silver badge

      Re: Yup

      It makes lots of sense. Attacks against large corporations can go on for months without detection, which is plenty of time to exfiltrate data at even a few bits per hour.

  2. Robert Helpmann?? Silver badge
    Paris Hilton

    I don't think that word means what you think it means

    I am quite sure I don't understand all of this, but perhaps someone could fill me in. A Spectre gadget is not particularly well defined in the article, or at least I was a bit thrown off; it isn't one of the gadgets in the "billions of computers, gadgets, and gizmos at some degree of risk". Does it amount to any code in any remote API that can be abused to exfiltrate data using this method? If so, I would think that identifying them might be accomplished by defining normal, expected calls on each API and monitoring for any that fall outside that set - essentially what most whitelisting apps do during tuning. Easier said than done, I am sure, but perhaps a way to catch things that code review might miss.

    1. really_adf

      Re: I don't think that word means what you think it means

      Does [a Spectre Gadget] amount to any code in any remote API that can be abused to exfiltrate data using this method?

      Yes, that is my understanding.

      If so, I would think that identifying them might be accomplished by defining normal, expected calls on each API and monitoring for any that fall outside that set, ...

      Unfortunately, that monitoring may itself be a Spectre Gadget.

    2. Michael Wojcik Silver badge

      Re: I don't think that word means what you think it means

      Technically, a gadget is a vulnerable code pattern. It's a common term of art in malware research; offhand, my impression is that it was popularized by discussions of ROP, but I may be misremembering.
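
      For concreteness, the best-known shape is the bounds-check-bypass gadget from the original Spectre (v1) paper; a minimal sketch (illustrative names, not NetSpectre's exact gadget) looks something like this:

      /* Classic Spectre v1 leak gadget shape (after Kocher et al.); names are
       * illustrative. NetSpectre's gadgets follow the same idea but signal over
       * the network rather than via a local cache probe. */
      #include <stddef.h>
      #include <stdint.h>

      extern uint8_t array1[];            /* attacker-influenced array */
      extern size_t  array1_size;
      extern uint8_t array2[256 * 4096];  /* probe array acting as the covert channel */

      void victim_function(size_t x) {
          if (x < array1_size) {                      /* branch the CPU mispredicts */
              /* Executed speculatively even when x is out of bounds: the secret
               * byte array1[x] selects which part of array2 gets touched, leaving
               * a measurable microarchitectural footprint. */
              volatile uint8_t tmp = array2[array1[x] * 4096];
              (void)tmp;
          }
      }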

      The paper discusses some of the difficulties in identifying known gadgets, and much of the Spectre research, and other research into microarchitecture side channels, has focused on identifying new types of gadgets. It's tricky to detect unknown patterns.

      That said, it's possible application behavior monitoring and analysis could be a useful mitigation for some microarchitecture side-channel attacks. It wouldn't catch all of them, but it could contribute to defense in depth. Essentially it's what most contemporary anti-malware products do already, just for a different class of suspect patterns.

  3. Bronek Kozicki Silver badge

    There is also

    ... an article on the subject on Ars Technica.

  4. Simon Ward

    Of course Intel isn't worried ..

    After all, until the exploit has a flashy logo and associated website it doesn't really exist - coming up with something 'cooler' sounding than NetSpectre may or may not happen.

    That certainly seems to be standard operating procedure in the current shitegeist.

  5. leadyrob

    [In]Spectre Gadget

    Was Dr Claw a co-author of the paper ?

    1. Michael Wojcik Silver badge

      Re: [In]Spectre Gadget

      Mitigation: Penny

  6. Anonymous Coward
    Anonymous Coward

    Are we there yet

    So are there new chips available yet from AMD/Intel/ARM that are 100% Spectre-free - oh, and Meltdown-free for Intel chips? Basically, is "Spectre free" part of their promotional material?

    1. Claptrap314 Silver badge

      Re: Are we there yet

      "We will get there when we get there!"

      As I said before, I expect two years or so remain before we see a commercial consumer product, which will be a significant step backward from a price/performance/power standpoint.

    2. Michael Wojcik Silver badge

      Re: Are we there yet

      Sure. And they'll come with a free unicorn.

      Expect CPUs with no microarchitecture side channels around the same time you see non-trivial consumer software with no bugs.

  7. elvisimprsntr

    It's a conspiracy, folks! Planned obsolescence!

    We already know CPUs are reaching the end of Moore's Law. That will ultimately lead to a decline in sales when all you get is incremental performance increases - it has likely already begun, if you believe some of the YoY sales figures. Intel (and others) know these issues exist and let them trickle out to guarantee sufficient press coverage to scare the $hit out of everyone. Then future Intel comes to the rescue and saves humanity by announcing a new line of hardened processors, future OS distributions require those new hardened processors, profits soar, and everyone wins! Well, except for the consumers and businesses that are forced to upgrade all their computers, servers, networking gear, and anything else with a processor.

  8. Simon Blakely

    Given that this is a side-channel attack on network response times via a Spectre gadget, the logical defense is to make all network application responses constant-time: pick the longest possible response time, and force all network responses to wait that long.
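
    A rough sketch of that padding idea in C (the 2ms budget and the handler are made up for illustration; a real deployment would size the budget from the slowest legitimate response):

    /* Constant-time response padding: measure how long the real handler took,
     * then sleep for the remainder of a fixed worst-case budget, so every reply
     * leaves after (roughly) the same wall-clock delay. */
    #include <time.h>

    #define RESPONSE_BUDGET_NS 2000000L             /* 2 ms: must exceed the slowest real response */

    static long elapsed_ns(struct timespec a, struct timespec b) {
        return (b.tv_sec - a.tv_sec) * 1000000000L + (b.tv_nsec - a.tv_nsec);
    }

    void handle_request_padded(void (*handler)(void)) {
        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);

        handler();                                  /* the real, variable-time work */

        clock_gettime(CLOCK_MONOTONIC, &end);
        long spent = elapsed_ns(start, end);
        if (spent < RESPONSE_BUDGET_NS) {
            struct timespec pad = { 0, RESPONSE_BUDGET_NS - spent };
            nanosleep(&pad, NULL);                  /* pad the reply out to the full budget */
        }
        /* ...then send the response... */
    }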

    Or just add some random jitter (possibly on the order of half the difference between a speculative and a non-speculative lookup) into the response time at the network packet driver - you will increase the average network response time by some small figure, but you destroy the network side channel.

    Any resolution to Spectre-class side channels means impacting performance - the only question is the cost of doing so, and whether that cost is acceptable.

  9. ATeal

    RE: Add random jitter - IT WON'T WORK

    That won't work. It'll slow it down, but it won't work.

    I have so much measure theory in here that I shit thick books on various types and call it "statistical physics", so I'm going to dumb this down - internet pedants, have fun!

    When you take basically any "random variables" (e.g. uniform, whatever) and add them together, it's very, very difficult NOT to get a normal distribution! You then multiply this by 1/n - which basically isn't an RV, it's a number (you're considering this for a given n) - and now we're talking about an average.

    That's basically why this attack is so slow anyway: telling apart two values that are very close together (compared to their variance) requires a large sample.

    Consider flipping a coin: when do you call it biased? 20/20 heads - sure, biased (ALMOST certainly; you could be REALLY lucky). But 16/20? Nah, that's not. Well, what if you wanted to detect a coin with a 50.0001% chance of heads? You'd need A LOT of samples.

    Same principle.

    The fixes I've heard about involve the information being lost in the truncation of a time value. That is, say you're trying to detect (for the sake of example) a 1ms difference (because I hate writing 1us) and you have a clock with 100ms accuracy: you need to somehow do your thing around 199ms and hope that you can tell, say, 199ms apart from 200ms. If the clock only does that, it raises the barrier to waiting ~199ms per attempt, but you can still find a boundary - e.g. time a for loop doing nothing: you find that for n=whatever it takes 100ms by this clock, and for n=whatever+1 it takes 200ms, so you're straddling the boundary. (This is further fuzzed by real timing differences - any microbenchers in the house?!?)
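
    A rough sketch of that boundary-straddling trick in C (the 100ms "coarse clock" is faked by truncating a real one; step sizes are made up, and it deliberately runs slowly):

    /* Simulate a clock that only reports multiples of 100 ms, then grow an empty
     * loop until the reported elapsed time flips from 0 to 100 ms. At that point
     * the loop's true duration is pinned near the tick boundary, far more
     * precisely than the clock's nominal resolution. */
    #include <stdio.h>
    #include <time.h>

    #define TICK_MS 100L

    static long coarse_now_ms(void) {               /* real clock truncated to 100 ms ticks */
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        long ms = ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
        return (ms / TICK_MS) * TICK_MS;
    }

    static long coarse_loop_time_ms(long n) {
        long edge = coarse_now_ms();
        while (coarse_now_ms() == edge) { }         /* align the start to a tick edge */
        long start = coarse_now_ms();
        for (volatile long i = 0; i < n; i++) { }   /* the "for loop doing nothing" being timed */
        return coarse_now_ms() - start;
    }

    int main(void) {
        for (long n = 1000000; n < 1000000000L; n += 1000000) {
            if (coarse_loop_time_ms(n) >= TICK_MS) {
                printf("loop with n=%ld just crossed one %ldms tick\n", n, TICK_MS);
                break;                              /* previous n took <100ms, this one >=100ms */
            }
        }
        return 0;
    }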

    (Hopefully you can now see why randomly "jittering" the clock isn't an absolute fix either: e.g. if 50.0001% of the time you get a 200ms reading and 49.9999% of the time you get a 300ms reading, then with enough samples that can be detected to arbitrary likelihood. That is, you can pick a number, say 0.001%, and gather enough samples to be sure, to that probability, that either you're in the super-lucky 0.001% OR it's not random and there really is a distinction here.)

    You get the idea.

    It's an extremely tricky issue.

    EDIT: I use "almost certainly" in the everyday English sense - inb4. Also, if you'd rather round your clock than truncate it, find the odd multiples of 50ms to straddle instead!
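
    To put toy numbers on the jitter point: a quick C simulation (all figures invented - a 10us true difference buried in +/-5ms of uniform jitter) shows the averages pulling apart again once the sample count is large enough:

    /* Toy Monte Carlo: two operations whose true timings differ by only 0.01 ms
     * (10 us), each hidden under +/-5 ms of uniform random jitter. Averaging
     * enough samples still recovers the difference; the jitter only raises the
     * sample count an attacker needs. */
    #include <stdio.h>
    #include <stdlib.h>

    static double jitter_ms(void) {                 /* uniform jitter in [-5, +5] ms */
        return ((double)rand() / RAND_MAX) * 10.0 - 5.0;
    }

    int main(void) {
        const double fast_ms = 100.00;              /* e.g. secret bit = 0 */
        const double slow_ms = 100.01;              /* e.g. secret bit = 1, 10 us slower */
        const long sample_counts[] = { 1000, 100000, 10000000 };

        srand(1);
        for (size_t i = 0; i < sizeof sample_counts / sizeof *sample_counts; i++) {
            long n = sample_counts[i];
            double sum_fast = 0.0, sum_slow = 0.0;
            for (long k = 0; k < n; k++) {
                sum_fast += fast_ms + jitter_ms();
                sum_slow += slow_ms + jitter_ms();
            }
            /* The true difference is 0.01 ms; watch the estimate settle as n grows. */
            printf("n=%9ld  estimated difference = %+.4f ms\n",
                   n, sum_slow / n - sum_fast / n);
        }
        return 0;
    }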

    1. Claptrap314 Silver badge

      Re: RE: Add random jitter - IT WON'T WORK

      This. I get so sick of "just limit clock resolution" or "just add jitter" "solutions" piping up. I actually despaired of trying to explain it here. Thanks.
