Swots explain how to swat CPU SNITCHES

A variety of sneaky side-channel attacks have been demonstrated over the years: from measuring the amount of processor power devoted to encryption to using an antenna to pick up stray electromagnetic emissions from computers. Myth-of-recency headlines aside, learning what a computer is doing by listening to its electronics has …

  1. James Loughner
    Big Brother

    Tin Foil is tha answer to the question

    Silly people. I thought everyone knew tin foil blocks the spying

    1. Anonymous Coward
      Anonymous Coward

      Re: Tin Foil is tha answer to the question

      This is why certain Zenith PCs in the 1980s had a metal box, inside the plastic case, covering all of the components and held in place with more screws than you'd want to remove and replace by hand.

      1. Peter Gathercole Silver badge

        Re: Tin Foil is tha answer to the question @theodore

        No, that was probably to comply with the FCC emissions regulations for consumer devices in the US, which were a real problem to the early home computer manufacturers.

        Different manufacturers came up with different solutions. Some made their computer's case out of metal. Some put full metal enclosures around the electronics inside a plastic case, and others used conductive paint sprayed onto, or metal foil bonded to, the inside of the plastic case.

        I believe this is the main reason many UK manufacturers had difficulty selling their systems in the US: our emission regulations were much less strict.

        1. Charles 9

          Re: Tin Foil is tha answer to the question @theodore

          I recall that bit of regulation. Under FCC rules, a device cannot emit EM radiation such that it interferes with another device, nor can it reflect EM energy coming from outside (it must absorb or shunt the energy, to its detriment if need be; it's part of the same rule). Thing is, while metal shields were great for EM protection (both blocking internal radiation and shunting external radiation), they also trapped heat, another Bad Thing for electronics. IINM, the Commodore 128 suffered heat issues due to its EM shield.

  2. Anonymous Coward
    Anonymous Coward

    It sounds to me a lot like the problem with network sniffing. You run into a sliding scale of anonymity vs. efficiency. IOW, the more efficient you make a network, the easier it is to use those efficiencies to sniff you out, and vice versa (think straight-up Internet vs. something like TOR or Freenet). Same thing here, perhaps even more so, since CPUs, particularly in mobile and other energy-sensitive applications, have an actual demand for efficiency: either to improve battery life or to reduce costs associated with generated heat. EM sniffing now introduces a pressure at the other end of the scale, producing a tradeoff and forcing people to ask which is more important, since optimizing for one necessarily makes the other harder.

  3. Mike 137 Silver badge

    "...these results confirm what programmers should already know..."

    I wish I knew where these knowledgeable programmers are hiding. Most "programmers" I've met can't even create bug-free code using "flat pack assembly" dev tools.

  4. Anonymous Coward
    Anonymous Coward

    In terms of doing anything useful

    In terms of actually doing anything even vaguely useful, such as the mentioned goal of identifying the running application (never mind the data on which it is operating)...

    How much confusion can be caused by running two programs at once (as can sometimes happen) under a multitasking OS on a multiprocessor CPU?

    Personally I'm not going to waste money on tin foil to protect against this particular attack vector.

    1. Paul Crawford Silver badge

      Re: In terms of doing anything useful

      I was going to ask the same - just how useful is this in the real world?

      I can see it matters if you can get close enough to a very high value system to record the EM signatures and (presumably) have it run stuff you know to help break the stuff you don't, but for 99.999% of computer users will it matter?

      1. Anonymous Coward
        Anonymous Coward

        Re: In terms of doing anything useful

        "just how useful is this in the real world?"

        A round figure.

        The paper apparently references three different x86-derived processors. How much do these folk think the EM signature of a system depends on the processor, and how much on the rest? E.g. might there be other components in the system that influence the makeup (the frequency spectrum) of the RF emissions at any point in time? Might the motherboard layout affect the RF emissions in a major way? Might there also be PC components or other factors (such as disk, network, or keyboard IO) that affect the timing of those emissions over any given sequence of code being executed?

        Get a tablet, wrap it in tin foil, and stick a TEMPEST badge on it. Use USB sticks (ideally, encrypted ones) to get data in and out. That might be interesting.

        Or ignore the whole reported scenario as almost bizarrely useless (even if it is on paper technically feasible).

        1. Anonymous Coward
          Anonymous Coward

          Re: In terms of doing anything useful

          And TEMPEST-certified equipment has been around for at least three decades. [I had to repair the beasts, which was no fun at all, what with the two-man rule....] I just can't see the utility of the attack here, and I do know the crypto-theory side as well. Even with an enormous budget, TAO (the NSA's Tailored Access Operations) would have a far easier AND cheaper method of just bugging the damn gear than going all out on the sensing side. True, defense is always cheaper than attack, but this attack vector just doesn't make engineering or economic sense.

      2. Peter Gathercole Silver badge

        Re: In terms of doing anything useful

        It strikes me that it is not feasible to do anything reasonable in real-time.

        Chances are the amount of processing to identify an instruction from this information would require a processor much faster than the one being analysed. And even if you know the instruction, you don't know the data that it is operating on.

        I suppose that if you could learn the sequence of instructions used to encrypt the data, you might, in time and given enough examples of the calculation being performed, be able to reverse engineer it; but as most cryptographic algorithms are public, the only thing I think you could work out is which method is being used.

        So you could hack the region coding of a DVD or Blu-ray player like this, but that is nothing like being able to see everything a computer is doing by its emissions.

    2. Anonymous Coward
      Anonymous Coward

      Re: In terms of doing anything useful

      Correction to my earlier post

      "this particular attack vector."

      should read

      " this particular alleged attack vector."

    3. Cuddles

      Re: In terms of doing anything useful

      I think at the moment the goal is not so much to actually do anything useful, but rather to find out if it might be possible to do anything useful. And especially to simply bring the matter to the attention of those who might care - essentially telling security types that there is information being transmitted via a mechanism that is not taken into account by any hardware manufacturer or programmer, and maybe they should have a think about it just in case. Maybe it will turn out to be completely impractical to see anything meaningful, but there are pretty serious consequences if it can be made to work.

    4. Michael Wojcik Silver badge

      Re: In terms of doing anything useful

      Yup. That's what people always say about side-channel vulnerabilities. Then feasible attacks are demonstrated and the people who asked whether the vulnerability was real slink away.

      We saw that happen in the mid-1990s with Kocher's timing attacks, and then again and again with other side channels.

      Now, it's true that noisy, low-entropy channels are less generally useful than great big wide ones. Attacks through them carry a greater work factor, which is the very definition of increased security (something precious few people seem to understand). But historically even noisy, low-entropy channels have been useful for specific attacks - for example, coaxing a target into probabilistically leaking sensitive bits, of keys or pseudorandom state or the like, under particular conditions. You then use that information to narrow down the search space for brute-forcing the target. Whitening helps, but a multitude of side channels makes it difficult to whiten everything.
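The "narrow down the search space" arithmetic is easy to make concrete. A minimal sketch (names and numbers are illustrative, not from the article): if a side channel reliably leaks k bits of an n-bit key, the brute-force space shrinks by a factor of 2^k.

```python
# Hypothetical illustration: how a side channel that leaks a few key
# bits shrinks a brute-force search. Assumes the attacker recovers
# `leaked_bits` of an n-bit key with certainty; the remaining search
# space is 2**(n - leaked_bits).

def brute_force_space(key_bits: int, leaked_bits: int) -> int:
    """Remaining candidate keys after `leaked_bits` are recovered."""
    return 2 ** (key_bits - leaked_bits)

full = brute_force_space(128, 0)       # 2**128 candidates
partial = brute_force_space(128, 24)   # 2**104 candidates
print(full // partial)                 # prints 16777216, i.e. a 2**24x reduction
```

Even a leak that only biases certain bits (rather than revealing them outright) pays off the same way, just with a smaller effective k.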

  5. HwBoffin

    3 days ago saw an application of this attack

    In a security demo, we were able to sniff the RSA private key from an ARM processor just by sending cleartext messages and recording the generated spectrum with an EMI probe connected to a PicoScope.

    Sending 200 messages and recording the 200 traces (fs = 100 MHz, so quite low) let us feed the data into a MATLAB script to correlate the signals and obtain the first 24 bits of the key in less than 5 minutes on a mainstream laptop.

    It was quite impressive.

    Imagine what you could do with a 1 GHz sampling DSO and something more powerful to crunch the data: obtain the whole key in a reasonable amount of time.

    Albert
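The correlation step Albert describes can be sketched roughly as follows. This is a generic, simplified correlation-analysis sketch in Python/NumPy rather than his actual MATLAB script; `traces` and `hypotheses` are stand-in names, and the leakage model (e.g. Hamming weight of an intermediate value) is assumed, not taken from the demo.

```python
import numpy as np

# Simplified correlation analysis. `traces` is a
# (num_messages, num_samples) array of recorded EM samples;
# `hypotheses` is a (num_messages, num_guesses) array of predicted
# leakage for each candidate value of a small chunk of the key.

def best_key_guess(traces: np.ndarray, hypotheses: np.ndarray) -> int:
    # Correlate every key hypothesis against every sample point in time;
    # the correct guess shows the strongest correlation peak somewhere.
    t = traces - traces.mean(axis=0)
    h = hypotheses - hypotheses.mean(axis=0)
    # corr[g, s] = Pearson correlation of guess g with sample point s
    num = h.T @ t
    denom = np.outer(np.linalg.norm(h, axis=0), np.linalg.norm(t, axis=0))
    corr = num / denom
    return int(np.abs(corr).max(axis=1).argmax())
```

Recovering a key chunk by chunk this way is why even a modest number of traces (200 in the demo above) can be enough: each chunk is a small, independent search.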

    1. Anonymous Coward
      Anonymous Coward

      Re: 3 days ago saw an application of this attack

      How fast was the target CPU running? Can this be tested with a more modern algorithm like AES?

      PS. I can see where this is going. This could be a way to find the private key in a "black box" crypto unit where the private key never leaves the unit: a job no amount of bugging would likely accomplish.

      1. Michael Wojcik Silver badge

        Re: 3 days ago saw an application of this attack

        "This could be a way to find the private key in a "black box" crypto unit where the private key never leaves the unit"

        Yes, this class of attack has been used against smartcards and the like. Such black-box devices have also been subject to active attacks like the various radiation attacks.

  6. Claptrap314 Silver badge

    Garbage, useless garbage, science reporting--wow.

    I spent a decade doing first x86, then PPC, validation. While I did nothing TEMPEST-related, I was close enough to the conversation to make these points:

    1) imul and idiv are VERY different beasts. idiv is one of the slowest instructions on a CPU. While imul is often also microcoded, we're talking about maybe a dozen instructions--and it is often not microcoded at all. Show me the difference between a shift and an add, and I'll concede that there is something new here.

    2) Talking about cache architecture without mentioning if the L1 is split between executable pages and data pages is senseless. A split cache with 4 ways on each side is a completely different beast than a unified 8-way. Moreover, at least a decade ago, some chips came with partially-lockable caches. So for instance, if you were to do a fast CRC, you would load the lookup vectors and lock them, then let the data stream through the rest of the cache. Note that even without formal locking, the chance of a cache line holding the lookup data being flushed is very, very low.

    3) A much bigger issue in processor design than cache architecture for this problem is the number of threads of execution. A dual-core beast has two physically separated processors which can (in theory) be individually monitored. A dual-threaded core, on the other hand, would be almost impossible to disambiguate, as there is literally a single bit differentiating the two. Additionally, microcoded instructions (such as idiv) are usually handled very differently, as the micro-ops can freely flow around each other.

    4) While the article does imply that the data is interesting, the data is generally everything. Until one can track down the entire instruction stream, information about which instructions execute is of limited value (and just try to differentiate between an or and an and electronically). What really matters is the data. But what data?

    5) As other commenters have pointed out, there are a number of ways someone attempting to get at the data being processed might proceed. But there are some steps, many quite easy, to reduce the exposure to this sort of attack. I mentioned cache locking--which can actually speed things up. I also mentioned multi-threaded processors--which definitely speed things up. There are data-lossless processors (slow, yes). There is branchless coding, in which values down both branches are computed, then one is masked with 0, the other with -1, and the results or'ed. There is spurious coding, in which functional nops are inserted which activate various systems. And I consider myself to know nothing of the art.

    So, yes, if you ever listened to your TRS-80 on the radio, you know that it is possible to extract information about what a processor is doing from its emissions. But extracting actionable information from a processor designed and used to mask such data is a different matter entirely. It is far from clear from this story that this research actually advanced the art.
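The branchless-select idiom described in point 5 is easy to sketch: compute both candidate results, build an all-ones or all-zero mask from the condition, and combine with AND/OR so control flow never depends on the secret. A minimal model in Python (real implementations work on fixed-width machine words in C or assembly; Python's arbitrary-precision ints merely make the idiom visible):

```python
# Branchless (constant-time) select: no data-dependent branch is taken,
# so the choice leaves no branch-predictor or instruction-stream trace.

def constant_time_select(cond: int, a: int, b: int) -> int:
    """Return a if cond == 1, b if cond == 0, without branching."""
    mask = -cond                    # 1 -> ...1111 (all ones), 0 -> all zeros
    return (a & mask) | (b & ~mask)

print(constant_time_select(1, 0xAA, 0x55))  # prints 170 (0xAA)
print(constant_time_select(0, 0xAA, 0x55))  # prints 85 (0x55)
```

The cost is computing both sides every time, which is exactly the "values down both branches are computed" overhead the comment mentions.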

    1. Michael Wojcik Silver badge

      Decades of security research, including numerous successful side-channel attacks, say you don't know what you're talking about. But thanks for playing.

  7. James Loughner
    Boffin

    Ha

    I have an Altair for which I had a neat little program to make a nearby radio play Daisy (look it up in Dr. Dobb's Journal: Running Light Without Overbyte). Had to toggle it in via the front panel. Ha, you youngsters don't know how easy you have it.

  8. Michael Wojcik Silver badge

    At least a decade

    Yes, security researchers have been looking at side-channel attacks for at least a decade, for very large values of "at least".

    Paul Kocher's successful demonstration of timing attacks was 1996.

    Wim van Eck's proof-of-concept for EMI interception of a display signal was 1985.

    National intelligence services were doing acoustical attacks on relay-based encryption machines in the mid-1950s.

    So "at least six decades" would be a fair and more precise statement.

    But hey - don't let me stop any of the Reg's resident genius commentators from explaining why this latest bit of research is pie-in-the-sky nonsense with no practical application.

  9. PARC

    http://forums.theregister.co.uk/forum/1/2015/01/23/we_know_computers_leak_signals_to_attackers_but_how_much/
