Intel SGX 'safe' room easily trashed by white-hat hacking marauders: Enclave malware demo'd

Security researchers have found that Intel's Software Guard Extensions (SGX) don't live up to their name. In fact, we're told, they can be used to hide pieces of malware that silently masquerade as normal applications. SGX is a set of processor instructions and features for creating a secure enclave in which code can be …

  1. Anonymous Coward

    Might be hard to accomplish on Linux

    TSX is disabled through microcode and SGX support is being dropped entirely.

  2. Charles 9

    Until someone finds a way to put it back in and turn it back on.

    I'm waiting for a formal, Turing-like proof of this supposition: "Anything that can operate outside the context of a Turing machine can usurp said machine."

    1. DavCrav

      "Anything that can operate outside the context of a Turing machine can usurp said machine."

      If the operation allows it to modify the instructions of the Turing machine, then obviously yes: just replace the instructions with a different set. If the operation allows it to modify the input tape, then the statement can hold in a strict sense: print your program a long way away from the input on the tape and the machine won't see it, so it won't be erased or run.

      Otherwise it's an obvious no for an arbitrary machine: the machine that does nothing at all is a perfectly good machine, and cannot be hacked.

      If you want to force the Turing machine to run arbitrary code, then we'll need a universal Turing machine, rather than just any Turing machine. Then if you modify the tape, you are of course modifying the program as it is read into the machine's memory, so again you can do whatever you want.

      But if you leave the program alone and just alter the input then no, you cannot force an arbitrary Turing machine to do your bidding.
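
      A minimal sketch of that last point, with the "program" held as a fixed transition table separate from the tape (the machine and its tape contents here are invented purely for illustration):

          /* A tiny Turing machine whose program is a fixed transition
             table, separate from the tape: altering only the input tape
             cannot change what the machine does. This one just flips 0s
             and 1s until it hits a blank, whatever you write on the tape. */
          #include <stdio.h>

          enum { RUN, HALT };

          int main(void)
          {
              char tape[] = "0110100_";          /* '_' is the blank symbol */
              int state = RUN, head = 0;

              while (state != HALT) {
                  switch (tape[head]) {          /* the fixed "transition table" */
                  case '0': tape[head] = '1'; head++; break;
                  case '1': tape[head] = '0'; head++; break;
                  default:  state = HALT;        /* blank: stop */
                  }
              }
              printf("final tape: %s\n", tape);  /* -> 1001011_ */
              return 0;
          }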

    2. Blazde Silver badge

      If the interface between any two security domains is well enough constrained, then it's secure. The central problem is that if you can gain even a few bits of flexibility over unintended control flow, it's highly likely you can leverage that to usurp the Turing machine in a universal way, and run whatever you like. All the ASLR/stack-canary/etc. technology complicates the task greatly but raises the theoretical bar hardly at all.

      Almost all Turing machines are universal, yet when we create a secure system we're trying to find a (usually very) complex machine which is *not* universal. It's an almost unimaginably difficult task.

  3. Pascal Monett Silver badge

    "performing anti-piracy decryption of protected Hollywood movies"

    Weird just how much that actually sounds like pirating them.

    In any case, I rather agree with Intel on this one: know whose code you're running. Don't run code from just anywhere, just like you don't click on a link in an email from someone you've never met before.

    1. MattUK

      Re: "performing anti-piracy decryption of protected Hollywood movies"

      I wish someone had told my old boss that before I went to work for myself. That idiot seemed to click on every link in email, on porn sites, and in any popup he could find. He was the only person in the company without malware protection, anti-virus, or internet blacklisting, and, surprisingly, he was the only person who continually had infections.

    2. DropBear
      WTF?

      Re: "performing anti-piracy decryption of protected Hollywood movies"

      If all code running on the machine is trusted, a protected enclave is pointless. As soon as we assume there is something on that machine that something else needs protection from, the whole premise of running only trusted code is falsified. Which is just as well, as in practice there is no such thing as 100% trustworthy code.

    3. Michael Wojcik Silver badge

      Re: "performing anti-piracy decryption of protected Hollywood movies"

      Sure, it's easy to satisfy a threat model by adopting criteria that can't be met in practice.

      I've yet to see a reasonable threat model under which SGX provides anything useful. That's the point of this research. Telling people to strive for some impossible level of perfect vigilance isn't a mitigation; it's dodging the issue.

  4. MJB7
    Big Brother

    Untrusted code

    OK, hands up. Who here trusts code from Sony (for example)?

    1. Fatman

      Re: Untrusted code

      <quote>OK, hands up. Who here trusts code from Sony (for example)?</quote>

      Not me for one.

      I played one of their rootkit-infected CDs on my personal development computer, and lost everything.

      Fortunately, I had a recent backup.

      I promised myself - never DO THAT[1] again.

      [1] Use a development system for entertainment purposes.

      1. Anonymous Coward

        Re: Untrusted code

        "I played one of their rootkit infected CD's on my personal development computer; and lost everything."

        How? It was a rootkit, not a format of the C: drive.

        1. Craig 2

          Re: Untrusted code

          "How, it was a rootkit, not a format of C: drive?"

          I'm not saying that people don't exaggerate in stories such as this, but that rootkit (like most) had several vulnerabilities and bugs. This had the potential to cause system crashes which in turn can cause data corruption and so on... VERY, VERY unlucky to lose everything, but not impossible.

        2. Anonymous Coward

          Re: Untrusted code

          "How, it was a rootkit, not a format of C: drive?"

          I'm probably confusing Sony's with another, but one of their rootkits did delete somebody's entire system because D: wasn't the CD-ROM but the system drive... or some shit like that. It's been years since I read this, but I believe it was in an article here (or on legacy Slashdot).

  5. jms222

    > TSX is disabled through microcode and SGX support is being dropped entirely.

    No, TSX was disabled on Haswell because the newly implemented feature occasionally screwed up, creating lock inconsistencies. (Disabled in this case meaning fall back to the older, slower, but safe behaviour.) But in principle it's a good idea. Other architectures have had something similar for a while, so Intel is playing catch-up again, though not as far behind as it was with NX: you could find writable, non-executable sections in other architectures decades ago.
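
    For the curious, the usual pattern is "transaction first, lock as fallback". A minimal sketch using the RTM intrinsics (assuming GCC or Clang with -mrtm, and a CPU on which TSX hasn't been fused off or disabled by microcode):

        /* Try the transactional fast path; fall back to a plain
           spinlock when the transaction aborts or isn't supported. */
        #include <immintrin.h>

        static volatile int fallback_lock;   /* 0 = free, 1 = held */
        static long counter;

        void increment(void)
        {
            unsigned status = _xbegin();              /* start a transaction */
            if (status == _XBEGIN_STARTED) {
                if (fallback_lock)                    /* put the lock in the read-set */
                    _xabort(0xff);                    /* and bail if someone holds it */
                counter++;                            /* atomic if the commit succeeds */
                _xend();
            } else {
                /* the older, slower, but safe behaviour */
                while (__atomic_exchange_n(&fallback_lock, 1, __ATOMIC_ACQUIRE))
                    ;
                counter++;
                __atomic_store_n(&fallback_lock, 0, __ATOMIC_RELEASE);
            }
        }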

  6. Will Godfrey Silver badge
    Unhappy

    Ironic

    It seems that adding protection code simply increases the attack surface. Maybe we need to return to ROM-based BIOS and call it a day.

    1. Charles 9

      Re: Ironic

      But what if the ROM has an exploit? Then you can't fix it.

  7. Anonymous Coward

    Hidden processing on a CPU?

    What could possibly go wrong...

    1. emmanuel goldstein

      Re: Hidden processing on a CPU?

      Word.

      1. DCFusor
        Joke

        Re: Hidden processing on a CPU?

        Reminds me of one from Scotty in "The Search for Spock".

        Scotty: The more they overthink the plumbing, the easier it is to stop up the drain.

        http://quotegeek.com/quotes-from-movies/star-trek-iii-the-search-for/6800/

      2. red floyd

        Re: Hidden processing on a CPU?

        I think you mean PowerPoint.

  8. cutterman

    The British (with a lot of help from the Poles and the French) and later the Americans cracked Enigma and most of the important Japanese diplomatic and military cyphers and codes. Later in WW2 they had the help of crude analog computers that sped up the process.

    They proceeded from the premise that there was human-readable, sensible information in those endless series of four- or six-letter groups. Their task was much facilitated by operator errors: sending the same message in different codes/ciphers, using the same code pages on subsequent days, repeated phrases like "Your Excellency", and so on. Given time and enough data, all codes/ciphers can be cracked - except for proper "one-time-pad" codes.

    But then, how random is random? I have dozens of ways of producing pseudorandom numbers (best to start with a hardware RNG and then subject its output to cycles of PRNG). A method of generating genuinely random numbers (AND conveying them securely to the recipient) is not easy to come by. Enforcing the correct use of those numbers is virtually impossible.
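
    To make the one-time-pad point concrete, a toy sketch in C (assuming a POSIX system, with /dev/urandom standing in for a true hardware RNG; a real pad's key must be genuinely random, as long as the message, never reused, and conveyed out of band):

        /* Toy one-time pad: XOR the message with a key of equal length.
           XORing the ciphertext with the same key again recovers it. */
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            const char msg[] = "Your Excellency";
            size_t n = strlen(msg);
            unsigned char key[sizeof msg], ct[sizeof msg];

            FILE *rng = fopen("/dev/urandom", "rb");
            if (!rng || fread(key, 1, n, rng) != n)
                return 1;
            fclose(rng);

            for (size_t i = 0; i < n; i++)
                ct[i] = msg[i] ^ key[i];        /* encrypt */

            for (size_t i = 0; i < n; i++)
                putchar(ct[i] ^ key[i]);        /* decrypt: same key, same XOR */
            putchar('\n');
            return 0;
        }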

    But whatever you do, there MUST be entropy in the message - given enough messages, enough knowledge of your adversary and the type of data likely to be communicated, and enough time (and speed increases daily), that entropy is theoretically discoverable.

    cutterman

  9. jbrickley

    I don't think that word means what you think it means

    Hey, Intel "Secure Enclave" - I don't think that word means what you think it means!

    Curious whether Apple's T2 Security Chip, which includes the same Secure Enclave used in all iOS devices, is vulnerable to this or not. The T2 is an ARM-based CPU that runs a custom BridgeOS that only Apple controls. Once data is written to the Secure Enclave it's inaccessible: you can only get a yea or nay response when sending a public-key or biometric-key challenge to the Secure Enclave.
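
    A toy model of that yea-or-nay interface, for illustration only (none of these names are Apple's actual API, and the keyed hash below merely stands in for real asymmetric crypto):

        /* The key lives on the "enclave" side and is never exported;
           callers hand in a challenge plus a response and get back
           only a boolean. */
        #include <stdbool.h>
        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        static const uint8_t secret_key[8] = /* provisioned once, no getter */
            {0x13, 0x37, 0xc0, 0xde, 0xca, 0xfe, 0xf0, 0x0d};

        static uint64_t toy_mac(const uint8_t *data, size_t len)
        {
            uint64_t h = 0xcbf29ce484222325ULL;       /* FNV-1a-style mix */
            for (size_t i = 0; i < sizeof secret_key; i++) {
                h ^= secret_key[i]; h *= 0x100000001b3ULL;
            }
            for (size_t i = 0; i < len; i++) {
                h ^= data[i]; h *= 0x100000001b3ULL;
            }
            return h;
        }

        uint64_t enclave_sign(const uint8_t *msg, size_t len)
        {
            return toy_mac(msg, len);                 /* computed inside */
        }

        bool enclave_verify(const uint8_t *msg, size_t len, uint64_t tag)
        {
            return toy_mac(msg, len) == tag;          /* yea or nay, nothing more */
        }

        int main(void)
        {
            const uint8_t challenge[] = "unlock request";
            uint64_t tag = enclave_sign(challenge, sizeof challenge);
            printf("%s\n", enclave_verify(challenge, sizeof challenge, tag)
                               ? "yea" : "nay");
            return 0;
        }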

  10. Doctor Syntax Silver badge

    "research which is based upon assumptions that are outside the threat model for Intel SGX."

    I think this is saying it doesn't conform to the assumptions Intel made. Be careful about assumptions that you make in designing or implementing something. Your assumptions will become the product's limitations. If you're not aware of them they may become its bugs.

  11. Justthefacts Silver badge

    Alternative security measures

    It seems that (again) more complex security measures increase the attack surface.

    Genuine question - why don't people implement other architectures that are easier to analyse for security? I'm thinking Harvard instead of von Neumann. It doesn't even have to be that different from a physical standpoint: program and data could share memory, with the separation enforced by an on-chip hardware engine. I doubt the instruction unit core even needs to change, just the memory addressing units.

    Or, why not at least separate the data and return-address stacks into small, separate on-chip RAMs? 2x256kB of RAM for the stacks could hardly break the bank nowadays, and might well be faster than the current approach, which is anyway a rather bastardised idea, putting the stack in main memory. It kills the ROP stuff and stack overflows. I could be wrong, but if the stack is larger than the L1 cache, I reckon you are both "holding it wrong" and killing performance.
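
    A software-only illustration of the separate return-address stack (a toy sketch relying on GCC/Clang's __builtin_return_address; real designs put this in dedicated hardware, as Intel's CET shadow stack does):

        /* Keep return addresses in a separate array and check them on
           the way out, so an overflow of a buffer on the data stack
           cannot silently redirect control flow. */
        #include <assert.h>
        #include <stdio.h>

        #define SHADOW_DEPTH 256
        static void *shadow[SHADOW_DEPTH];
        static int shadow_top;

        static void shadow_push(void *ret)
        {
            assert(shadow_top < SHADOW_DEPTH);
            shadow[shadow_top++] = ret;
        }

        static void shadow_check(void *ret)
        {
            assert(shadow_top > 0 && shadow[--shadow_top] == ret);
        }

        void risky_function(void)
        {
            shadow_push(__builtin_return_address(0));   /* record on entry */
            char buf[16];
            (void)buf;  /* ... code that might overflow buf goes here ... */
            shadow_check(__builtin_return_address(0));  /* verify on exit */
        }

        int main(void)
        {
            risky_function();
            puts("return address intact");
            return 0;
        }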

    There may be problems wot I haven’t thought of.

    1. Justthefacts Silver badge

      Re: Alternative security measures

      I should probably clarify, to show at least “incomplete” ignorance.

      I do know that Linux today allows 8MB stack per thread, which blows way past 256k total stack....

      My point is - OS design encodes a set of decisions based on a history of CPU microarchitecture, particularly Intel and AMD. This looks "cost-free" on current architectures with the stack mapped into main memory, but it is turning out to be very costly from a security perspective.

      Are such large stack sizes really needed, or are we just encouraging greedy developer practices? Being controversial (and I know there are people who *love* recursion): if I discovered my team were writing something that required even 1M of stack, I would be really worried. If 1M, what are the edge cases where it needs 2M? Or 20M? It seems to me like an accident waiting to happen without fairly severe numerical analysis, especially when an unrelated team adds an unrelated feature in object-oriented fashion two years later.

      Suppose instead we prioritised security and accepted hardware limitations on stack size. The trade-off: on the one hand, a majority of metal-level security issues are prevented, and maintenance becomes much cheaper downstream. On the other hand, Linux and Windows have to change their thread stack sizes, and a bunch of legacy applications containing bloated stack assumptions need to be re-spun. Would there be a net benefit for the industry in theory, even if it is difficult to get there from here?
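
      For what it's worth, the per-thread limit is already tunable today; a minimal sketch capping one pthread at the 256kB discussed above (assuming Linux/glibc; build with -pthread):

          /* Give a worker thread a small, fixed stack instead of the
             8MB glibc default. */
          #include <pthread.h>
          #include <stdio.h>

          static void *worker(void *arg)
          {
              (void)arg;
              puts("running with a 256kB stack");
              return NULL;
          }

          int main(void)
          {
              pthread_attr_t attr;
              pthread_attr_init(&attr);
              pthread_attr_setstacksize(&attr, 256 * 1024);  /* >= PTHREAD_STACK_MIN */

              pthread_t t;
              pthread_create(&t, &attr, worker, NULL);
              pthread_join(t, NULL);
              pthread_attr_destroy(&attr);
              return 0;
          }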

      1. Anonymous Coward

        Re: Alternative security measures

        1MB doesn't sound like much until you start routinely running into programs that use more than 1GB of memory at a time; some are even going into 64-bit-only territory by being more than 4GB in size (physically overflowing the 32-bit limit). So what do you do? Have it there and not need it...or need it and not have it to hand?

        1. Justthefacts Silver badge

          Re: Alternative security measures

          I get your point that more complex programs likely have larger stack requirements; I've never been involved in something that size, so I haven't seen the problems.

          One answer is to separate the stack into return addresses and data, and keep the return addresses only in dedicated on-chip RAM. "Surely" that can't overrun.

          I still think that our general problem is that we have sized our compute infrastructure on “must be able to do everything” rather than “securely compute typical things, and refactor our previously unconstrained solutions”.

          I don't have all the answers, not surprisingly... like how to enforce code/data separation for interpreted code like Java.

  12. John Savard

    Advice That's Hard to Follow

    Not running code from untrusted sources. Sounds like good advice.

    But when I install a program on Windows - a DVD player, say - I don't get notified that the program needs to use the SGX feature and asked whether I would please give it permission to do so.

    So how do I install programs that I trust to use the computer conventionally, but which have no need to access this feature?

  13. DrM
    Boffin

    Boot?

    "It's a bit like carjacking someone using the tire iron in the vehicle's trunk (or boot for our UK readers)."

    You have a boot in your trunk in the UK?

    1. Brenda McViking
      Holmes

      Re: Boot?

      And I was about to ask whether you had a trunk in your boot, or whether it was a pair of trunks, or the trunk of an elephant or a tree. The first makes rather good sense, albeit it's rather old-fashioned to store things in wooden boxes nowadays; the second is fine if you're a man going swimming; and PETA are going to want to know if it's one of the last two, so that they can hang, draw, and quarter you before feeding you to a rabid mob of ravenous vegans.

    2. Anonymous Coward

      Re: Boot?

      DrM.

      "You have a boot in your trunk in the UK?"

      No.

      But it is possible to have a 'Trunk' in the 'Boot'* !!! :)

      *The Trunk of a car is called the 'Boot' of the car in the UK.

  14. Michael Wojcik Silver badge

    age-old?

    "the age-old technique of return-oriented programming"

    Er... if we allow the old return-to-libc exploits, which were the theoretical ancestors of modern ROP, it dates back to, what, 1997? That's the date of Solar Designer's BUGTRAQ post on the topic. Previous well-known stack-overflow attacks, such as the Morris Worm and Aleph One's examples from "Smashing the Stack", all used injected code, as far as I remember.

    Public research on modern ROP started to appear around 2005. It's not even old enough to drive yet.

    Maybe that's old by skiddie standards, but surely the Reg has a longer memory. Plenty of the commentariat do.

  15. NonSSL-Login

    New chipset security features always mean deeper embedded exploits

    If you look at every new security-based addition to Intel chipsets as the gift of another backdoor for the NSA, it all makes sense. Every time.

    Hollywood getting anti-piracy stuff built into chipsets angers me. The things I want to do with my HDMI output cable but cannot, because the MPAA's Sony was involved in the HDMI standard and inflicted DRM through HDCP, are another of these annoyances. The sooner we eject the media cartels' influence from hardware, the better.

  16. desert-dog

    Trusted code

    I want to know how I can have trusted code in the cloud.

    1. Charles 9

      Re: Trusted code

      How about this? How can you trust whoever makes the code trusted? And can you even trust yourself to get things right?

  17. R 15
    FAIL

    Windows XP - "The best Windows ever!"

    Windows Vista - "The best Windows ever!"

    Windows 8 - "The best Windows ever!"

    Windows 10 - "The best Windows ever!"

    but... we're still downloading multi-GB updates to patch every release they foist on us.
