Might be hard to accomplish on Linux: TSX is disabled through microcode, and SGX support is being dropped entirely.
Security researchers have found that Intel's Software Guard Extensions (SGX) don't live up to their name. In fact, we're told, they can be used to hide pieces of malware that silently masquerade as normal applications. SGX is a set of processor instructions and features for creating a secure enclave in which code can be …
"Anything that can operate outside the context of a Turing machine can usurp said machine."
If the operation allows it to modify the instructions of the Turing machine then obviously, just by replacing the instructions with a different set. If the operation allows it to modify the input tape of the Turing machine then the same statement can hold in a strict sense: print your program a long way away from the input on the Turing machine and the machine won't see it, so it won't be erased or run. Otherwise it's an obvious no for an arbitrary machine: the machine that does nothing at all is a perfectly good machine, and cannot be hacked.
If you want to force the Turing machine to run arbitrary code, then we'll need a universal Turing machine for that, rather than just any Turing machine. Then if you modify the tape, of course you are modifying the program as read into the machine's memory, so again you can do whatever you want.
But if you leave the program alone and just alter the input then no, you cannot force an arbitrary Turing machine to do your bidding.
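The "machine that does nothing" point can be made concrete with a toy sketch (purely illustrative, not a formal Turing machine): a machine whose behaviour is independent of its input tape cannot have its control flow redirected by any choice of input.

```python
# Illustrative toy, not a formal Turing machine: a "machine" that reads
# its tape but never acts on it. No input, however adversarial, can
# change its control flow -- which is why it cannot be hacked via input.
def do_nothing_machine(tape):
    """Consume the input tape, ignore it, always halt the same way."""
    for _ in tape:   # scan the tape without branching on its contents
        pass
    return "HALT"

# Any adversarial input yields the identical result:
assert do_nothing_machine("") == "HALT"
assert do_nothing_machine("\x90" * 1000 + "shellcode") == "HALT"
```

The interesting (and hard) case is the one in between: a machine that *does* branch on its input but is constrained enough never to hand the input control.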
If the interface between any security domains is well enough constrained then it's secure. The central problem is that if you can gain even a few bits of flexibility over unintended control flow then it's highly likely you can leverage it to usurp the Turing machine in a universal way, and run whatever you like. All the ASLR/stack canary/etc technology complicates the task greatly but raises the theoretical bar hardly at all.
Almost all Turing machines are universal, yet when we create a secure system we're trying to find a (usually very) complex machine which is *not* universal. It's an almost unimaginably difficult task.
Weird just how much that actually sounds like pirating them.
In any case, I rather agree with Intel on this one: know whose code you're running. Don't run code from just anywhere, just like you don't click on a link in an email from someone you've never met before.
I wish someone had told my old boss that before I worked for myself. That idiot seemed to click on every link in email, on porn sites, any popup he could find. He was the only person in the company without malware protection/anti-virus/internet blacklisting, and, surprisingly, he was the only person who continually had infections.
If all code running on the machine is trusted, a protected enclave is pointless. As soon as we assume there is something on that machine that something else needs protecting from, the whole premise of running only trusted code is falsified. Which is just as well, as in practice there is no such thing as 100% trustworthy code.
Sure, it's easy to satisfy a threat model by adopting criteria that can't be met in practice.
I've yet to see a reasonable threat model under which SGX provides anything useful. That's the point of this research. Telling people to strive for some impossible level of perfect vigilance isn't a mitigation; it's dodging the issue.
<quote>OK, hands up. Who here trusts code from Sony (for example)?</quote>
Not me for one.
I played one of their rootkit-infected CDs on my personal development computer, and lost everything.
Fortunately, I had a recent backup.
I promised myself: never DO THAT again, never use a development system for entertainment purposes.
"How? It was a rootkit, not a format of the C: drive?"
I'm not saying that people don't exaggerate in stories like this, but that rootkit (like most) had several vulnerabilities and bugs. These had the potential to cause system crashes, which in turn can cause data corruption, and so on... VERY, VERY unlucky to lose everything, but not impossible.
"How? It was a rootkit, not a format of the C: drive?"
I'm probably confusing Sony's with another, but one of their rootkits did delete somebody's entire system because D: wasn't the CD-ROM but the system drive... or some shit like that. It's been years since I read this, but it was in an article here, I believe (or on legacy Slashdot).
> TSX is disabled through microcode and SGX support is being dropped entirely.
No, TSX was disabled on Haswell because the newly implemented feature occasionally screwed up, creating lock inconsistencies. (Disabled in this case meaning fall back to the older, slower, but safe behaviour.) But in principle it's a good idea. Other architectures have had something similar for a while, but Intel is playing catch-up again. Still, Intel isn't as far behind as it was with NX: you'd find writable, non-executable sections in other architectures decades ago.
The British (with a lot of help from the Poles and the French) and later the Americans cracked Enigma and most of the important Japanese diplomatic and military ciphers and codes. Later in WW2 they had the help of crude analog computers that sped up the process.
They proceeded from the premise that there was human-readable, sensible information in those endless series of 4- or 6-letter groups. Their task was much facilitated by operator errors: sending the same message in different codes/ciphers, using the same code pages on subsequent days, repeated phrases like "Your Excellency", and so on. Given time and enough data, all codes/ciphers can be cracked, except for proper "one-time-pad" codes.
But then, how random is random? I have dozens of ways of producing pseudorandom numbers (best to start with a hardware RNG and then put it through cycles of a PRNG). Generating genuinely random numbers, AND conveying them securely to the recipient, is not easy. Enforcing the correct use of those numbers is virtually impossible.
But whatever you do, there MUST be entropy in the message: given enough messages, enough knowledge of your adversary and the type of data likely to be communicated, and enough time (and computing speed increases daily), that entropy is theoretically discoverable.
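The one-time-pad exception mentioned above can be sketched in a few lines of Python (illustrative only; the `secrets` module stands in for the hardware RNG described in the comment):

```python
import secrets

# Illustrative one-time-pad sketch: XOR the message with a truly random
# key of at least the same length. Security rests entirely on the key
# being random, used once, and conveyed securely -- exactly the hard
# parts the comment points out.
def otp_xor(data: bytes, key: bytes) -> bytes:
    assert len(key) >= len(data), "pad must be at least message length"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))   # CSPRNG stand-in for a hardware RNG
ciphertext = otp_xor(message, pad)
assert otp_xor(ciphertext, pad) == message  # XOR is its own inverse
```

Reusing the pad for a second message is what reintroduces the discoverable structure the comment warns about: XORing two ciphertexts made with the same pad cancels the key entirely.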
Hey, Intel "Secure Enclave" - i don't think that word means what you think it means!
Curious whether Apple's T2 Security Chip, which includes the same Secure Enclave used in all iOS devices, is vulnerable to this or not. The T2 is an ARM-based CPU that runs a custom BridgeOS that only Apple controls. Once data is written to the Secure Enclave it's inaccessible: you can only get a yea-or-nay response when sending a public-key or biometric-key challenge to the Secure Enclave.
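A toy model of that yea-or-nay interface (purely illustrative, not Apple's actual Secure Enclave API) shows the key property: once enrolled, a secret has no read path, only a boolean challenge:

```python
import hmac
import hashlib

# Toy sketch of a write-only key store: secrets can be enrolled but never
# read back; callers only ever receive a yes/no answer to a challenge.
# All names here are invented for illustration.
class ToyEnclave:
    def __init__(self):
        self._secrets = {}  # no accessor exposes this mapping

    def enroll(self, slot: str, secret: bytes) -> None:
        # store only a digest, so even the enclave holds no raw secret
        self._secrets[slot] = hashlib.sha256(secret).digest()

    def challenge(self, slot: str, candidate: bytes) -> bool:
        stored = self._secrets.get(slot)
        if stored is None:
            return False
        # constant-time comparison avoids a timing side channel
        return hmac.compare_digest(stored, hashlib.sha256(candidate).digest())

e = ToyEnclave()
e.enroll("fingerprint", b"ridge-pattern-123")
assert e.challenge("fingerprint", b"ridge-pattern-123")
assert not e.challenge("fingerprint", b"wrong-finger")
```

The research in the article matters precisely because side channels can leak information even across an interface this narrow.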
"research which is based upon assumptions that are outside the threat model for Intel SGX."
I think this is saying it doesn't conform to the assumptions Intel made. Be careful about the assumptions you make in designing or implementing something: your assumptions will become the product's limitations, and if you're not aware of them they may become its bugs.
It seems that (again) more complex security measures increase the attack surface.
Genuine question: why don't people implement other architectures that are easier to analyse for security? I'm thinking Harvard instead of von Neumann. It doesn't even have to be that different from a physical standpoint; the program/data separation could be enforced by an on-chip hardware engine. I doubt the instruction unit core even needs to change, just the memory addressing units.
Or, why not at least separate the data and return-address stacks into small, separate on-chip RAMs? 2x256kB of RAM for the stacks could hardly break the bank nowadays, and might well be faster than the current arrangement, which is anyway a rather bastardised idea, putting the stack in main memory. It kills the ROP stuff and stack overflows. I could be wrong, but if the stack is larger than L1 cache, I reckon you are both "holding it wrong" and killing performance.
There may be problems wot I haven’t thought of.
I should probably clarify, to show at least “incomplete” ignorance.
I do know that Linux today allows 8MB stack per thread, which blows way past 256k total stack....
My point is: OS design encodes a set of decisions based on a history of CPU microarchitecture, particularly Intel's and AMD's. This looks "cost-free" on current architectures with the stack mapped into main memory, but it is turning out to be very costly from a security perspective.
Are such large stack sizes really needed, or are we just encouraging greedy developer practices? Being controversial (and I know there are people who *love* recursion): if I discovered my team were writing something that required even 1M of stack, I would be really worried. If it needs 1M, what are the edge cases where it needs 2M? Or 20M? It seems like an accident waiting to happen, without fairly severe numerical analysis, especially when an unrelated team adds an unrelated feature in object-oriented fashion two years later.
Suppose instead we prioritised security and accepted hardware limitations on stack size. The trade-off: on the one hand, a majority of metal-level security issues are prevented, and maintenance becomes much cheaper downstream. On the other hand, Linux and Windows have to change thread stack sizes, and a bunch of legacy applications containing bloated stack assumptions need to be re-spun. Would there be a net benefit for the industry in theory, even if it's difficult to get there from here?
1MB doesn't sound like much until you start routinely running into programs that use more than 1GB of memory at a time; some are even going into 64-bit-only territory by being more than 4GB in size (physically overflowing the 32-bit limit). So what do you do? Have it there and not need it...or need it and not have it to hand?
I get your point that more complex programs likely have larger stack requirements; I've never been involved in something that size, so I haven't seen the problems.
One answer is to separate the stack into return addresses and data, and keep the return addresses only in dedicated on-chip RAM. "Surely" that can't overrun.
I still think that our general problem is that we have sized our compute infrastructure on “must be able to do everything” rather than “securely compute typical things, and refactor our previously unconstrained solutions”.
I don't have all the answers, not surprisingly... like how to enforce code/data separation for interpreted code like Java.
Not running code from untrusted sources. Sounds like good advice.
But when I install a program on Windows, a DVD player say, I don't get notified that the program needs to use the SGX feature, so would I please give it permission to do so.
So how do I install programs that I trust to use the computer conventionally, but which have no need to access this feature?
And I was about to ask whether you had a trunk in your boot, or whether it was a pair of trunks, or whether it was the trunk of an elephant or a tree. The first makes rather good sense, albeit rather old-fashioned to store things in wooden boxes nowadays; the second is fine if you're a man going swimming; and PETA are going to want to know if it's the last, so that they can hang, draw, and quarter you before feeding you to a rabid mob of ravenous vegans.
the age-old technique of return-oriented programming
Er ... if we allow the old return-to-libc exploits, which were the theoretical ancestors of modern ROP, it dates back to, what, 1997? That's the date of Solar Designer's BUGTRAQ post on the topic. Previous well-known stack-overflow attacks, such as the Morris Worm and Aleph One's examples from "Smashing the Stack for Fun and Profit", all used injected code, as far as I remember.
Public research on modern ROP started to appear around 2005. It's not even old enough to drive yet.
Maybe that's old by skiddie standards, but surely the Reg has a longer memory. Plenty of the commentariat do.
If you look at every new security based addition to intel chipsets as a gift of another backdoor for the NSA, it all makes sense. Every time.
Hollywood getting anti-piracy stuff built into chipsets angers me. The things I want to do with my HDMI output but can't, because the MPAA's Sony was involved in the HDMI standard and inflicted DRM through HDCP, are another of these annoyances. The sooner we eject the media cartels' influence from hardware, the better.
Biting the hand that feeds IT © 1998–2019