Scientists have devised a chip design to ensure microprocessors haven't been surreptitiously equipped with malicious backdoors that could be used to siphon sensitive information or receive instructions from adversaries. The on-chip engines at the heart of these "tamper evident microprocessors" are the computer equivalent of …
I'm not sure why an adversary capable of altering CPU hardware or firmware would be unable to also alter the protection mechanisms, but assuming the technology is somehow foolproof, it is interesting.
It seems to me that an advanced attacker would be more likely to alter the system BIOS, which is more powerful, easier to do, and requires no special hardware. Code running in System Management Mode isn't even visible to the OS.
The security model for many organizations simply breaks down at a hardware level as they have no way to verify the soundness of the hardware they acquire. Our vendors can tell us it is safe, but in reality we simply have to blindly trust the hardware does only what it's supposed to do. I don't see this development changing the status quo.
...if implemented purely and honestly as-presented. Elsewise, could not any given New Microprocessor Series be perforce equipped, per Bad TerrorGummint Fiat, with a permanent Official Backdoor that could not be readily closed at all, even by a fully qualified White Hat o'Human Liberty?
White Hat'd become fiat'd straight into being Black Hat right quick, one expects, in any such instance. Vulture o'Doom, kindly *do* keep a Vulture's Eye on this item! Kindly do not accept any Kool-Aid® while this New and Likely Abusable Thingie works its way into the server clusters and desktop terminals of the Physical World? (THANKS for so doing if ever things do so go. Counting on you.)
Illegitimi non carborundum, same as it ever was. ;)
About that root of trust thing...
>"The root of trust in all software systems rests on microprocessors because all software is executed by a microprocessor"
Well yeah, but then again, the reason nobody's yet bothered to backdoor a CPU in practice is that we can't trust the software, or the environment, or the network, or the users, or in fact anything at all, so it hasn't been worth it. This is the same signature-based blacklisting approach that the virus and rootkit arms race has so comprehensively proven can never win. AV is still a win in general because it's worth preventing the 99% of infections even if a targeted one sneaks under the radar anyway. But here you're trying to detect a single targeted attack, and it only has to get past the radar once to end up in every chip printed from the design. In those circumstances it's always going to be a customized attack every time anyway, and this just locks down a couple of possible approaches to backdooring a chip without doing anything to address the million others.
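The blacklisting problem above fits in a few lines. A minimal sketch, assuming a text netlist representation and an invented backdoor fragment: a scanner keyed to a known-bad sample catches that exact sample, while a trivially renamed variant of the same backdoor slips past.

```python
# Sketch of why signature-based blacklisting fails against one-off hardware
# backdoors. The netlist fragments and signal names are invented for
# illustration; real detection works on circuit structure, not strings.

KNOWN_BACKDOOR_SIGNATURES = {"AND(debug_en, magic_cmp_0xCAFE)"}

def scan(netlist_lines):
    """Flag any line matching a known-bad signature (blacklist approach)."""
    return [line for line in netlist_lines if line in KNOWN_BACKDOOR_SIGNATURES]

original = ["XOR(a, b)", "AND(debug_en, magic_cmp_0xCAFE)"]
variant  = ["XOR(a, b)", "AND(dbg_en2, magic_cmp_0xBEEF)"]  # same backdoor, renamed

assert scan(original)       # the sampled attack is caught...
assert not scan(variant)    # ...but a trivially mutated one is not
```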
I read the PDF, and I guess this would be something installed in-house after carefully verifying that you didn't get a compromised copy of TrustNet or DataWatch. But then maybe the bad guys find a way to intercept and compromise those too, and you have to make "tamper evident" tamper-evident CPUs. Which will have to be carefully verified before being installed, but then...
A clever covert channel thrills me like a Victoria's Secret model. Thanks for the article.
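For anyone who shares the thrill, here's a toy sketch of the simplest kind of covert channel, the sort of leak these detectors are meant to catch: a sender modulates how long an innocent-looking operation takes, and a receiver recovers bits by timing it. The delays and threshold below are illustrative values, not from the article.

```python
import time

# Toy timing covert channel: bit value -> operation duration.
SHORT, LONG, THRESHOLD = 0.001, 0.05, 0.025  # seconds (illustrative)

def send_bits(bits):
    """Perform one 'operation' per bit; its duration encodes the bit."""
    durations = []
    for bit in bits:
        start = time.perf_counter()
        time.sleep(LONG if bit else SHORT)  # the covert modulation
        durations.append(time.perf_counter() - start)
    return durations

def receive_bits(durations):
    """Recover the bits by thresholding the observed durations."""
    return [1 if d > THRESHOLD else 0 for d in durations]

message = [1, 0, 1, 1, 0]
assert receive_bits(send_bits(message)) == message
```

Real hardware channels hide in things like cache timing or power draw rather than sleeps, but the encode/threshold/decode shape is the same.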
The obvious way to steal an election
The obvious way to steal an election where computerized voting terminals are used is to design into the hardware a backdoor allowing insiders to take control of the machine and alter the code after the real election has started and pre-election verification has been completed. The best place to do this would be in the microprocessor, although it could also be implemented by circuitry external to the microprocessor on the CPU board.
The Brad Blog (http://www.bradblog.com) archives contain many articles on the possibility of hacking US elections and the possibility that some elections may already have been stolen.
The obvious way is to identify areas where the vote is close and then exploit the weak process controls to stuff the ballot / cast more votes for your candidate.
Surely the baddies would test their evilware to make sure it is not detected by the anti-tamper circuitry. And if it reports anything, they would simply alter the anti-tamper stuff to silence it.
As a consumer I have no way to check whether my "secured" hardware actually contains these features, whether those features are in fact secure, or whether they add a backdoor where previously there wasn't one. The concern is with a rogue design team adding microcode, but what is to stop it being added by the TN/DW team under government or other influence?
This looks like creative use of statistics to sell this processor.
i.e. "reported that at least five percent of the global electronics supply chain includes counterfeit elements that could 'cause critical failure or can put an individual's data at risk'"
Let's separate point (A) "cause critical failure" from "can put an individual's data at risk". And what exactly do they mean by "put an individual's data at risk"? If the risk is data loss, as in data corruption, that's one thing, but this processor is designed to stop tampering at the microcode level, i.e. malicious, hacking-style access to the data. So let's split that into point (B) "data corruption" and point (C) "hacking the CPU".
So what percentage of "global electronics supply chain" counterfeits can compromise (A), (B) or (C)? i.e.
(A) "cause critical failure"
(B) Data corruption making data unusable
(C) Hacking the CPU
Ok, so passive components like resistors, capacitors and inductors are very easy to counterfeit, so they are mass-market counterfeit products. But counterfeit passive components can only compromise points (A) and (B); they are very unlikely to compromise point (C).
Counterfeit discrete active components like transistors are harder to make, so statistically there are fewer of them (but they still happen). More importantly, they too can compromise points (A) and (B) but are very unlikely to compromise point (C).
So we are left with counterfeit complex active components like processors, which are the only form of counterfeit product that can compromise points (A), (B) and (C) ... but if it's a counterfeit processor, then this "shrink wrap" tamper-evident CPU design isn't going to be of much use, as the counterfeiters can change the CPU design anyway.
Also, by far the vast majority of counterfeit active components like processors turn out to be simply empty packages. They look real until they are powered up; then you find you have an empty plastic package whose leads go nowhere inside. They are high-value items for counterfeit gangs, so they earn a lot of money from them (and so make a lot of them), but they are no risk to point (C) because they do nothing. They are just little blocks of plastic with wires going nowhere.
So I'm having extreme trouble reconciling this idea that counterfeit elements of any kind are relevant to this anti-hacking processor design. It sounds completely like spin in the wrong direction. Sure, microcode can be compromised, but that has nothing to do with counterfeit components and everything to do with processor design flaws. So what they are really saying is that everyone but them is designing their processors wrong, so please buy their design.
So all this talk of "five percent of the global electronics supply chain includes counterfeit elements" is a FUD story to sell their products. Sure, microcode hacking is a potential issue, but how big a market is there really for this level of protection? It's a useful feature, but why all the FUD to sell it? Plus, as others have said, the vast majority of hacking uses valid instructions, so hardware protection of the CPU isn't going to stop the vast majority of security holes.
Finally, some clarity
MinionZero, thanks for breaking this down, bringing some clarity to my thinking about the possibilities presented.
I would have to say that human history has made me a bit paranoid about privacy, and you were able to help me settle what's realistic and what isn't about such a scenario.
Quis custodiet ipsos custodes?
Re: The obvious way to steal an election
Is actually to just put your backdoor in the software and claim it's a trade secret.
This is, after all, what the providers already use to keep their machines and software from being scrutinised for bugs and security vulnerabilities.
Seems to me
a better way to ensure secure hardware would be to have an open testing standard (with requisite testing hardware) that could be performed as part of quality control in the country where the cpu is to be used. You could have different tiers of security based on the percentage of a given lot that is tested (100% for the top security tier).
Build the device in country. Plug the CPU into your device and it runs a series of tests looking for accurate results. Anything unexpected is a fail. You could even upgrade the tests if some new series of operations becomes common/used for something sensitive.
It certainly isn't foolproof, but at least it can't be simultaneously compromised along with the CPU itself.
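The acceptance-test idea described above could be sketched as follows. This is a hypothetical harness, not anything from the article: the "device" is a stand-in function, and the golden test vectors are invented.

```python
import operator

# Known-good vectors: (operation, arguments, expected result).
# A real harness would drive the physical CPU; these values are invented.
GOLDEN = [
    (operator.add, (2, 3), 5),
    (operator.mul, (7, 9), 63),
    (operator.xor, (0b1010, 0b0110), 0b1100),
]

def acceptance_test(device_run):
    """Pass only if every vector matches; anything unexpected is a fail."""
    return all(device_run(op, args) == expected for op, args, expected in GOLDEN)

# Trusted stand-in for the CPU under test:
honest = lambda op, args: op(*args)
# Tampered stand-in that subtly corrupts multiplication:
tampered = lambda op, args: op(*args) + (1 if op is operator.mul else 0)

assert acceptance_test(honest)
assert not acceptance_test(tampered)
```

Upgrading the tests when some new sequence of operations becomes sensitive is then just a matter of extending the golden vector list.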
Sure it will, bridges for sale?
As always, the problem with these kinds of things is that while they can come with built-in back doors, and will not stop a sophisticated attack like those mounted by various governments, they can and will be used to stop anyone from reporting on them. For example, the fact that these chips have anti-tamper measures gives them protection under the DMCA and similar legislation.
Another plus, if you are a chip maker, is that you can stick it to the chipheads who circumvent your overclock "protection". Yet another plus: if you are M$, you can finally start making that content management service pay dividends, as you get another way to lock down content.
Mind the Gap.
ref Stephen R. Donaldson to see how this can all go wrong.
At the present time ALL (for most values of ALL) malicious stuff is SOFTWARE. The current hardware is all nice and functional and does its job. The hardware doesn't need to be compromised, as those with malicious intent have the easier vector: software. Since software is SOFT, it can be readily changed (and is), so why bother with the hardware, which is orders of magnitude more difficult to change, and which fewer people actually know how to modify?
While this IS a nice idea, it is a solution to a problem that really doesn't exist.
That name seems to have been used already.
Chips with everything
Ah, but who's checking the checking chip?
There is specialized hardware that manages to do a better job of preventing something like this, and it is usually known as tamper-*proof* hardware. Some of the crypto chips inside the System z mainframes use it, and most of the DoD uses some type of tamperproof hardware. Mission-Impossible style self-destruct occurs if someone tries to open these babies, thus making the "tamper aware" thingy redundant.
And then again ... few will try to go down the hardware "hack the CPU" approach, and if they are going down this route, you're probably screwed anyway.
This article is about counterfeit silicon. I bet you can create counterfeit IBM crypto modules; all you cannot do is transfer the keys into that copy of the hardware.
Of course all of that is rather expensive, but if the Commie Party of China is behind you, it is definitely doable. And I really can't see how any circuitry would detect that, as the circuitry can be observed and modified too, if you create completely new silicon.
The only proper way to verify hardware is to remove the case/assembly and check the circuit layout with some sort of microscope and computer against the original design files.
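The comparison step could look something like the sketch below. In practice that means imaging the die and extracting a netlist; here both sides are plain-text netlists, and the cell names and the extra "leak" gate are invented for illustration.

```python
import difflib

# Golden netlist from the original design files vs. one extracted from the
# physical chip. Lines and names are invented; the extracted copy carries
# one extra gate.
golden    = ["U1 NAND a b y", "U2 INV y z"]
extracted = ["U1 NAND a b y", "U2 INV y z", "U9 AND debug_en z leak"]

def verify(golden, extracted):
    """Return lines present in the extracted design but not the golden one."""
    diff = difflib.unified_diff(golden, extracted, lineterm="")
    return [ln[1:] for ln in diff
            if ln.startswith("+") and not ln.startswith("+++")]

assert verify(golden, golden) == []                     # clean chip: no findings
assert verify(golden, extracted) == ["U9 AND debug_en z leak"]
```

Even this only catches additions and changes that survive netlist extraction; doped-transistor tricks that leave the layout visually identical would still slip through.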