... the Prequel to I-Robot?
A pair of security researchers claim to have found a back door in a commercial field-programmable gate array (FPGA) marketed as a secure tool for military applications. The FPGA in question is the Actel ProASIC3, a device that manufacturer Microsemi recommends for use in “portable, consumer, industrial, communications and medical …
"One imagines the presentation will be rather well attended"
Should they make it to the presentation in the first place. I can just picture the men in unmarked cars removing the findings, or having a quiet word in an ear or two...
Or another great big carry-bag-in-the-bathtub type accident. These researchers are no doubt deviants with a (so far) secret and very dangerous proclivity for certain auto-erotic practices which are soon to become evident...
PEA = Polyethylene analysis. You find someone who knows the secrets of the chip and put a plastic bag over their head, and refuse to remove it unless they tell you what’s going on inside.
Pipeline Emission just too dirty for you? Needed some protection?
Swinging in the breeze
Very interesting. However this doesn't read to me so much as an attack on the chip itself as much as an attack on the marketing for ProASIC. Perhaps the "sponsor" is Xilinx?
+1 for fingering Xilinx. If the story is right, this is very bad news for Microsemi. If I could be arsed, I'd be shorting them right now.
If anyone would want to find out if it's possible to hack those chips, it'd probably be British intelligence. After all, if they're so widely used, knowing how to hack them gives our Bonds an extra weapon, as well as knowledge of how to prevent it.
Either that or it's the terrorists.
I can just imagine US agents sneaking in and trying to kill them, only to be confronted by British intelligence, which will culminate in a shoot-out of epic proportions...
I claim movie rights!
There have been plenty of chips made before that have undocumented "features".
The late Saddam had a network-rooting HP printer. Or so I hear.
I'm sure the MSM circuit and popular press (not to mention all the behind-screen self-styled national security tacticool operators) will go into overdrive, with Richard Clarke pumping out "chilling", "eminently believable" and "eye-opening" prose. Oh well.
If you read their blog you get the feeling that the thing is likely hyperbolized for the sake of self-promotion. Another researcher has analysed the actual paper and come to the same conclusion:
Yes. I thought the 'Details at a forthcoming event' made it sound like a puff piece. Interesting though.
...that it's unlikely to be Chinese. However, nobody claimed it was, especially as it is the product of a company from upstate New York.
The blog you linked to is nonsense. The author has no understanding of JTAG or what it's used for. Paragraphs 3 and 4 could have been written by a child.
Your refutation is duly noted.
Yeah, I noticed that http://erratasec.blogspot.com/2012/05/bogus-story-no-chinese-backdoor-in.html was also referenced in the other Reg article that appeared yesterday on this story. I know nothing about 'Errata Security', but I've got a few minutes now, so here's a detailed explanation of how I can discern that the front and back ends of the author's oesophageal tract are inextricably linked. Note also that the original pdf cited by the Reg doesn't contain the word 'Chinese', which rather makes Mr. Graham's blog pointless. I also don't immediately recall the original paper suggesting that this feature is malicious. The whole point of the paper is to show how wonderful 'Pipeline Emission Analysis' is, and they've done a good job of that.
JTAG was developed to test *circuit boards*, not *chips*. It tests PCB connectivity. You shift a test pattern into the JTAG port, and these bits are driven onto the device pins. You can also read the state of the device pins, and shift the information back out of the chip. If you've got multiple devices with JTAG capability, you can determine whether the pins of device A are actually physically connected to devices B, C, or D. You can use software to trace out a significant portion of the physical copper routing on the board, while the board is powered up in the manufacturing facility, and determine if the chips are connected together correctly. If they're not, you fix the problem or throw away the board. This is JTAG pre-101. JTAG was developed for manufacturing test; period. Every EE knows this. It's not a "debug feature" that has to be "disabled" before chips are sent to customers; that's just bollox that would make the entire feature pointless. Paragraph 4 is therefore just nonsense. To suggest, in the last sentence, that a supposed secret debug feature could be disabled by connecting the pins on the chip but not routing them on the PCB is also dumb. Practically every EE on the planet has soldered wires to pins on a chip.
Now, JTAG is so useful that it's also used for other stuff. In particular, it's frequently used for loading serial configuration data into an FPGA. The customer buys a blank chip, and may use the JTAG port to "program" it. Of course, the manufacturer doesn't disable this feature, or the customer would end up with a pointless lump of plastic and silicon. JTAG may also be used for some internal test of the chip. The manufacturer clearly can't disable this function either, since they would end up with an untestable chip that can't be sold. So para 4 is nonsense on all counts.
That leaves para 3, which is so ridiculous that it hardly merits analysis. But, in short, JTAG is not the "debugger", and it is not the "standard way of soldering some wires to the chip and connecting to the USB port".
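For anyone who hasn't met boundary scan: the connectivity test described above can be sketched in a few lines. Everything below is a toy model — the two-chip "board", the netlist and the open-circuit fault are invented for illustration, and real JTAG involves TAP state machines and per-device BSDL files rather than Python dictionaries.

```python
# Toy model of a JTAG EXTEST-style connectivity check on a two-chip board.
# The netlist says A.pin[i] should be wired to B.pin[i]; the "copper"
# dict simulates the actual board, with one open circuit on A.pin2.
netlist = {("A", i): ("B", i) for i in range(4)}
copper = dict(netlist)
copper[("A", 2)] = None  # manufacturing fault: open circuit

def extest(pattern):
    """Drive `pattern` out of A's pins; capture what B's pins see."""
    captured = []
    for i, bit in enumerate(pattern):
        dest = copper[("A", i)]
        captured.append(bit if dest == ("B", i) else 0)  # open net reads 0
    return captured

driven = [1, 0, 1, 1]
captured = extest(driven)
faults = [i for i, (d, c) in enumerate(zip(driven, captured)) if d != c]
print(faults)  # -> [2]: pin 2's net is broken
# Note: an open on a pin driven to 0 wouldn't be caught by this single
# pattern, which is why real tests shift several patterns through.
```

Shift a pattern in, drive the pins, read back, compare against the netlist — that's the whole manufacturing-test idea in miniature.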
The whole article - not just these paragraphs - has the feel of being written by someone who doesn't have the expertise to write a blog on the subject, let alone a blog that's actually read and quoted.
What if someone decides this undocumented feature constitutes a deviation from spec?
"Yes, hello, I'd like to return 50,000 faulty units. Also as 20,000 of these have already been built into our product you will need to meet the cost of scrapping or re-working these pieces"
The Laundry's been using these back doors for ages..
I think you may be getting your HOG confused with your SCORPION STARE capable surveillance cam.
Not that they won't both come in handy when CASE NIGHTMARE GREEN goes down...
Coat. Mines the one with the official laundry issue. "WARNING: Content may be destiny entangled with eldritch horrors from beyond space time" HSE compliant mug in the pocket...
"SCORPION STARE capable surveillance cam."
You need to have the target in view from two separate cameras for SCORPION STARE capability.
CASE NIGHTMARE GREEN?!!
Tony Blair comes back??
... how likely is it that a device like this could be re-programmed once soldered to a circuit board, as they often are, and connected to other passive and active components on that board (which would interfere with access attempts, rendering field 're-programming' or access to the hidden/undocumented features VERY unlikely)?
next you'll tell us how a passenger jet could never fall out of the sky due to being riddled with malware.
if I was very determined I could easily strip the main board from whatever and stick it on a bed of nails designed to stimulate only the pins of this device.
I mean, sure, if it was on a common power rail you might end up energising that but hey ho, end result would just be the other components powering up, getting confused and not doing anything.
Alternately I could buy a device from a competitor which contains this FPGA (which may or may not be made by me, but either way I know how to fudge it) and do whatever to take a peek at all the lovely encrypted stuff of my competitor.
I think you should leave the mountains where they are.
Actually, this is normal procedure when testing newly-manufactured boards. The board is powered up in the factory, connectivity is tested via one or more JTAG I/Fs, and various bits may be "programmed" via the same I/Fs. If there's a JTAG back-door somewhere, and you know the details, then it's trivial to exploit it.
They simply found some (normally inaccessible) debug code intentionally left on the chip like it is on most chips of this ilk.
This is one of the routes that hackers take to attack encryption techniques on stuff like the PS3, XBox & iPhone.
Milspecs come in many, many variants, so much so that they cover a range from normal everyday fragile kit all the way to ultra-secret robust stuff.
Thing is, for most advanced militaries the rule is: "if you didn't design, build and assemble it yourself (or under your control), don't trust it".
I find it extremely unlikely any half-competent armed force would use off-the-shelf stuff for anything sensitive.
By that reckoning, about 90% of the world's militaries are half-competent. The UK, US, Russia and China are all massive military exporters and make a lot of cash selling military tech to other nations.
US used dodgy Chinese-made parts in army jets.
Better source, googlefu was failing me. http://www.bbc.co.uk/news/world-us-canada-18155293
I don't doubt your sources but in reality just how mission critical could the components in question be?
I'd hazard a guess, not very.
And yes, those of us that depend on others' tech for our kit shouldn't be surprised when there are hitherto undisclosed vulns.
Thanks for the sources
... but perhaps these are simply testing features left behind - a hardware version of debug statements?
Quite likely, I should think. You find this sort of thing quite a bit with the 'hardened IP' blocks found in recent generations of Xilinx and Altera chips too. They will have configuration buses hanging out with mysterious extra bits. The design tools usually hide them from you, but if you dig around in the HDL they spit out, you can find them.
For stuff like on-board PLLs, they appear to be for applying fudge factors which weren't finalized at tape-out time. None of it is publicly documented anywhere though.
It might be an accidental backdoor instead of a malicious one. Nevertheless, if the spec calls for a secure device that can be programmed and then not read or tampered with, this backdoor is a critical fault.
Imagine if it protected keys that secured some DRM feature, the keys would now be theoretically unprotected and the DRM system would be open to losses.
oooh, that would be terrible, wouldn't it?
There's stuff called Requirements Engineering and Independent Verification and Validation. "Hey can you account for this stuff here on the die? Whaddya mean, 'no'?"
Can you just drop in a whole submodule into a floorplan given to you by the customer w/o anyone noticing? Hard to believe.
One poster in slashdot proffers the following explanation, found at http://it.slashdot.org/comments.pl?sid=2879043&cid=40137395:
FPGAs commonly protect user-code with encryption. An encryption engine is included in the silicon, to which the user has limited access via crypto-keys with which to encrypt the code that is installed in ROM/Flash.
A number of attacks are known against microcontrollers/FPGAs that secure code with encryption - notably differential power analysis (DPA), which works by connecting a current probe to the chip and collecting measurements of energy consumption as the device performs an authentication operation. By carefully measuring power traces over thousands of authentication operations, statistical analysis can reveal clues about the internal secret keys, potentially allowing recovery of the key within useful periods of time (minutes to hours).
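The statistical step is easy to demonstrate in miniature. The sketch below simulates the textbook DPA setup — leakage modelled as the Hamming weight of (input XOR key) plus Gaussian noise — and recovers the key byte by correlation. The leakage model, trace count and noise level are illustrative assumptions, not measurements from any real chip.

```python
# Toy DPA sketch: recover one secret key byte from simulated "power traces".
# Leakage model: Hamming weight of (input ^ key) plus Gaussian noise.
import random
random.seed(1)

SECRET = 0x5A
hw = lambda x: bin(x).count("1")

# Each simulated authentication leaks hw(input ^ SECRET) plus noise.
inputs = [random.randrange(256) for _ in range(2000)]
traces = [hw(p ^ SECRET) + random.gauss(0, 1.0) for p in inputs]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / (sxx * syy) ** 0.5

# The key guess whose predicted leakage best correlates with the
# measured traces wins; wrong guesses correlate noticeably worse.
best = max(range(256), key=lambda g: corr([hw(p ^ g) for p in inputs], traces))
print(hex(best))  # recovers 0x5a
```

With only 256 guesses and a single key byte this takes milliseconds; the point of the chip's countermeasures is to bury that correlation in noise so deeply that no realistic number of traces recovers it.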
These secure FPGAs contain a heavily obfuscated hardware crypto-engine, with lots of techniques to obstruct DPA (deliberately unstable clocks, heavy on-chip RC power filtering, random delay stages in the pipeline, multiple "dummy" circuits so that an operation which would normally require fewer transistors than an alternative, has its transistor count increased, etc.). The idea being that these countermeasures reduce the DPA signal and increase the amount of noise, making recovery of useful statistics impractical. In their papers, this group admit that the PA3 FPGAs are completely impervious to DPA, with no statistical clues obtained even after weeks of testing.
This group have developed a new technique which they call PEA which is a much more sensitive technique. It involves extracting the FPGA die, and mapping the circuits on it - e.g. using high-resolution infra-red thermography during device operation to identify "interesting" parts of the die by heat production under certain tasks - e.g. caches, crypto pipelines, etc. Having identified interesting areas of the die, an infra-red microscope with photon counter is focused on the relevant circuit area. As it happens, transistors glow when switched, emitting approx 0.001 photons per switching operation. The signal from the photon counter is therefore analogous to the DPA signal, but with a much, much stronger signal-to-noise ratio, allowing statistical analysis with far fewer tries. The group claim the ability to extract the keys from such a secure FPGA in a few minutes of probing with authentication requests.
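A quick back-of-envelope using the 0.001-photons-per-switch figure quoted above shows why the photon counter is such a strong signal; the toggle rate is an assumed number, purely for illustration.

```python
# Back-of-envelope from the figure in the comment above: ~0.001 photons
# per switching event. The 20 MHz toggle rate is an assumption.
photons_per_switch = 1e-3
toggle_hz = 20e6
photons_per_second = photons_per_switch * toggle_hz
print(photons_per_second)  # -> 20000.0
```

Tens of thousands of photons per second from one small circuit area, localised to the exact transistors of interest, is a far cleaner statistic than a whole-chip current measurement.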
The researchers claim to have found the backdoor, by fuzzing the debug/programming interface, and finding an undocumented command that appeared to trigger a cryptographic authentication. By using their PEA technique against this command, they were able to extract the authentication key, and were able to open the backdoor, finding they were able to directly manipulate protected parameters of the chip.
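The fuzzing step is conceptually simple: walk the opcode space of the debug/programming interface and flag anything undocumented that answers. The device model below is entirely invented (including the hidden opcode), just to show the shape of the search.

```python
# Toy sketch of fuzzing an FPGA's debug/programming interface for
# undocumented commands. The instruction set and responses are made up.
DOCUMENTED = {0x0: "IDCODE", 0x1: "BYPASS", 0x2: "PROGRAM", 0x3: "VERIFY"}

def device(opcode):
    """Fake device: documented ops ack, most others are silent,
    one hidden opcode answers with what looks like a crypto challenge."""
    if opcode in DOCUMENTED:
        return "ack"
    if opcode == 0x2D:            # hypothetical undocumented opcode
        return "challenge"        # smells like an authentication step
    return None

suspicious = [op for op in range(256)
              if op not in DOCUMENTED and device(op) is not None]
print([hex(op) for op in suspicious])  # -> ['0x2d']
```

Once a command like that shows up, pointing the photon counter at the circuit that processes it is exactly the PEA step described above.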
So it sounds like this is a "backdoor" in the way that a cast-iron safe is "insecure", in that it's possible to drill out the lock--if you can remove the safe to your own lab and have an unlimited amount of time and access to all the tools and resources you might want.
Hardly the Ghost In The Shell hack-your-cyberbrain stuff the stories make it seem.
but the FPGA tag suggests that these debugging backdoors are left in by the customer who has stupidly forgotten to disable them in the production run, or alternatively knows no useful data passes through them.
Or the chip foundry in China?
Strange this. A big telco supplier I worked for had problems with industrial espionage years ago. Some Chinese wafer fabs were churning out designs identical to our design plans even before we had our wafer fab set up to make our own prototypes... then these clones hit the open market, integrators throw this latest and greatest (and cheapest) into their kit and flog them to telcos, then find that they either do not have the same QC as our parts, or that, as here, some hidden extras were added in... This is problematic when that component is lying 3000 feet down on the bottom of the Pacific, transporting god knows how much of Asia's interweb traffic, and either fails or starts calling home...
And this is why we take two bottles into the shower instead of the one. Like firewalls - two from different places are better than one and relies upon the maker of back-door #1 not talking to the maker of back-door #2. :)
Support diagnostics or back door - such a grey area that has made many people playing fruit machines much money.
All good, most of the graphics controllers and BIOS chips for the last 10 years have them, you're in safe hands girls and boys :), for your own protection of course :(. Now where was I... that's it, I lost my keys; when I find them I can decrypt that spurious low-level traffic that isn't firewalled.
For well over a decade it has been suspected that Fab Nation this and Fab Nation that have left little doors and little keys in their complex silicon. In fact, the folks who might be most suspicious, the DoD, have a very small list of security fabs in which all of their most special silicon is built, and in which the security weasels themselves can have trouble getting in. The old Bell Labs, aka Lucent, now IBM fab in Vermont comes quickly to mind... used to be they made custom chips for all comers, network and modem makers alike having the complex bug in the middle of the board fabbed at IBM. No such, any more.
Self-promotional jingoistic hype - the discovery of debug code is not an intentional back door, nor was it placed by the Chinese...
While they did find a backdoor in a popular FPGA chip, there is no evidence the Chinese put it there, or even that it was intentionally malicious.
It knows all, controls all, reports all. Brought to you by Stuxnet's White Box Line.
I would love to comment on the story about this story: http://www.theregister.co.uk/2012/05/29/researcher_trolls_internet_with_silicon_backdoor/
but the Reg won't let us leave comments.
Stuff-up or conspiracy?
The processor I use can be unlocked by prying the cover off the chip, and exposing the lock area to UV light -- after selectively re-covering the rest of the area.
Bravo to Mr.Chirgwin for reading our comments and the linked-to blog.
Just to clarify, I was not implying this particular issue would cause any aircraft to fall out of the sky, I was simply pointing out that things that "would never happen" sometimes happen.
Oh and El Reg jumped on the "Milspec" bandwagon, why else throw it in the title?