Re: what about the silicon?
"Agreed. It seems unlikely that given what else they've been up to, the spooks haven't attacked the low level stuff."
I've no idea whether or not the spooks have attacked the low-level stuff, but I wouldn't be the least bit surprised. Another obvious and almost guaranteed avenue of attack on low-level hardware and firmware is industrial espionage by commercial operators. Then there are governments with plans for secret and/or mandatory backdoors into smartphones. However, for the protagonists to effectively render any of these schemes into silicon, they would need to be well resourced and well funded.
This isn't entirely idle speculation either. No specifics, but the matter of whether certain important security hardware had been 'gotten at' and contained 'suspect chips' came across my desk about 15 years ago. In IT terms, that's eons ago. If the issue reached me that long ago, then you might like to speculate on what's transpired since. I'll leave you to guess. It's ludicrous to think that back then we were the only ones considering such matters; of course we weren't. Furthermore, even then such ideas were far from new (had they been, I'd like to think I'd have patented the concept). ;-) (BTW, a while back I was laughed at in these posts for having the hide and temerity to even suggest such things were possible with silicon. Oh dear.)
At the time, we were concerned about the possibility of specially designed Trojan components (ICs, ASICs etc.) with 'modified' silicon being incorporated into systems and equipment: devices that carried normal part numbers and the usual physical characteristics, and that worked in the standard/specified way so as to fool investigators, but that upon receipt of secret commands would also perform 'extra' undocumented (hidden) tasks.
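To make the idea concrete, here's a minimal toy model (in Python, purely for illustration; the magic trigger value and the class are my inventions, not any real part) of a component that passes every black-box functional test, yet arms hidden behaviour when it sees a secret command word:

```python
# Toy model of a hidden-trigger Trojan component. All names and values
# here are hypothetical, invented solely to illustrate the concept.

MAGIC_TRIGGER = 0xDEADBEEF  # hypothetical secret command word

class TrojanedAdder:
    """Behaves exactly to spec until it receives the secret trigger."""

    def __init__(self):
        self.backdoor_armed = False

    def compute(self, a: int, b: int) -> int:
        if a == MAGIC_TRIGGER:
            # Secret command: arm the hidden mode, but still return the
            # 'correct' answer so the trigger itself is invisible.
            self.backdoor_armed = True
            return (a + b) & 0xFFFFFFFF
        if self.backdoor_armed:
            return ((a + b) & 0xFFFFFFFF) ^ 0x1  # subtly corrupted output
        return (a + b) & 0xFFFFFFFF              # normal, spec-compliant path

# A black-box test suite that never happens to send the trigger
# sees a perfectly good part:
adder = TrojanedAdder()
assert all(adder.compute(x, y) == (x + y) & 0xFFFFFFFF
           for x in range(100) for y in range(100))
```

The point of the sketch is why detection is so hard: unless your test vectors include the one magic value (out of billions), the device is indistinguishable from a genuine part.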
Your post about Ken Thompson's 'doctored' compilers, together with JimmyPage's about silicon, immediately reminded me of issues we'd actively investigated years ago: specifically, that backdoor techniques can also be incorporated into, or patched into, silicon compilers. Consider it; the ramifications are just humongous. At the time the matter arose, our principal concern wasn't the exact specifics of how silicon or other components were or could be modified; rather, it was how to go about detecting 'dodgy/suss' ICs and other components, or even complete subsystems or modules that might be fully encapsulated yet suspect (does one test, or break open, a black box?). And I can assure you, it ain't easy!
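For anyone who hasn't read Thompson's "Reflections on Trusting Trust", the trick is that the backdoor lives in the compiler, not in any source you can audit. A heavily simplified Python sketch (the function names and the 'joshua' password are invented for the demo, and the compiler's self-reproducing stage is deliberately elided):

```python
# Toy illustration of the Thompson 'trusting trust' compiler hack.
# Here 'compiling' is just source -> source, to keep the demo tiny.

CLEAN_LOGIN_SRC = '''
def check_password(pw):
    return pw == "correct-horse"
'''

def trojaned_compile(source: str) -> str:
    """A 'compiler' that plants a backdoor whenever it recognises
    the login program it wants to subvert."""
    if "def check_password" in source:
        return source.replace(
            "def check_password(pw):",
            'def check_password(pw):\n    if pw == "joshua": return True')
    # Thompson's full attack has a second trigger: when compiling the
    # compiler *itself*, it re-inserts both triggers, so even a clean,
    # fully audited compiler source keeps producing trojaned binaries.
    # That self-reproducing step is elided in this toy.
    return source

# 'Run' the compiled output: the audited source is clean, yet the
# resulting program accepts the secret password.
namespace = {}
exec(trojaned_compile(CLEAN_LOGIN_SRC), namespace)
check_password = namespace["check_password"]
```

Swap 'compiler' for 'silicon compiler' and 'binary' for 'mask set' and you have precisely the scenario above: the RTL you review is clean, but what comes out of the foundry isn't.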
Since then I've examined the matter at some length: silicon compiler hacks are feasible and doable. That said, don't immediately jump to the paranoid conclusion that it's happening everywhere. Essentially, it'd be a large operation needing resources of a magnitude that only governments and large corporations could command (it'd also need complicity between government and foundry).
Nowadays, users have little or no control over the BIOS/flash ROM as they once* did; instead, hardware and software manufacturers have combined to essentially lock users out of that part of their PC, leaving them little more than token access. Similarly, there is considerable potential for 'compromised' chips, ASICs and other 'modifiable' hardware to undermine security completely. PCs are not alone; many types of tech gear remain highly vulnerable security risks as they too have BIOSes, and as technology becomes denser, the problem is only going to get worse. Instead of manufacturers closing loopholes, they keep them open, or even widen them, for their own convenience. BIOS hacks and patches (such as the patched ROM hack that's likely on your laptop this very moment) are easily installed by manufacturers and hackers alike, but the reverse is seldom true for users.
* It's no exaggeration: I used to compile the assembler BIOS source on my Godbout CompuPro. It gave me incredible control over the system, control I do not have on a PC.
So who's the true beneficiary now that your mobo's BIOS is easily hacked and controlled? Right, it's not the user (fewer and fewer tweaks are being made available in the BIOS UI of many modern machines; fewer tweaks mean fewer help-desk calls, but they also mean many machines are running sub-optimally). Not only is control being continually whittled away bit by bit from users in the BIOS, it's also happening in Windows. With every new release, users have increasing difficulty accessing the 'low-level' areas within Windows (it literally takes me weeks to tailor Windows the way I want it; it sucks, and it ought not to be necessary).
This raises the question of who's ultimately responsible for security breaches when manufacturers deliberately remove control from users. For example, mobos once had 'hard' DIL switches that couldn't be tweaked by Windows or by remote hackers; now there are no physical DIL switches at all, as MS and others forced their removal, only to have them replaced with 'soft' switches that Windows can actually command. (And in many instances, that's worrying.) Moreover, why do hard disks no longer have a mechanical write-protect switch as they once did?
The ever-increasing 'soft' control over vast amounts of our new (and older) technology is becoming a serious problem. For example: once, a sluice gate on a storage dam had to be opened manually; now, more often than not, it's opened by software from some remote terminal, and (stupidly) there's no human on hand if something goes seriously wrong. Matters are then made even worse when this 'improved', now-more-fragile system becomes the subject of threats, hacking etc. We now have instances where critical infrastructure has become more vulnerable simply because of the actions of idiots who want to remote-control a previously intrinsically reliable system that's worked for decades, solely on the grounds that it can be done (the excuse being that it's cheaper)!
What happens next would put a circus to shame: the government and its band of woolly thinkers have to tighten security laws around critical infrastructure because they've weakened it, and in the end it'll cost a damn sight more to fix than if they'd left it alone in the first place. In the world of engineering in which I grew up, we would have deemed this madness. More accurately, we're now living in a world where the digital-addiction meme is more contagious than the influenza virus.
It doesn't require much stretch of the imagination to figure out where the next step leads: mass government surveillance has come about not because it makes sense but because it can be done easily. Put that together with the fact that people have fallen in love with the technology, and the woolly thinking becomes even woollier, as they're no longer thinking rationally. (The masses falling in love with their tools is, historically, a unique phenomenon; in the past, tools meant work.)
Like the Emperor's new clothes, almost everyone's now caught up in a wave of general madness, either because they want to see the 'clothes' or because they don't want to admit that they can't, for fear of embarrassment. Correct, this is crowdsourcing gone crazy! Plato covered this in the Republic millennia ago: when you're sick, go to the best advice available, a doctor; only a bloody fool would prefer the alternative of asking the lowest-common-denominator opinion of a crowd.
Ipso facto, we're now more vulnerable to government snooping, violations of privacy by manufacturers, infiltration by serious hackers, and exposure to data and ID theft, and now it's clear why. Moreover, there seems little way through the impasse: technology is progressing in leaps and bounds, yet seemingly simple matters such as using it are fraught with difficulties. Deep down, as years of experience attest, Microsoft et al. truly do not want the great unwashed in full control; we're simply not trusted. And because they didn't listen to us, that's exactly why we've ultimately ended up with abominations like Windows 8.
All this ought to be of considerable concern to users, but I see little in the Big Picture Department except inaction. (Somehow, there's still a widespread belief that Microsoft etc. knows best. It's utter bullshit of course.)
Hope I'm wrong, but I reckon the security issues will likely soon reach the stage where the bottom (physical) layers of the OSI model are sufficiently compromised that business communications over the internet degenerate into a hit-and-miss affair (or should that be 'mess'?). Embedded and other systems, which have had it good until now, will also be in for a similar shock. Pessimistic perhaps, but I don't think so. After all, if it were possible to equate dissimilar standards, what other industry could be so degenerate as to tolerate the equivalent of IT security's standards? Security so poor that the NSA, GCHQ, DSD etc. look into our private lives with such ease it's as if they're looking through transparent glass; then, almost every day, El Reg carries spectacular stories such as the heist of millions of credit cards; and then there's the almighty Microsoft patch saga: year after year, decade after decade, MS continues to issue thousands of patches for its mouldy, over-bloated, holier-than-Swiss-cheese operating systems, yet it still has the audacity and unmitigated hide to pontificate that its OSes are state of the art.
It's hard to deny the IT industry has sleazebag ethics that'd put used-car salesmen to shame.