Hackers have released tools that unlock the software stored on heavily fortified chips so researchers can independently assess their security and spot weaknesses. At the heart of the release, which was announced Wednesday at the Black Hat Security conference in Las Vegas, is Degate, software developed by Martin Schobert for …
...before close inspection of microchips becomes a criminal offence?
I assume that it already is
... it already is ...
Reverse engineering already outlawed by Intellectual Property or patent 'law' ?
hmm, better tell visual6502.org then. The ability to peel chips is a useful forensic tool.
Using the information directly to produce something in the same market is a different thing altogether. Just ask Atari when they copied the 10NES chip.
That's pretty cool...
... Now I won't get any work done today!
Examining anything is not illegal (though IIRC, analysis of media-protection technology is - hmmm, long arm of the RIAA corrupting our law). Anyway, even using information so gained is not illegal in most cases - think the "clean room 486", Microsoft network protocols, *.doc formats, etc. - as long as it is replication of function, not of implementation. Oh, and said function must not be a patented device.
Hacking into smartcards is fine, as long as there is no hint of conspiracy behind it. Clearly the intent of the disassembler is the defining factor - the white-hat/black-hat thing.
Equally clearly, this has not stopped the authorities duffing-up white hats because they're so much easier to catch, but that's fat lazy coppers for you.
#Reverse engineering already outlawed by Intellectual Property or patent 'law' ?
That can never be. The word 'patent' means openly known. It is applied to a brief license giving exclusivity as a reward for inventing and publishing, so we all may know.
Exactly, the law only comes into play when...
the knowledge gained is used, and that depends on how it is used and what it is used for.
This can only be a good thing ...
hiding bad algorithms in hardware is an attempted form of security by obscurity -- this is seductive but misguided. If the good guys can do it, then the bad guys & governments will do so as well; we need better algorithms, not secret ones that are flawed.
If they want to do that they need to change up their algorithms and...
go for the smallest possible feature size on the silicon. Adding some 'junk DNA' to the mix would be smart too.
@ This can only be a good thing
ALL security is security through obscurity of how to defeat *defenses*. You make a better algorithm, they just work around it another way. You spend your time fortifying that /other/ way instead and then they break the algorithm, or another way.
Ultimately the resources spent to guard the *prize* are always finite and have to be weighed against the value of the tech, its lifespan, and the overall security level, instead of idealizing only one part of the equation. Hardly anything can't be hacked, even if it requires tossing a researcher into the back of a van, hauling them away and torturing them for a few months.
All algorithms can be broken, hence this is a bad thing
All encryption algorithms can be broken, hence this is a bad thing.
The idea that there is some mythical hack proof algorithm or full function operating system is silly.
The only hack-proof encryption algorithm uses one-time pads, and even that can be hacked by getting a copy of the one-time pads.
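A one-time pad (the "one-time sheets" above) is just XOR with a truly random key as long as the message. A minimal sketch in Python - decryption is the very same operation, which is exactly why a copy of the pad defeats it:

```python
import secrets

def otp_xor(data: bytes, pad: bytes) -> bytes:
    # XOR each byte with the corresponding pad byte; works both directions.
    assert len(pad) >= len(data), "pad must be at least as long as the message"
    return bytes(d ^ k for d, k in zip(data, pad))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))  # random, used once, never reused

ciphertext = otp_xor(message, pad)
# Anyone holding a copy of the pad recovers the message with the same XOR.
recovered = otp_xor(ciphertext, pad)
assert recovered == message
```

Information-theoretic security holds only while the pad stays secret, is truly random, and is never reused - lose any of those and the scheme collapses.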
All algorithms can be broken?
The RSA cryptosystem for example, cannot be "broken" unless some very interesting advances in mathematics are realized. These advances may or may not exist.
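The RSA point can be made concrete with a toy example: the private exponent is derived from the factors of the modulus, so "breaking" RSA without the private key amounts to factoring. This is a textbook-RSA sketch with tiny primes for illustration only; real RSA uses 2048-bit-plus moduli and padding:

```python
# Toy textbook RSA (Wikipedia-style tiny parameters; illustration only).
p, q = 61, 53            # secret primes
n = p * q                # public modulus
e = 17                   # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

m = 65                   # message, as an integer < n
c = pow(m, e, n)         # encrypt with the public key
assert pow(c, d, n) == m # decrypt with the private key

# "Breaking" this means factoring n. Trial division works for a toy modulus
# but is utterly infeasible at real key sizes - absent the "very interesting
# advances in mathematics" the comment mentions.
factor = next(i for i in range(2, n) if n % i == 0)
assert factor in (p, q)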
Only a temporary reprieve
An acquaintance of mine was working on a technique to hinder visual inspection of microchips for his PhD. I seem to recall it involved printing a chunk of security circuitry across a lumpy substrate above the chip itself. Printing was used rather than the usual lithography techniques so you didn't need a flat surface, which made it slightly more awkward to open up than normal chips. The materials involved were also a lot more delicate, so they are much harder to expose cleanly using acids.
Still wouldn't be perfect, but you can bet these sorts of techniques will be used to defeat this sort of analysis in the not too distant future.
How clever are these people?!
The process involved in reverse engineering the algorithms from the circuit alone is mind-fsking!
Nothing new here that wasn't known about in the 19th century - Kerckhoffs spelled this principle out in 1883. If the security of a cipher depends upon attackers not knowing the algorithm, it isn't going to be secure unless you can prevent enemies obtaining _any_ instance of the system, which in practice prevents the system from being widely used.
That is why modern cryptographic security depends upon secrecy of a software (i.e. changeable) key. If an enemy obtains an instance of an asymmetric crypto system using individual embedded private keys then other instances remain secure once the public key of the compromised system is revoked. If symmetric keys are used, the system security property based upon secrecy of the symmetric key is restored when all instances using the symmetric key are securely replaced.
The WW2 Enigma codes were symmetric, and suffered the weakness that new keys being distributed to field units were encrypted using the old keys, so once the old keys were compromised, so were any new keys secured for distribution using old ones.
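The Enigma distribution flaw generalizes: any scheme that sends new keys encrypted under old keys lets an attacker who has recovered one key read every successor. A minimal sketch using a toy repeating-XOR cipher (not Enigma itself) to show the mechanics:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: repeating-key XOR (same function encrypts/decrypts).
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

old_key = b"OLDFIELDKEY"   # hypothetical key already compromised by the enemy
new_key = b"NEWFIELDKEY"   # replacement key to be distributed

# Headquarters distributes the new key encrypted under the old key...
transmission = xor_cipher(new_key, old_key)

# ...so an attacker who already holds old_key reads new_key off the wire.
assert xor_cipher(transmission, old_key) == new_key
```

The fix is to distribute replacement keys over an independent channel (couriers, or in modern systems an asymmetric key-exchange), never under the keys being replaced.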
There is no secure encryption algorithm period
Secrecy or no secrecy, there just plain is no secure encryption algorithm.
The security of all algorithms is merely in the time and resources it takes to break them.
Whether that security is academic study and time breaking into a secret, or determining the key, it all comes down to time and effort.
So this effort is on a par with the criminality of publishing part of a secret AES-256 encryption key.
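The "time and effort" point above can be quantified: exhausting a keyspace scales as two to the key length. A back-of-the-envelope in Python, with an assumed (hypothetical) attacker throughput:

```python
# Rough arithmetic behind "it all comes down to time and effort".
GUESSES_PER_SECOND = 1e12               # assumed attacker throughput (hypothetical)
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_exhaust(key_bits: int) -> float:
    """Worst-case years to brute-force every key of the given length."""
    return 2 ** key_bits / GUESSES_PER_SECOND / SECONDS_PER_YEAR

# A 56-bit (DES-era) keyspace falls in under a day at this rate;
# a 128-bit keyspace is far beyond any conceivable brute force.
print(f"56-bit:  {years_to_exhaust(56):.2e} years")
print(f"128-bit: {years_to_exhaust(128):.2e} years")
```

The asymmetry is the whole game: each added key bit doubles the attacker's work while costing the defender almost nothing, which is why "merely" computational security is still practically unbreakable at sensible key sizes.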
This is why hiding stuff in hardware is bad...
Remember Sony and the whole PS3 security debacle? There's this persistent train of thought among the people who are proponents of such hacking efforts that when you buy the machine, you automatically have ownership not just of the hardware, but of the software (which is wrong - legally speaking). Now, the reason I mention this is not to get that whole discussion going again, so let's just not go there, but rather to emphasize something that has always been stated in such examples. The purchaser owns the hardware.
Let me say that again: the purchaser owns the hardware. The firmware is software stored electronically; it is not part of the hardware and is therefore not owned. But the purchaser of a piece of hardware owns that piece of hardware. Now, here's the important part:
If a company designs a custom crypto algorithm into hardware and sells that device to people, the purchasers *own* it. They can do whatever they want with it: they can peel it open, put it under an electron microscope, scan it, analyze it - because they own it, it's perfectly legal. That custom crypto algorithm that is physically expressed in the hardware? The purchaser owns that too. There is nothing to stop them from analyzing and reverse engineering it, nothing. The law only says anything about what can be done with the knowledge gained through reverse engineering. However, since most people engaged in cracking custom crypto systems to make products that facilitate piracy are in the business of breaking the law anyway, it really doesn't matter to them, does it?
Perhaps appropriately, if the crypto is expressed in software there is more recourse in the law against those attempting to crack it, than there is if the crypto is done in hardware, because software is protected by copyright, license terms and all sorts of other wonderful legal tools.
So, as this kind of thing becomes ever more possible, expect to see crypto moving to software again, and expect hardware to come with very small amounts of flash memory that can physically only be flashed once and cannot be read off-chip - unless you happen to have a rig that can tap the data bus lines on the die itself, which is not currently feasible outside the blue-sky research done in places like IBM. That will be used to store fragments of code and/or key fragments that can be used in an isolated secure processing environment to provide runtime crypto services. Either that, or I think we'll see solutions such as requiring an always-on connection and regularly updating crypto services through encrypted software updates. Of course, that will require constant refreshing of the encrypted content too, but in a world of apparently limitless bandwidth, that isn't impossible.
Either way, the industry will not stand by and watch hardware crypto fall without some level of response, and since it's practically impossible to prevent someone from looking at the physical architecture of a piece of silicon, I'm sure we will see hybrid solutions that also reflect the legality of cracking attempts.
This pleases me
that is all.