In these days of cynicism and paranoia, and in view of recent revelations, the thought that something like TrueCrypt could have originated from one or other of the world's security agencies seems quite valid.
Security researchers are raising funds to conduct an independent audit of TrueCrypt, the popular disk encryption utility. TrueCrypt is widely used as a tool to strongly encrypt and decrypt entire drives, partitions or files in a virtual disk. It can also hide volumes of data and is said to be easy to use. The source code for the …
Chris G opined: "the thought that something like Truecrypt could have originated from one or other of the world's security agencies seems quite valid."
Indeed! What better way to spy on the world than to offer it free "open source" encryption which (a) cannot be compiled from its open source and (b) contains unexplained encrypted code in its downloadable binary, but is nonetheless "guaranteed" to have no back doors?
"Shake for me girl! I wanna be your backdoor man!"
@Noel and Jim:
(i) it's the Windows version that is very tricky to compile from source - and there are both benign and suspicious explanations for this;
(ii) Matthew Green has a good reputation technically and ethically - it doesn't seem likely that he would be involved in some game to pretend to audit TrueCrypt (unless you take the completely unworkable "trust no-one" tin-foil hat position).
In a few years the next revelation will be how Intel was providing the NSA a backdoor at a level no one imagined, i.e. closest to the metal, right inside their CPUs. They might be mixing known info in with the encrypted info; since known info can be figured out from encrypted data, the key can then be derived. Can anyone monitor the traffic between the NSA and Intel head-office servers!?
In the decades I've been in the software industry, I have met so many amateur qualified professionals, it's just not funny. "Oh, look, I have this fancy degree but I can't actually do anything that my degree supposedly certifies me as qualified to do."
Kenn White and Matthew Green need to say who it is that will do the audit, and why they've been chosen. I have personally worked on a project where the source came to me in a 20MB Zip file, compiled across multiple paths, and was just a pure mess of 2/3 C and 1/3 8086 assembly, and the compiler company had gone out of business. I got that entire product line compiling again. So, yeah, I'm asking the question, "who audits this?"
Sure, I'm all for an audit, but so far this isn't something that I would support with money.
Whilst I understand your view, I can't help but point out that it's flawed. Unless they pick someone you personally know and trust, you are still going to be in the same boat.
Even if they pick someone you approve of, the vast majority of people aren't going to be able to share or verify that view, so as a criterion for funding the effort or not it's largely meaningless.
This is a very good point. I think everyone agrees that open source is now the only way one can potentially gain any assurance of no backdoors. But you still need to look very closely at the code and how it behaves - and, of course, you also need confidence in the audit process itself.
So a program to publicly audit key pieces of FOSS for security weaknesses looks like a good way to go and Truecrypt is certainly a good test case. But I think the real work that needs doing next is on the auditing procedure.
How do you produce a public audit process that is itself secure against possible attempts to infiltrate it and overlook security weaknesses? I suggest you probably need at least two independent and well-known (and trusted) experts, probably with support, to produce independent and public reports. Then you may need a separate independent committee to review those reports and draw attention to (and investigate) any discrepancies.
I see the involvement of many people as being essential in building a web of trust that can't be easily subverted. We should perhaps start to see support for auditing security software as being just as important as supporting the writing of the code. If we had as many people doing the former as the latter, we wouldn't be in this mess.
At the same time, we'll no doubt continue to rely on penetration testing by individual security researchers, as we know that regularly turns up obscure ways to defeat security. The idea of a bug bounty is a good one here, I think.
Just some random ideas, really, but I think this is a key area of trust that urgently needs attention.
Even if you have all the code, and reviewed all the code, you are still compiling it with a compiler in binary form.
So then you would need to compile the compiler again with another compiler, but you have that compiler in binary form too, so you can't trust that one either. It's a chicken-and-egg problem.
This problem is probably solvable by writing your bootstrap compiler and linker in assembler (have fun doing that, see ya in a few years), using that to compile the GNU C compiler, and then using that compiler to compile TrueCrypt.
But then again, you are still using I/O functions from your development OS... so maybe you need to recompile that first as well... sigh...
I have an easier solution, if you don't want people to know some things, don't tell em, don't write it down, and certainly don't store it on a computer.
"I have an easier solution, if you don't want people to know some things, don't tell em, don't write it down, and certainly don't store it on a computer."
And if you have a bad memory or the information's not easy to memorize (like random data--poor fit for Memory Theater), you're basically hosed?
Anyway, it is possible to set up some chain of trust. You just need to hand-assemble something that can process a few bits of assembler code, use that to create a means to do more of it, and build up from there. Or you can hand-disassemble one of the low-level steps, verify it, then use the verified tool. Then you can take on a compiler with assembler code and build on up. And you can do all this from a bare-bones OS or from a setup where direct access is used, bypassing the OS. Just saying there are ways that don't have to take years. Weeks, a month or two, maybe, but not necessarily years.
I agree with Charles: the solution lies in disassembly of the compiled code.
The algorithm for encryption is known, so it should be possible to create a "theoretic" byte-code projection of the few functions that exist; these in turn should be compared to the compiled byte code.
After disassembly of the publicly compiled code, each individual assembly function should be compared to the logic of the original assembler or C code. The logic must match; no extra routines or unnecessary comparisons should be seen.
The main elements:
How many decrypting functions exist? More than one: red flag.
Which functions make calls to the decrypting function? Any more than one or two, start asking questions.
Which functions handle the secret keycode, and how does the function handle it? Leading-space stripping, uppercasing or lowercasing, reversal, byte-code comparisons, individual comparisons, etc.: red flag.
What are the parameter types sent to the decryption function?
I would agree, though, that it would be painstaking work, but it is definitely possible, especially when you have the original code.
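The comparison workflow above can be sketched in miniature using Python's own byte code as a stand-in for x86 disassembly. Everything here is illustrative (the functions and the XOR "cipher" are invented); the real job would use a disassembler against the shipped binary:

```python
import dis

def reference_decrypt(key, data):
    # Reference logic built from the published source
    # (a stand-in XOR cipher, purely for illustration).
    return bytes(b ^ key for b in data)

def shipped_decrypt(key, data):
    # What we recovered from the shipped binary (here, identical).
    return bytes(b ^ key for b in data)

def opcode_trace(fn):
    # The instruction sequence of the compiled function; an extra
    # routine, call, or sneaky comparison would change this trace.
    return [ins.opname for ins in dis.get_instructions(fn)]

# Same logic => same instruction trace; a smuggled-in extra key check
# would show up as a mismatch here.
assert opcode_trace(reference_decrypt) == opcode_trace(shipped_decrypt)
```

The same idea scales (painfully) to real disassembly: normalise each function to its instruction sequence, then diff against the projection built from the source.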
I would also put a small mention out to +ORC and his Martini-Wodkas...
"Even if you have all the code, and reviewed all the code, you are still compiling it with a compiler in binary form."
In the real world, this is not actually a serious problem. It can be done in theory...
...but I don't think a compiler exists that has enough Swiss-army-knife functionality to look out for a few dozen programs just to put backdoors into the crypto parts unseen. Ken Thompson's original idea was to finagle the lowly "login" program, which sounds feasible. Finagling GPG etc. via that method sounds like it needs an AI module in the package.
"Even if you have all the code, and reviewed all the code, you are still compiling it with a compiler in binary form.
So then you would need to compile the compiler again, with another compiler, but you have that compiler in binary form, so you can't trust that one either. So there's a chicken and an egg problem."
Although not solely related to security, this approach is actually being used in FreeBSD (I only learned this pretty recently myself).
If you compile a FreeBSD kernel (which is part of the source code for the OS), or the entire OS itself, the first thing that is done is compiling the base components which are required for building. These get placed in /usr/obj, and from that moment on everything else is built using that new set of tools.
As mentioned this is mainly done for optimization and not so much security, but I suppose one could argue that this could provide a little(?) extra where trust in the build tools is concerned.
Still, in the end a bit off-topic, considering that FreeBSD doesn't use TrueCrypt but relies on gbde (GEOM Based Disk Encryption) and the geli cryptographic subsystems (though TrueCrypt is available as well, as a separate program).
If you want a horror story, look at what Ken Thompson did inside Bell Labs, and this is probably child's play compared to what is possible now, 30+ years later.
An early Unix C compiler contained code that would recognise when the login command was being recompiled, and insert code recognising a specific password allowing him in.
He also made the compiler recognise when it was compiling a version of itself, and automatically insert the code to do all this again. Having done this once, he was then able to recompile the compiler from the original sources; the hack perpetuated itself invisibly, leaving the back door in place and active but with no trace in any of the source code.
Details are published in "Reflections on Trusting Trust", Communications of the ACM 27, 8 (August 1984), pp. 761-763 (text available at http://www.acm.org/classics/)
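As a rough illustration of how the self-propagating hack works, here is a toy model in Python. The "compiler" is just a function from source text to a runnable program; all names and code are invented for illustration, and Thompson's real hack of course lived in the C compiler binary:

```python
# Hack 1's payload: a master password spliced in ahead of the real check.
BACKDOOR = 'if password == "ken": return True\n    '

def evil_compile(source):
    if "def login(" in source:
        # Hack 1: recognise the login program being compiled and
        # splice the master password in before the real check.
        source = source.replace(
            "    return check(password)",
            "    " + BACKDOOR + "return check(password)")
    if "def compile(" in source:
        # Hack 2 (elided here): recognise a compiler being compiled and
        # re-insert both hacks, so clean compiler source still yields an
        # evil binary, with no trace left in any source code.
        pass
    namespace = {}
    exec(source, namespace)  # "run" the compiled program
    return namespace

clean_login_source = '''
def check(password):
    return password == "the-real-password"

def login(password):
    return check(password)
'''

prog = evil_compile(clean_login_source)
assert prog["login"]("the-real-password")  # the normal path still works...
assert prog["login"]("ken")                # ...but so does the back door
```

The unsettling part is Hack 2: once it has run even once, recompiling the compiler from pristine sources reproduces both hacks.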
I was about to comment on the same thing myself. It hardly seems paranoid, when it has already been publicly described, does it?
"It will be sure to stop (at least from the NSA) if it no longer exists."
Same people funding, same people spying, same game, different name.
On the other hand if the people at the top were to get locked up for their crimes (after due process of course, e.g. 5 years in Guantanamo), that *might* encourage others not to start up in the same game.
I have to say it's kind of sweet that many Americans seemed to have so much faith that their spy agencies weren't spying on them all along. I can see why recent revelations rattled so many of you, and it must have been genuinely quite nice to believe the assurances and platitudes for so long.
I've always assumed that traffic was routinely monitored not so much through paranoia (by all means watch my traffic, really don't care) but more... well they would, wouldn't they? They exist to spy, they are accountable to get best value for money, this way is cheap and effective compared with building up country-wide informant networks, probably nicer for the population too by and large. Pay people to spy, that's what they will do.
Chris Blackhurst, former editor of The Independent: "Edward Snowden's secrets may be dangerous. I would not have published them. If MI5 warns that this is not in the public interest who am I to disbelieve them?"
I mean, it's a neat sidestepping of all that paranoia.
I thought part of the NSA's mission is supposedly to be protecting Americans. How is it protecting Americans by weakening their privacy and protection from adversaries by weakening the tools they use for protection? OK, fine, I get it: their only mission is protecting the US government from the people, but even then I bet some of their subterfuge has hurt some of the other government departments as well. Just the fact an ex general is running the program pretty much explains everything.
"I thought part of the NSA's mission is supposedly to be protecting Americans. How is it protecting Americans by weakening their privacy and protection from adversaries by weakening the tools they use for protection? "
"We had to destroy their privacy and sense of safety in order to keep their information more private and their lives more safe."
But IRL "Why should we give a s**t how the American people feel? We didn't bother asking them when we took their privacy in the first place. "
"Just the fact an ex general is running the program pretty much explains everything."
The NSA is part of the Department of Defense, which is why a general is running it. It was a spin-off of the various spy agencies created in WW2.
They aren't specifically afraid of American citizens, but they don't trust them either. There have been a handful of terrorists who were American citizens, so now the NSA et al. are operating under the assumption that anyone could be a terrorist, and thus don't trust anyone unless they have a security clearance that has been verified.
The Windows version appends 64K of encrypted, supposedly 'random' data to the end of a TrueCrypt volume, whereas the Linux version pads it with 64K of encrypted zeros (which is presumably verifiable by decrypting them)?
The TrueCrypt team are meant to be encryption specialists... have they never heard of 'nothing up my sleeve' numbers? Or realised that encrypted zeros are indistinguishable from encrypted random numbers?
Definitely something funny going on there. I'd be interested to hear their explanation for the difference in behaviour, as well as why they thought it necessary, and whether they appreciate that unexplained data attached to a volume, which isn't verifiably information-less, is likely to cause concern in today's environment.
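The point about encrypted zeros is easy to demonstrate with a toy stream cipher. This is not TrueCrypt's actual cipher; the SHA-256 counter-mode keystream below is purely a stand-in:

```python
import hashlib, secrets

def keystream(key, n):
    # Toy keystream: SHA-256 in counter mode (illustrative only).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor_crypt(key, data):
    # XOR stream cipher: the same operation encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)
enc_zeros = xor_crypt(key, bytes(64 * 1024))                 # Linux-style tail
enc_random = xor_crypt(key, secrets.token_bytes(64 * 1024))  # Windows-style tail

# Without the key, both tails are indistinguishable noise. With the key,
# only the zero padding is verifiably information-less:
assert xor_crypt(key, enc_zeros) == bytes(64 * 1024)
# enc_random decrypts to... random bytes, which proves nothing either way.
```

That is exactly why 'nothing up my sleeve' padding matters: encrypted zeros cost nothing in deniability but let an auditor confirm the tail carries no hidden payload.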
So I'm looking at the format spec and it looks like the last thing in the file should be the backup hidden volume header. If no hidden volume exists, it should be random data. (This is to provide plausible deniability for volumes that DO have one.) But you're saying that on Linux it's zeros instead? I'm not sure what that points to exactly, but it's certainly odd, and sloppy.
'The 'problem' with TrueCrypt is the same problem we have with any popular security software in the post-September-5 era'
Sorry to change the subject from all this intelligent chit-chat about cryptography but what happened on September 5th? Sainsbury's delivered my shopping nice and early for a change but I'm sure that can't be it.
Nothing changed when Snowden started talking. The NSA is what you saw. The shadow government(s) you do not see, because you don't want to. It's funny to see everyone shocked and scrambling. It's laughable and pathetic. You'd rather be ignorant. So when they shut down the NSA, will everything be all rosy again? Perhaps you should be a little more simple-minded and recognise the pattern. It's not going to change any time soon. Get used to it.
"Windows gives out zero'ed blocks of memory, "
*Windows* may give out zeroed blocks of memory - I don't recall because it has been a long time since I needed to allocate memory directly from Windows itself(*) - but so what? malloc() is not a system call, especially on Windows. It is part of the C runtime library, and on Windows it normally carves up big blocks of memory allocated from the OS. But, being malloc(), it doesn't do anything special with the contents (except in debug builds, where they are often poisoned with some arbitrary value - Visual C++ uses 0xCD if memory serves, to produce a 32-bit (or 64-bit on Win64) value that can't be used as a valid pointer to anything(**)). In particular, except in the poisoning case, it doesn't wipe the previous contents...
(*) It is even longer since I relied on the contents of a memory block allocated by malloc() being anything but 'uninitialised'. If you want guaranteed values in your allocated memory, use calloc().
(**) OK, when running a Win32 program on Win64, with certain marks on the .EXE, pointers could be from anywhere in the 32-bit address space, but this value was chosen when Win32 was king and there wasn't any Win64 on x86 because there wasn't any x64. At that time, .EXEs marked appropriately could have access to 3GB of address space, and 0xCDCDCDCD is in the fourth GB...
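For what it's worth, the malloc()/calloc() distinction is easy to poke at from Python via ctypes. This sketch assumes a Unix-like libc and uses only the standard C runtime calls:

```python
import ctypes, ctypes.util

# Load the C runtime (falls back to the main process image on Linux).
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)
libc.malloc.restype = ctypes.c_void_p
libc.malloc.argtypes = [ctypes.c_size_t]
libc.calloc.restype = ctypes.c_void_p
libc.calloc.argtypes = [ctypes.c_size_t, ctypes.c_size_t]
libc.free.argtypes = [ctypes.c_void_p]

n = 4096

# calloc(): contents are guaranteed to be zero.
p = libc.calloc(1, n)
zeroed = ctypes.string_at(p, n)
libc.free(p)
assert zeroed == b"\x00" * n

# malloc(): contents are "uninitialised" - they may happen to be zero
# (fresh pages from the OS) or may be old heap contents; rely on nothing.
q = libc.malloc(n)
leftover = ctypes.string_at(q, n)
libc.free(q)
```

Which is the point made above: if you want guaranteed values, ask calloc(); anything malloc() hands you is an accident of where the block came from.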
Building TrueCrypt from the tarball TrueCrypt_7.1a_Source.tar.gz comes with several requirements, inter alia:
RSA Security Inc. PKCS #11 Cryptographic Token Interface (Cryptoki) 2.20 header files (available at ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v2-20).
An audit should take this into account. BTW FTP is blocked by my firewall by default.
This is a no-brainer to me. If the compiled binary TC supply has an extra 64K block of data that neither the source code nor TC can or will explain, then by definition TC can not be trusted.
TC's silence about this matter proves to me that either:
A) they do not understand why trust and openness is paramount in a security tool, or
B) they have something to hide.
TC have just invalidated themselves through their silence. I don't have to be some sort of security expert or even a programmer to conclude this - it's common sense.
Biting the hand that feeds IT © 1998–2019