Nice try
In order to defeat the Trojan that the NSA has installed on your PC, they'll need to encrypt the data before it gets to your keyboard. And then the NSA will initiate a covert program to have us all chipped.
The cleverest clogs of MIT have squared up to the NSA after claiming to have developed a PRISM-proof encryption system. Dubbed Mylar, the spook-bane allows devs to build web applications which are protected from attackers, even if they have access to the server that stores the software. Its creators were upset that anyone who …
This is supposed to defeat snooping on the wire. You'd need key loggers on every device in the world to get the same coverage they can currently get server side.
The fact that this tech can be compromised on the device means you need further security, but that doesn't completely invalidate this approach.
In order to defeat the Trojan that the NSA has installed on your PC, they'll need to encrypt the data before it gets to your keyboard. And then the NSA will initiate a covert program to have us all chipped.
Err, no, the "duh" here is that MIT has become yet another set of propeller heads that fixed what didn't need fixing. The security of Lavabit and Silent Circle was OK too. They didn't close shop because there was a technical problem, they closed because they discovered themselves legally defenceless against lawful demands for data - which they had to provide unencrypted.
It really doesn't seem to sink in that the current issues with security are not technical at all (or maybe there is so much technical noise that it camouflages the real issue): whatever brand of crypto safe you use is entirely irrelevant if you can be legally forced to open up the damn thing for any official who feels like having a peek.
This has actually been a massive problem for a few years for US providers, but it's only now becoming very clear after the EU decided not to yet again bow to lobbying and blackmail (IMHO, the US political position was dramatically weakened by the Snowden revelations). Technology is not the problem - US law is. And as we are dealing with federal law it will take YEARS to fix, if ever.
Sorry I'd missed the part about all the NSA's snooping being lawful and legal. I'd somehow got the impression they'd been engaged in covert and illicit activities.
Alas, it appears as long as the US decides it's OK, the whole world is declared free game (that judgement is but one example). But you miss the point. Even in the unlikely case that NSA activities get declared illegal and some budget shuffling appears under the name of "fine" (I'll wait until the laughing has died down here from NSA et al), the problem remains that US law has become such a soup that due process quite simply no longer exists, and to a lesser extent that is true in the UK as well.
The magic word used to be "communist", now it's "terrorist", but the excuses to bypass due process in the name of "protecting us" just keep on coming. As long as a government has a need to rely on "national security" to avoid scrutiny of its actions, you must keep on asking "what do THEY have to hide" - exactly because they would want us to really stop being so awkward.
Let me quote you almost verbatim some lines from the *excellent* Worricker trilogy currently being aired by the BBC, from the episode Turks & Caicos:
Downing Street used to be the height of a man's ambition, now it appears to be only a stepping stone. Alec Beasley isn't a politician, he's a statesman, an altogether more profitable line of work.
After all, he's going to have to do something after he leaves office.
Is there any territorial dispute in the world he's not qualified to settle for a large salary from the luxury of a suite in a five star hotel? Lucky, isn't it, that he has a fund waiting for his global good works?
Remind you of anyone?
What's news here? In-browser encryption has been proposed (and rightly laughed at) a hundred times before.
The problem is that, if an attacker has control of the server, they can compromise the code that your browser is running in order to do any number of malicious things - for example: pretend (but don't actually) encrypt the data; send a plaintext copy somewhere else; encrypt using some known key; encrypt using some deliberately weakened cipher...
Until browsers provide native support for crypto functions (see http://www.w3.org/TR/WebCryptoAPI/ for the W3C's efforts) with some unspoofable UI (I'm not aware of any attempt to standardise this), such suggestions are FUD. And even then, one has to rely on the security of those crypto functions - which, as we now know, have been seriously compromised by TLAs.
One can go and read their paper http://css.csail.mit.edu/mylar/mylar.pdf
Their "solution" is nothing new: use a browser extension to provide a trusted codebase. Many others have done this before.
Problem is, there's no guarantee that the webapp is correctly utilising the extension and not spoofing it. Hence what I said above re requiring an unspoofable UI.
@nickety: One can go and read their paper...
So I did - thanks for the link. They are aware of the simple fact that to *share* data (as opposed to storing it "in the cloud" for one's private use with free backup or something) users need to exchange keys. All I saw in the paper was an acknowledgement of a need for a trusted 3rd party that would sign the keys. They seem to think that separating the "IDP" ("identity provider") from the server that stores the data is novel and significantly better than letting the server itself verify the keys. Huh?
I am very reluctant to chalk it up as even a partial success against an adversary such as NSA/GCHQ/FSB.
Yes!
This is what we need to fight back against all those who consider your data fair game for their own purposes. A pity this won't affect the likes of Facebook though.
I was praying that the smart people outside the security organisations would be as angry as a lot of people here on El Reg and would start to push back. And now it has started.
I do have a worry though. Will the likes of NSA and GCHQ etc. be content to let this happen? Possibly not, and any researchers in this field would be wise to watch their backs closely. We are of course up against organisations whose raison d'être is to secretly manipulate and interfere with what they perceive as threats, often not to the country which they are supposed to be protecting, but themselves.
Still it's a start.
"A pity this won't affect the likes of Facebook though."
As usual, it depends on the value of the data being slurped, doesn't it?
They can slurp mine all day -- a profile can be built up but they still won't know my name, age, gender etc. etc.
At the same time, how many who crow 'I'm not on Facebook!' never check to see what info their 'smart' phone/tablet is broadcasting?
They can slurp mine all day -- a profile can be built up but they still won't know my name, age, gender etc. etc.
Never underestimate the power of a jigsaw attack. If they track your actual IP (even if they have to work their way through some proxies), they could talk to your ISP and get the vital details (which they know since you're their customer) which they can check against public government records, etc.
..on whom you perceive your 'adversary' to be.
I have been constructing a private 'cloud' to enable a distributed organisation to access sensitive data across the internet. Why? Because the alternative - people holding some or all of that data at home or on laptops - is far, far more risky than having it held centrally under a few sysadmins' control, and if it does leak, you know where to point the finger.
Oddly enough it's close to the sort of model the NHS needs. Limited access for GPs to only their patients' data, but broader access for, say, certain people in a hospital environment. And statistical information only, for health care analysts.
Here you know who the adversary is, and it's not the government. Not at the spook level. It's individuals who could profit by revelations of private data. Or competitors who could profit by access to commercially sensitive data.
In-browser encryption is nice, if you trust the browser. BUT there is a problem in a multi-user arena: you want the data entered by one person to be available to another, or it's no damned use. So in the end you are down to a name/password being all that is required to unlock the data.
And guess where the name/password combo has to be stored? On the server...
Now I may have misunderstood this but it seems to name that unless you are say using a different way to validate and authenticate users that I haven't thought of, in the end you are assuming at some level that the machine that authenticates is inviolate.
But that machine itself has a sysadmin.
Let's say you have a database containing encrypted data and also user names and passwords.
If you take a copy of that, you merely have to duplicate a machine, install the data on it, and run a password cracker against it.
Sooner or later your private cloud is entirely open to someone else.
In essence it boils down to a simple fact: man-in-the-middle attacks can, with care, be circumvented.
But the endpoints will always remain vulnerable. Forget digital: think 'cipher letter'. If you get an enciphered letter you have to decode it. If someone is watching you do that, or gets access to the deciphered letter, the code is broken.
Likewise, if the conditions of being sent that ciphered letter are that you have presented credentials to some third party, then that third party itself, if compromised, can give the same or different credentials to an attacker.
In any given security case there is no perfect answer. You have in the end to trust the endpoints.
Personally, one server and one sysadmin is to my mind more trustworthy than arbitrary browser code.
But neither are inviolate.
"I have been constructing a private 'cloud' to enable a distributed organisation to access sensitive data across the internet: Why?..."
Since some of your fundamental assumptions on security and encryption are wrong, you are probably not the best qualified person to create such a cloud. I suggest you google Diffie-Hellman key exchange to get some of the basics. Then you can revisit everything you have done so far...
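For anyone who hasn't met it, the core of Diffie-Hellman fits in a few lines of Python. This is a toy sketch only - the parameters here are illustrative, and real systems use 2048-bit-plus groups (e.g. RFC 3526) or elliptic curves:

```python
import secrets

# Public parameters (toy sizes -- real deployments use much larger groups).
p = 2**127 - 1   # a Mersenne prime
g = 5

# Each party picks a private exponent and publishes only g^x mod p.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)   # Alice sends A
B = pow(g, b, p)   # Bob sends B

# Both sides arrive at the same shared secret without ever transmitting
# it; an eavesdropper who sees p, g, A, B faces the discrete-log problem.
alice_key = pow(B, a, p)
bob_key = pow(A, b, p)
assert alice_key == bob_key
```

The point being: two parties can agree on a key over a wire the adversary is reading, which is exactly the kind of basic that matters before building a private cloud.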
Since some of your fundamental assumptions on security and encryption are wrong, you are probably not the best qualified person to create such a cloud.
Indeed. I wouldn't trust any single person - not Eric Young or Steve Henson or Phil Zimmermann or Bruce Schneier or anyone else - to create from scratch some sort of "private 'cloud' to enable a distributed organisation to access sensitive data across the internet [sic]". That requires strong theoretical knowledge of cryptographic algorithms and protocols, information security, and networking; and it requires the correct implementation and verification of far too much code.
Even if someone were building it out of well-established components (OpenSSL, say, with symmetric authentication and some suitable application layer), there are still too many moving parts to have a good chance of one person avoiding all the likely pitfalls. The probability of creating anything more secure than the typical VPN is pretty low.
Which, of course, is why we see this sort of thing coming from someone who doesn't know enough about communications security to know what he doesn't know about communications security.
I suggest you google Diffie-Hellman key exchange to get some of the basics.
I wouldn't (suggest that). Someone who writes "it seems to name that unless you are say using a different way to validate and authenticate users that I haven't thought of, in the end you are assuming at some level that the machine that authenticates is inviolate" needs to start with a good text on computer security (like Anderson's Security Engineering), move from there to a book on cryptography (probably Applied Cryptography), then to something like Rescorla's SSL and TLS to see how cryptography is applied in a real system to address some of the security issues.
This stuff is as far from trivial as it gets. Just the intricacies of X.509v3 extensions can occupy a few days' research. Building distributed general-purpose cryptosystems is very much specialist work, where a little knowledge is not just dangerous - it defeats the entire purpose, and provides a false sense of security in the bargain.
(How do users authenticate themselves to untrusted servers? With trusted-third-party protocols like Kerberos; or with PKI protocols like the X.509 certificate hierarchy or the PGP web of trust; or with zero-knowledge-proof protocols like SRP. None of those require the user and server share a secret. In fact, even simple password protocols don't require that the server have the user's password; that's what authenticators like salted password hashes are for. And there are ways to avoid letting the server see the user's password, for example using a 1WA. Or the client can authenticate the server before sending credentials, as is commonly done with SSL/TLS.)
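To make the "salted password hash" point concrete, here's a minimal Python sketch using stdlib PBKDF2. The server stores only (salt, hash), never the password, so a stolen database forces an attacker into a slow, per-user brute force rather than a single rainbow-table lookup:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow to raise offline-cracking cost

def make_authenticator(password: str) -> tuple[bytes, bytes]:
    """Server stores (salt, digest) -- never the password itself."""
    salt = os.urandom(16)  # per-user salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = make_authenticator("correct horse battery staple")
assert verify("correct horse battery staple", salt, digest)
assert not verify("wrong guess", salt, digest)
```

So the "copy the database and crack it" attack described above is real, but against salted, iterated hashes it costs the attacker a brute-force per account, not a lookup.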
>> But they have to get the key logger on every device to cover the internet.
So they file a secret order for Microsoft and Apple to issue a patch with a default key logger in their OSes - they are US companies after all. Actually most "windows" receive key and mouse events anyhow, so there's not much to add, since it is there already by design.
Plausible deniability (a la TrueCrypt etc) is a perfectly sound defence. "Here's my password to that encrypted virtual disc and as you can see, there's nothing naughty on it." That there is another encrypted volume within that one is not demonstrable and certainly not provable, not even by the NSA nor CESG (the correct name for GCHQ). AFAIK... unless someone has evidence to the contrary.
So you show them the blank 10G partition. Most plods, especially the computer forensic kind, look at the hard disk, see it is 1TB, and can figure out that there's 990G of stuff that you're hiding from them.
Heck they probably have a batch file/script which detects this and they don't even need to do the mental arithmetic.
That's not how hidden partitions work.
Both the decoy and hidden partitions use the full capacity of the hard disk. When accessing them normally you must provide both sets of credentials so TrueCrypt can ensure that the two partitions don't both attempt to use the same blocks on disk. If you get pinched, you give up the password for the decoy partition and forget about the hidden one. With only one set of credentials TrueCrypt assumes there is only one partition and allows it to use the full disk.
I think there are some smarts so that when accessing the hidden partition the decoy credentials are automatically available so you don't need to type them yourself, going the other way obviously you need to type both.
They are being told that this is needed to find terrorists. …. Pascal Monett
I wonder how quickly they will realise that all attempts at providing unbreakable encryption and security creates terrorists and new heroes within systems admin ranks to take over leading make over of systemically flawed operating systems. And the one is the other a la poacher turned gamekeeper, although in these particular and peculiar cases would that be more APTly phrased, crack hacker turned heavenly gatekeeper, given the doors that can be opened and locked closed with the keys that one would have.
And it is always best to have those bods and boffins/guys and gals on your side because of the unbelievable untold damage that they can so easily do.
Is that the same as if someone has hacked the nuclear weapons launch codes, only more so because the damage that can be done is more accurately and acutely and astutely targeted at real and actual command and control controllers and not at the masses, who let’s be honest about it here, are usually pig ignorant about the state of the worlds in which they are living and being used and abused and misused.
Advise them of the true nature of their condition and the position they be kept in through the contrived maintenance of their ignorance though, and methinks they will be less than well pleased with that and/or those which have been leading/ruling/enslaving them.
"Who are these people..."
Every now and again you'll get into some random conversation in a pub or on a train with some swivel-eyed paranoiac whose arguments make less sense than William Hague's, and whose worldview seems to suggest they actually hail from a parallel dimension, rather than Ealing as they claim.
Well that's them, or at least their spiritual brethren.
They tend to talk in Omens of Dark Portent, use 'you' more or less as an accusation (often prefaced by 'people like') and slur the word 'liberal' a lot, sprinkling conversations with an excessive use of words like 'patriotic', 'dangerous', 'proportional' and 'sacrifice', while putting a somewhat peculiar emphasis on 'democracy'. They tend to be on the bluer end of white and react badly to anything brown, non-English speaking or heavily spiced (unless they're from Texas), and often have no detectable neck or sense of humour.
I'm not talking about the lunatics, but the coders. Not that there aren't lunatics as well among coders but, looking at what they do (and did), we are talking about bright people and, I suppose, highly educated ones.
There's an esoteric academic subject named "History" that you might want to investigate. Bright, highly-educated people are the foundation of oppression. The foolish and uneducated might provide the brute force when it's needed, but it'd all be small-time thuggery if not for the clever sparks.
Question: how can they perform a keyword search of encrypted documents without decrypting them first? Furthermore, if one has knowledge of the desired documents, couldn't one use keyword searching to narrow down the suspect documents, determine the owner of the singled-out document, and then bring out the rubber hoses?
I wondered that, and looked at the paper just long enough to find out that on encoding a document the system also encodes a list of the words it contains.
To search a document one supplies encoded words - the server can then say whether there's a match, but not what the words are.
Presumably though if the spies were already interested in a particular document, they could observe searches which gave hits in it.
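For illustration only - this is the naive version of searchable encryption, not Mylar's actual construction - the simplest scheme keys each word with an HMAC under the user's secret, so the server can match tokens without ever seeing the words:

```python
import hashlib
import hmac

def token(key: bytes, word: str) -> bytes:
    """Deterministic per-user search token for one keyword."""
    return hmac.new(key, word.lower().encode(), hashlib.sha256).digest()

user_key = b"sixteen byte key"  # hypothetical user secret, client-side only

# On upload, the client stores the encrypted document plus a token set.
index = {token(user_key, w) for w in "meet at the safe house".split()}

# To search, the client sends a token; the server tests membership
# without learning the underlying word.
assert token(user_key, "safe") in index
assert token(user_key, "bomb") not in index
```

The catch is exactly what the next comment raises: identical words always map to identical tokens, so token frequencies leak information, which is one reason the real scheme in the paper is considerably more elaborate.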
Haven't read the paper, so this comment could be wide of the mark, but if a table of words is supplied to a server for subsequent comparison, that implies the same crypto key for all tables. That would seem to translate to hashing the words to generate a table of hashes for each document. So that then raises the question of rainbow table attacks. Even if the browser holds a salt which is kept private from the server, would this open it up to a frequency analysis attack, i.e. just have a rainbow table for different salts of the same high-frequency word?...
No. I skipped too much of the detail to properly understand, but it's not a general hash table. That would be an obvious flaw.
Looking at it again - the user computes a search token using their private key and the search-word. The server then computes search tokens for every document key they have access to using "deltas", which are "cryptographic values that enable a server to adjust a token from one key to another key". (I didn't worry about exactly how that works.) The deltas can be reused for other searches - they are generated by the user on gaining access to the document (i.e. getting the key to decrypt it) in the first place, and given back to the server at that point.
There are still risks to this scheme, which they mention in the paper.
For example, if you search maliciously supplied data (e.g. a dictionary), then the adversary can match the word to the user's token, and hence determine the search word. So they mitigate that - you need to explicitly accept access to a document.
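For the curious, the "adjust a token from one key to another" trick can be sketched with plain modular exponentiation. Mylar's real construction uses elliptic-curve pairings, so treat this purely as the algebraic idea, with toy parameters throughout:

```python
import hashlib
import math
import secrets

p = 2**127 - 1   # toy prime; real schemes use pairing-friendly curves
g = 5

def h(word: str) -> int:
    """Hash a word into an exponent."""
    return int.from_bytes(hashlib.sha256(word.encode()).digest(), "big") % (p - 1)

def keygen() -> int:
    """Pick a key invertible mod p-1 so deltas can be computed."""
    while True:
        k = secrets.randbelow(p - 1)
        if math.gcd(k, p - 1) == 1:
            return k

k_user, k_doc = keygen(), keygen()

# The user computes a token under their own key and sends it.
token_user = pow(g, h("secret") * k_user, p)

# The "delta", generated when the user was granted access to the
# document, lets the server shift the token to the document's key
# without learning the word: delta = k_doc / k_user (mod p-1).
delta = (k_doc * pow(k_user, -1, p - 1)) % (p - 1)
token_doc = pow(token_user, delta, p)

# The adjusted token matches what the document key would produce.
assert token_doc == pow(g, h("secret") * k_doc, p)
```

So one token from the user can be fanned out across every document key the server holds deltas for, which is what makes multi-key search over shared documents workable.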
The biggest problem at present with government surveillance is not that they can spy on terrorists or selected people that they decide to focus on - it is that they can pretty much spy on everyone simultaneously. And this is what they're doing.
And they can do this while the data is en route (even with SSL, it seems) or stored on cloud servers (where the big corps all conspire or are forced to grant them access), so there is no risk of discovery by the surveilled.
If this makes it harder to do that, increases the resources required such that they can no longer do blanket surveillance of everyone all the time, then it is a positive step. Of course there are still ways around this, trojans and backdoors. But those have a risk of discovery, especially if they attempt to roll them out to everyone.
Not just "better than nothing" -- an important capability which needs to be widely adopted.
Of course, this doesn't stop all attacks, but it does stop one important attack: you can't just serve the provider of the service with a demand for the key (and an instruction not to tell anyone). The service provider doesn't have the key. This stops the Lavabit-style attack.
Sure, it doesn't stop a determined attacker from moving on to other things. But those things may be more expensive, more targeted (always a good thing), more risky, possibly illegal, less likely to get co-operation from 3rd parties and courts, etc. Anything which makes dragnet surveillance more expensive is good.
Ultimately, it isn't law which restricts the actions of spooks, it is cost. That is why, in the days when surveillance meant having a human being follow someone around, they didn't just follow everyone around. We need to do everything we can to make surveillance as expensive as possible, so it will be used in a limited way, on high-value targets.
Seen this at least a decade earlier with Hushmail. If you use the Java-enabled version of their service, encryption takes place on the client. The private key does reside on Hushmail's servers but it isn't decrypted on-site as long as you're using the Java-enabled version of the service.
Sure, the client code is stored on the server and could be tampered with (and this being the NSA, they might even have a valid cert to sign their tampered code as well) but the logic's there.
What this MIT stuff does is something I've already done at least once for secure cloud storage. Somewhere on my 'land of dead project code' I have a piece of Java code that uploads stuff to Rackspace's Cloud Files storage but encrypts it in-transit and adds the key to metadata … said key is encrypted with someone's public key. Thus the data can be only decrypted by someone who has the corresponding private key. The concept isn't groundbreaking at all and anyone who is security conscious has been doing this for years. At least one employer basically crammed sensitive data inside a TrueCrypt portable drive and uploaded that to the Cloud Storage service du jour.
1. From the abstract, Mylar appears to be designed more to thwart industrial espionage than government intelligence operations - except, of course, where the two overlap. It may be useful to those planning cloud based applications, and cloud application providers should wish to provide it or something similar to support their customers.
2. I am a bit suspicious about security of a crypto setup where a value encrypted using one key can be transformed into values encrypted by other keys.
3. More general use of better encryption is not a bad thing, but is not a complete solution to the problem of data security.
4. This will not thwart subpoenas and warrants, although it could change the target for service, which would depend on who controls the key for decryption.
It's something called homomorphic encryption, where you can do certain operations on the plaintext by having only the ciphertext. It doesn't exactly solve a real-life problem, as today you can simply perform operations on a trusted device, i.e. your computer at home, which you can access via your mobile device if needed.
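The classic illustration of the idea (nothing to do with Mylar's actual scheme) is textbook unpadded RSA, which is multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts. Toy key sizes here, obviously never usable in practice:

```python
# Textbook RSA with the classic toy key p=61, q=53: n=3233, e=17, d=2753.
n, e, d = 3233, 17, 2753

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

c1, c2 = enc(7), enc(11)

# The server multiplies ciphertexts without ever seeing 7 or 11...
c_product = (c1 * c2) % n

# ...yet the result decrypts to the product of the plaintexts.
assert dec(c_product) == 7 * 11  # 77
```

That's an operation on data you can't read, which is the property the searchable-encryption machinery above is exploiting in a far more controlled way.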