Impressed by the Dutch Fuzz
Then again, the crooks should have been using a P2P implementation.
Dutch police claim to have snooped on more than a quarter of a million encrypted messages sent between alleged miscreants using BlackBox IronPhones. The extraordinary claim was made in a press conference on Tuesday, in which officers working on a money-laundering investigation reckoned they had been able to see crims chatting …
If the cops compromise the people running the service, they could simply modify the P2P software to send copies to a central server and push the update.
How many crims are going to sniff their outgoing traffic and figure that out? And if they do, how many will still be suspicious when they call support and are told the stuff being sent to the central server is harmless diagnostic information, to enable them to improve their software?
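The backdoored-update scenario described above can be sketched in a few lines. This is a toy illustration: every name and the "diagnostics" framing are invented, and nothing here reflects the actual software.

```python
# Hypothetical sketch of the scenario described above: a "maliciously
# updated" P2P client that still encrypts to the peer, but also copies
# plaintext to a central server. All names are invented for illustration.
captured_by_server = []  # stands in for the police-controlled endpoint

def send_to_peer(peer_inbox, plaintext, encrypt):
    peer_inbox.append(encrypt(plaintext))  # the legitimate P2P send
    # One extra line, shipped in an update and explained away as
    # "harmless diagnostics", defeats the end-to-end encryption:
    captured_by_server.append(plaintext)

inbox = []
# Toy "encryption" (string reversal) just to keep the sketch self-contained
send_to_peer(inbox, "meet at the dock", lambda m: m[::-1])
print(captured_by_server)  # the server quietly holds a plaintext copy
```

The point being: the wire traffic still looks encrypted, so only someone sniffing for the extra upload would notice anything.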
Standard fare in Android, from the Guardian Project (no relation to the Mancunian rag).
A central server is a massive no-no: even without breaking the encryption, whoever runs it has access to all the metadata.
"Custom" OTR implementation?
3,000 a year plus the douchey name tell you all you need to know about a) the security of the product and b) the credulity of the target customers.
Without evidence of a criminal activity they seized the servers.
Without evidence you've assumed this. The article mentions a drug lab, it is likely they already knew or at least suspected they were involved with drugs, and if you're a drug dealer you obviously have to be involved in money laundering, so...
Depending on the circumstances under which they seized the servers, they might be able to look at all their customers if those circumstances made it likely they were mostly criminals (i.e. they sold them on a dark web site that is invite only for drug dealers, for instance) or they might only be able to target certain individuals that they have other reasons to suspect.
"Well, yes, but I'd say paying €1.5k for 6 months for a phone with "unbreakable encryption" and "a panic button if you get nabbed by the fuzz" is probably reasonable grounds to suspect it's not just a private conversation about what groceries to bring home."
Then why don't the police go out and arrest anyone driving a car that has an engine larger than a 1.6?
Honestly, anyone wanting acceleration from an engine greater than 1.6L is intending to speed, possibly while out-running the police after robbing a bank or kidnapping a child.
I saw someone driving what looked to be a Morgan recently. An expensive wooden car with a high top speed and huge acceleration! I shook my head as I drove my Hyundai Getz 1.6 (the "I'm innocent" limit), thinking of how many horrible crimes he must be involved in.
Why are fast cars on the market?
Why don't the police wiretap the phones of those who purchase them?
In a country with a speed limit of 70/80 mph there is no need for anyone to even sit in one of these crim-cars unless it's on a track and the driver holds a special licence, like a gun owner would.
Use your common sense man.
Although I agree with the general sentiment, they also could have just grabbed the customer list and listened in on their conversations in the 'traditional' way using a directional microphone and a court order to monitor a person suspected of committing a crime.
This wholesale grab of all data just rubs me the wrong way.
Someone like Snowden could be using this service, and with the Dutch government usually bending over backwards to US interests, it wouldn't surprise me if this were abused.
>Not at all. Probable cause (in US speak) is not the same as being declared guilty, that is the prerogative of the court.
Not really. That is the prerogative of the jury unless the defendant waives his/her right to a jury. A judge can give a directed verdict of not guilty, but cannot declare guilt.
"I believe it was 11 September 2001 if not before."
No, that was when the US learned that what goes around comes around and that terrorism wasn't just something that happened on an island across the Atlantic and was probably harmless anyway so there was nothing wrong with contributing a few dollars here and there.
"that was when the US learned that what goes around comes around and that terrorism wasn't just something that happened on an island across the Atlantic"
No, we already knew that from all the terrorist actions that came before 9/11. What the US re-learned from 9/11 was how easy it is to amplify and leverage fear in the population so that the government can get away with performing atrocities that would have otherwise been politically impossible.
If Dutch police have cracked this supposedly-secure communication channel, announcing it will serve to kill the channel and drive its users to an alternative.
As if Bletchley Park had announced to the world that they'd cracked Enigma. Which might have materially affected the War.
Dutch police presumably realise this, so it must be intentional. Why? It's a pretty high-value resource to give up!
The article mentions that they wanted to prevent retaliation within the group.
If they had this information then I imagine that the Dutch police could be considered at fault if they did not act, particularly if innocent 3rd parties were caught up in the potential attacks.
Usually authorities admit to stuff like this for one of two reasons. One, word has leaked that this happened (i.e. the guys who were arrested figured it out, or a cop on the take ratted them out) so there's no harm in making it public. Two, they will need to present evidence in court where they will have to disclose how they obtained the information so the cat's out of the bag if they want to get convictions.
1. Was the hosting company in bed with plod all along?
2. Same question applies to ALL public-server-based communication.
Maybe we need much more (privately encrypted) peer-to-peer communication, and far fewer public-server-based services.
Oh....and internet cafes also help!
So, not only were the comms not encrypted end-to-end, as is often claimed, but, if I understand correctly, there was no way to securely exchange encryption keys, e.g., at a personal meeting between Alice and Bob, to prevent MITM.
I have a distinct impression that the vaunted "end-to-end encryption" of WhatsApp, Telegram, etc., suffers from the same kind of flaw.
> So, not only were the comms not encrypted end-to-end
It's quite possible they were end-to-end encrypted *before* the Dutch Police got their hands on it, but relied on the server to aid in key exchange (or perhaps to specify some other important element).
If that's the case, then they may have adjusted the server so that the clients unknowingly did their key exchange with the server instead (so that it could MITM them).
Even then, though, you'd hope that two clients that had seen each other before would warn their owners that the other end's key seemed to have changed. The various "standard" OTR plugins you get for various apps all do at least that.
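That "key seems to have changed" warning amounts to trust-on-first-use (TOFU) pinning, which can be sketched like this. The `known_keys` store and the SHA-256 fingerprint are assumptions for illustration; real OTR clients persist per-contact fingerprints in their own formats.

```python
# Minimal trust-on-first-use (TOFU) check, the kind of "key seems to have
# changed" warning described above. "known_keys" is a hypothetical local
# store; real OTR clients persist per-contact fingerprints similarly.
import hashlib

known_keys = {}  # contact name -> pinned key fingerprint

def check_key(contact, pubkey_bytes):
    fp = hashlib.sha256(pubkey_bytes).hexdigest()
    pinned = known_keys.get(contact)
    if pinned is None:
        known_keys[contact] = fp       # first sighting: trust and pin
        return "new"
    if pinned != fp:
        return "WARNING: key changed"  # possible MITM, e.g. a tampered server
    return "ok"

print(check_key("alice", b"key-1"))  # new
print(check_key("alice", b"key-1"))  # ok
print(check_key("alice", b"key-2"))  # WARNING: key changed
```

A server that starts MITM-ing an existing conversation would trip the warning on both ends, which is exactly why silently swapping keys only works cleanly against pairs who have never talked before.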
> if I understand correctly, there was no way to securely exchange encryption keys, e.g., at a personal meeting between Alice and Bob, to prevent MITM.
I read it that way too - or at least, if there was a way it wasn't widely used (and probably wasn't the default).
That's fairly common amongst OTR libraries, though; some won't even let you import keys from another system (so if you have multiple devices you end up with multiple 'identities'), so it's probably not too surprising.
Most, though, do provide a fingerprint for you to verify out of band, others let you use a challenge/response mechanism (again, out of band), and would show the fingerprint as unverified until you've told it otherwise. Perhaps that got dropped while they were customising it?
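That fingerprint-plus-out-of-band step looks roughly like the following sketch. The SHA-256 digest and the 5×8 hex grouping are assumptions for readability; real apps each use their own fingerprint format.

```python
# Sketch of out-of-band verification: both parties derive a fingerprint
# from the public key, read it aloud over another channel (in person, on
# the phone) and compare. If the strings match, no MITM re-keyed the link.
import hashlib

def readable_fingerprint(pubkey_bytes, groups=5, group_len=8):
    digest = hashlib.sha256(pubkey_bytes).hexdigest().upper()
    return " ".join(digest[i * group_len:(i + 1) * group_len]
                    for i in range(groups))

fp = readable_fingerprint(b"alice-public-key")
print(fp)  # same key bytes in, same fingerprint out, on both ends
```

Until that comparison has been done, a careful client shows the key as "unverified", which is the behaviour the customised implementation seems to have dropped.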
Can't find an awful lot of information on their implementation on the net, but with the very limited information that is available it does sound like they customised OTR and made it worse.
Why is there always an assumption that encryption on the internet can only mean ALL these things:
- users are using public-server-based communications (e.g. email)
- users depend on the public-server(s) for encryption
- each specific communication has an identifiable sender
- each specific communication has an identifiable recipient
In the place of these assumptions, suppose users did it differently:
- put in place a private cipher system (say a book cipher)
- the sender publishes a cipher message from an internet cafe using, say, The Register Comments as an Anonymous Coward (or using a fake identity on FB....)
- the recipient picks up the message in another internet cafe
In these alternative circumstances:
- it will be hard to identify the sender
- it will be even harder to identify the recipient
- ....and that's before the curious out there try to break the private cipher (irrespective of any end-to-end encryption provided by the services provided over the internet)
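The dead-drop scheme above can be sketched with a toy book cipher. Purely illustrative: a real book cipher indexes a printed edition both parties own, not a hard-coded string.

```python
# Toy book cipher: both parties hold the same text and exchange word
# positions instead of words. Anyone without the shared book sees only
# meaningless numbers in a public comment thread.
book = "it was the best of times it was the worst of times".split()

def encode(message):
    # each plaintext word -> position of its first occurrence in the book
    return [book.index(word) for word in message.split()]

def decode(positions):
    return " ".join(book[i] for i in positions)

cipher = encode("the worst times")
print(cipher)          # just numbers without the shared book
print(decode(cipher))  # "the worst times"
```

The numbers posted in a public comment carry no key material at all; the secret is which book, which edition, which page.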
What am I missing here?
"Even then, though, you'd hope that 2 clients that had seen each other before would then warn their owners that the other ends key seemed to have changed."
This is a case of hanged if you do and hanged if you don't. If you use the same key all the time, you can tell if the key's been changed, but any messages intercepted and stored in the past can be decrypted if the key is later compromised (more difficult if the server didn't store the key). If you use a different key each time, past messages are safe, but the key exchange is susceptible to a MITM attack if the server is compromised.
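The "different key each time" half of that trade-off is usually ephemeral Diffie-Hellman, which can be sketched as below. Toy parameters only, nowhere near a secure group size; and note the catch the comment describes: unauthenticated DH is exactly what a compromised server can MITM.

```python
# Sketch of per-session keys: ephemeral Diffie-Hellman gives every
# conversation a fresh shared secret (forward secrecy), at the cost that
# the exchange itself must be authenticated or a middleman can sit in it.
import secrets

p = 2**127 - 1  # a Mersenne prime; demo-sized, NOT a secure group
g = 3

def session_secret():
    a = secrets.randbelow(p - 2) + 2   # Alice's ephemeral exponent
    b = secrets.randbelow(p - 2) + 2   # Bob's ephemeral exponent
    A, B = pow(g, a, p), pow(g, b, p)  # public values, sent in the clear
    alice_view = pow(B, a, p)
    bob_view = pow(A, b, p)
    assert alice_view == bob_view      # both ends derive the same secret
    return alice_view

s1, s2 = session_secret(), session_secret()
print(s1 != s2)  # fresh secret per session; one compromise doesn't expose the rest
```

Because the exponents are thrown away after each session, seizing the server later recovers nothing about old traffic; that's what the first option gives up.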
"I have a distinct impression that the vaunted "end-to-end encryption" of WhatsApp, Telegram, etc., suffers from the same kind of flaw."
WhatsApp uses the Signal protocol, adapted from the Signal chat app, which is fully end-to-end encrypted with MITM protection. But as it's now owned by Facebook, we might find something changes eventually.
Telegram has always been broken. It was audited and failed because they had "rolled their own" crypto, which you simply don't do. Telegram has the marketing but not the features: its end-to-end encryption is off by default, and it relies on a homegrown encryption method that is considered buggy and untested.
Use Signal, or something that implements the Signal protocol. Or Threema which is also good.
The best thing to do is listen to the EFF and Edward Snowden when they make recommendations. It's worth noting that the EFF has stated serious concerns over Telegram. Edward Snowden uses Signal almost exclusively.
Signal is also entirely licensed under the GNU GPL v3 and GNU AGPL v3, unlike Telegram, which has only parts licensed in any "open source" way.
Now politicians will finally have an iron-clad excuse to get backdoors into encryption. Look how it helped in the Netherlands, they'll say.
While I applaud the results, I fear for our encryption. The Dutch police didn't backdoor anything, they got a warrant, seized the server, and did their business. That's legal. Backdooring encryption for the purpose of snooping on everyone all the time is not only illegal and impossible, it's also highly immoral.
"they got a warrant, seized the server"
There's no mention of a warrant in the article. Even if they did get one, all the traffic through the service was compromised. There seems to be a presumption that all the traffic was illegal. If you were using the service for a confidential but legitimate negotiation - say, a merger - you now know the Dutch police had access to it. They were snooping on everyone, at least everyone using the service. Who, apart from the Dutch police, knows what legitimate stuff has been compromised?
Biting the hand that feeds IT © 1998–2019