57 posts • joined 16 Oct 2009
Re: It would be pretty hard
There is only one conductor - the copper tube in the diagram. One end is kept at say 10,000V (compared to a big copper ground plate in the ground) and the other is at 0V, connected to a big copper ground plate in the ground.
The copper tube is fairly thick in everyday terms; but considered as an area/length ratio, it is very thin, and its resistance is therefore considerable. The current in the copper is only about 2 amps.
The voltage across each repeater is only about 30V, though the voltages at each end of a repeater might be 9,910V and 9,940V compared to ground.
Suppose there are 100 repeaters. Each repeater uses say 60W, taking 30V at 2 amps; with a 10,000V feed, the remaining 7,000V is dropped across the resistance of the copper tube.
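For what it's worth, the arithmetic sketches like this (all figures are the rough ones above, not real cable specs):

```python
# Rough series-power budget for the cable described above; all numbers
# are the post's illustrative ones, not real cable specifications.
FEED_VOLTAGE = 10_000        # V at the shore end, relative to the ground plate
CURRENT = 2                  # A, the same series current everywhere in the tube
REPEATERS = 100
DROP_PER_REPEATER = 30       # V across each repeater

repeater_drop = REPEATERS * DROP_PER_REPEATER       # 3,000 V across repeaters
copper_drop = FEED_VOLTAGE - repeater_drop          # 7,000 V lost in the copper
power_per_repeater = DROP_PER_REPEATER * CURRENT    # 60 W each
copper_loss = copper_drop * CURRENT                 # 14 kW heating the ocean

print(repeater_drop, copper_drop, power_per_repeater, copper_loss)
```

So most of the feed power is lost in the tube itself, which is why the tube's resistance matters so much.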
That's maybe 15 years out of date; I think a modern cable uses a bit more power per repeater and fewer repeaters, but it should give some idea. Also, some use double-ended power supplies.
Another method is the teeny drop of hydrofluoric acid (HF). This needs to be automated to be hard-to-detect.
You remove the cover of the fiber, then cover it with wax, leaving a teeny hole on one side. You set up the HF-resistant optical tap pointing at the hole, then immerse it and the cable in hydrofluoric acid, which eats the outer layer of the fiber away, all the while monitoring the light which escapes from the fiber. When you get just enough light so you can read the traffic, you neutralise the hydrofluoric acid.
( The eventual clear plastic replacement for the HF liquid needs to have the same refractive index as the HF - but it is easy to change the RI of the HF )
This has the advantage over bending that the tapped light comes from a teeny source, making it more efficient and thus harder to spot. Also, you don't need to create enough slack to bend, which can be a problem in underwater cables.
Re: No need to splice fibres to evesdrop
Yes. Plus there is no service interruption if the cable is bent.
The USS Jimmy Carter deploys ROVs to find and expose the cable. Then they lower a shirt-sleeve-environment tapping room on a wire to the tapping point (why would they need to tap the cables inside the submarine?), and Vodafone / Verizon / whoever subcontractors do the actual bending and tapping.
Best place to bend-tap is just after a repeater where the signal is strong and the bend-tap is least likely to be noticed. Best repeater is probably the first in the chain, which will be nearer land collection points and at lower depth.
Looking at a map of cables, two interesting places for GCHQ/NSA to tap cables - cables run by furruiners whose traffic they might want to look at, and which could not be tapped by a UK/US legal requirement - are in the Eastern Med and the Gulf. They would want some sort of land stations, like the GCHQ outstations at Seeb in Oman and Ayios Nikolaos in Cyprus, nearby to make backhaul easier.
As for NSA, their interests might also include Fortaleza in northern Brazil, the British Virgin Islands and a couple of locations in the China Sea and Sea of Japan. I imagine BVI would not be a problem; I do not know whether they have anywhere suitable for the other locations.
But they might use buoys for backhaul instead - a highly-directional high-capacity laser transmitter from a buoy which is raised say once per pass to a satellite could be made almost impossible to spot. The part which gets raised above the surface might not need to be more than an inch or so in diameter.
The USS Jimmy Carter then changes the very large batteries left on or buried under the seabed every ten years or so.
If I may quote Robert Morris, former Chief Scientist at NSA: "Never underestimate the attention, risk, money and time that an opponent will put into reading traffic." I think the author of the article is guilty of that.
2FA- solving the wrong problem
AFAICT, the recent Apple cloud leaks were caused by a password-guessing attack. In order to guess a password, the script tried 500 or so passwords for each username.
Now if Apple had been monitoring failed password attempts, and stopped repeated failed attempts, especially when a whole bunch of them for different usernames came from one IP address, this would not have worked. Apple were not using passwords in the right way.
AFAICS, Apple have now started to do this, which is why and how the attack has stopped.
Another method to defeat such attacks might be for the login username to be different from the public username, making it hard for an attacker to guess a login username.
Moreover, if Apple had emailed the celebs saying that there had been several failed password login attempts, especially from unusual IP addresses, and the celebs had said "I didn't do that", then Apple could have been on especial watch (and could probably have caught the attackers).
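The monitoring described above is simple to sketch. The thresholds and data structures here are illustrative guesses, not anything Apple actually does:

```python
# Minimal sketch of failed-login monitoring: lock an account after repeated
# failures, and flag an IP that fails against many different usernames.
# Thresholds are illustrative only.
from collections import defaultdict

FAILS_PER_ACCOUNT = 5    # lock the account after this many failures
USERNAMES_PER_IP = 3     # flag an IP once it fails against this many accounts

account_fails = defaultdict(int)
ip_usernames = defaultdict(set)
locked, flagged_ips = set(), set()

def record_failure(username, ip):
    account_fails[username] += 1
    ip_usernames[ip].add(username)
    if account_fails[username] >= FAILS_PER_ACCOUNT:
        locked.add(username)         # real system: lock and email the user
    if len(ip_usernames[ip]) >= USERNAMES_PER_IP:
        flagged_ips.add(ip)          # real system: block or CAPTCHA the IP

# A script trying hundreds of passwords per username trips both checks
# almost immediately.
for user in ("celeb_a", "celeb_b", "celeb_c"):
    for _ in range(10):
        record_failure(user, "198.51.100.7")
```

In a real system the lockout would trigger the "I didn't do that" email, and the flagged IP would be blocked or challenged.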
Don't get me wrong, passwords are a totally shit method of identification, and a really bad method of authentication. But my banks use them online, along with other methods: one (Lloyds) sensibly, one (Tesco) in an overly paranoid manner which actually detracts from security.
And like PIN passwords for debit and credit cards, if used correctly online passwords seem to work well enough for money.
If I make repeated failed password login attempts to my banks they lock me out, and want me to contact them. Very sensible, if annoying. However yesterday I forgot my itv player password, and made several wrong attempts to log in - and got shut out for 30 minutes. I mean, WTF?
Passwords are useful in their place, sometimes with added password-type or other security when needed, sometimes not. Sometimes they are used in stupid ways - why does ITV Player need me to log in with a password anyway?
Passwords cannot usually protect against coercive attacks, but for everyday use where they are used appropriately and monitored suitably, they are still the worst - apart from everything else.
The real problem is that people do not use them appropriately.
C'mon, el Reg, these are commands, not requests.
Re: The draft bill says they can require the ISPs keep EVERYTHING FOREVER
FYI, "relevant" data is defined in ss.2(3) as data in the Schedule to the Data Retention (EC Directive) Regulations 2009, which is exactly the same data they could demand retention for before the ECJ judgement - so no new classes of data. Let's not get overexcited about that.
I don't know why that was hidden away in ss.2(3), but a lot of people have missed it. The Bill does not permit the required retention of any new classes of data.
There are some real power grab issues, but that's not one of them.
However you are right about there not being a time limit on the retention of data under a ss.2(1) Notice - only a 12-month limit on data retained under some putative future Regulations to be made under ss.1(3) (which can't be implemented until after Parliament gets back anyway). A Notice, on the other hand, can require an ISP to retain data forever.
Nah, follow the money...
I discard outright any possibility of it being an outside website hack - too hard: an attacker would need access to the TC website, the Sourceforge TC site, and the code signing key.
The "Warrant Canary" theory doesn't seem to make a whole lot of sense either. It's possible, but why recommend BitLocker? When did someone have time to write all those code changes between being served the warrant and having to execute it?
The theory which makes most sense to me is that it was an at least partly commercially-motivated self-takedown by the devs.
The recent change in name on the otherwise "same old code and binary signing key" is possibly significant here - the developers, or perhaps just some of them, may want to start up a commercial product in the new name.
Their commercial aspirations are well-known, witness the previous license issues, the failed crowdfunding and donations campaigns, the "TrueCrypt Developers LLC" registered in Nevada (thanks to Piergiorgio Sartor for that info). And they already own a good chunk of the IP rights in the TrueCrypt source.
The ending of the project was graceful, to some extent at least - people were not left with unrecoverable archives, and temporarily acceptable but not-as-good alternatives were suggested. A whole lot of work went into that.
It is obvious that this wasn't done in the heat of the moment - it must have taken at least several weeks to do the code revisions for the 7.2 release. There have also been hints (eg the robots.txt file) for about six months that something might be happening.
The only reason I can think of for doing all that work is maintaining reputation (or technical reputation at least - TrueCrypt devs are not exactly known for being people people, or for being particularly into "free open source" either).
No reasons why the code is/may be broken are given. Actually the "WARNING: Using TrueCrypt is not secure as it may contain unfixed security issues" does not even actually say TrueCrypt is broken, just that it may be.
And the unfixed issues might be fixed later, in the commercial version.
Which would have been independently audited... at no cost to TrueCrypt...
I have seen nothing to suggest that the people who watched and "encouraged" believed that they were looking at a real person.
Some misunderstandings here.
First, the $280 million budget of the BULLRUN "dirty tricks" program does not include the cost of the "advanced cryptanalytic capabilities" NSA is developing. We don't know exactly how much NSA are spending on that, but the combined NSA and US armed forces cryptanalytic budget is said to be just over $10 beeeelion.
RC4? Well, it ain't that great, but - the NSA have lots and lots of encrypted traffic they want to decrypt. It comes in chunks called sessions - roughly, the time you "are connected to" a single website - and each session has a different key.
If the NSA had a method to break RC4, they would have to break it again and again for each session. That's a huge amount of work. There are some other problems too, about obtaining the needed plaintext - you can't expect to break an RC4 session key from just examining the ciphertext, there isn't enough of it. You need a crib. Not impossible, but again it's a lot of work.
It would be far more effective to attack the mechanisms by which the session keys are set up - mostly RSA, though people sometimes use ECDHE instead. The big websites only changed their RSA keys every couple of years. Break one of those and you can easily calculate several million, or even several billion, session keys.
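A toy illustration of that maths, using textbook RSA with tiny made-up numbers (real TLS and real key sizes are far more involved):

```python
# Toy illustration (textbook RSA with tiny numbers, nothing like real TLS):
# in RSA key exchange, the client encrypts the session secret with the
# server's long-term public key. Recover that one private key and every
# recorded session under it falls at once.
p, q, e = 61, 53, 17                  # toy primes; real keys are 1024+ bits
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent: the "one break"

session_secrets = [42, 123, 999]      # stand-ins for per-session keys
recorded = [pow(s, e, n) for s in session_secrets]   # what a tap collects

# One private key -> all recorded sessions:
recovered = [pow(c, d, n) for c in recorded]
print(recovered)   # [42, 123, 999]
```

Breaking RC4 session-by-session gives one session per break; breaking the one RSA key gives every session set up under it, which is why the key-exchange layer is the more attractive target.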
Personally I think they may well have found a method to break RSA - each break might be expensive, but as I said they can get millions of session keys from a single break. They may have a method to break, or partly break, ECDHE instead or as well, but my money is on RSA.
And it doesn't have to be RSA-2048 either - there are petabytes or more of old ciphertext which NSA would love to decrypt, collected over many years, which was protected by RSA-1024. Heck, until a few weeks ago the vast majority of internet SSL/TLS sessions were only protected by RSA-1024 or equivalent. I think it's still well over 50%.
Lot of foolishness and puffery.
I haven't heard the Independent claiming they have had sight or have copies of the Snowden documents, just that "information on its activities was contained in the leaked documents obtained from the NSA by Edward Snowden".
However I expect it's all just some sharp-eyed reporter on the Independent repeating claims from an article in the Guardian on 21 Jun: http://www.theguardian.com/uk/2013/jun/21/legal-loopholes-gchq-spy-world which mentions the GCHQ support station in Cyprus (Ayios Nikolaos Station), and says they tap cables etc.
As that earlier Guardian article is at least partly based on the Snowden documents, I suppose the Independent isn't lying when it says their article is too.
Re: Is it really true that the best in the USA cannot crack keys?
Yes, or at least cryptologists and cryptographers generally think so. And if NSA can crack the ciphers in use today, there are better ciphers which they can't crack - those ciphers aren't used because they are expensive to use, not because they are secret or illegal or insecure.
Though in fact, unless you are a real terrorist, it probably doesn't matter whether NSA/GCHQ can crack your codes.
WOT??? you say???
Think about it. Being able to crack a cipher is only useful to the crackers if some people think you can't crack it, and those people then use the cipher to send messages whose content they want to keep secret from the crackers.
So if NSA/GCHQ can in fact crack AES-128 (unlikely) or RSA-1024 (just about possible), they aren't going to tell anyone they can. "Anyone" includes everybody except maybe the top secret terrorist catchers (or maybe today's Watergate people if you are a cynic), and it most definitely doesn't include the FBI/police authorities who deal with everyday crimes like drug dealing, kiddy porn, or murder.
Google "Churchill Coventry ULTRA" for an example of this.
Now NSA/GCHQ may attack eg hidden services using worms, and get a little upset when people find out how they did it - but if they can crack AES-128 (and again, I don't think they can), they aren't going to expose that capability for anything less than a 9/11 or nuclear attack. And possibly not even then.
No. This is how an online newspaper should protect sources (and journalists).
Oh dear. Expecting a source to compile GnuPG and TrueCrypt is daft at best, and injurious to sources at worst. Tor is vulnerable to a global adversary like GCHQ. Hidden services are vulnerable too - they only take one crack to break.
I suppose if El Reg doesn't know how to protect journalists and sources then the Guardian can't be expected to either, but it really isn't rocket science.
Start with the paper's website. Use SSL/TLS as default there, with a DH suite like DHE-RSA-AES256-SHA with a 1,536-bit RSA key. That means when people read the paper the communications are encrypted, and the use of a DH suite means that they can't use RIPA to force you to give up the paper's private key.
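On an Apache server, that preference might look something like the following (the directive names are Apache mod_ssl's; the exact suite string is just the one suggested above, and a real deployment would want a full, reviewed cipher list):

```apache
# Hypothetical mod_ssl fragment along the lines suggested above:
# prefer an ephemeral Diffie-Hellman suite so that no single stored
# private key can decrypt recorded traffic.
SSLEngine on
SSLCipherSuite DHE-RSA-AES256-SHA
SSLHonorCipherOrder on
```

The point of honouring server cipher order is that the server, not the client, decides that the forward-secret suite wins.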
Now add random-length cover text - this means that the sizes of files are also obscured, so GCHQ/NSA can't say "that file is 45,678 bytes long, it's the image on page 32". Also add random-length traffic from the reader to the paper.
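A minimal sketch of random-length cover, assuming a simple length-prefix framing (to do any good, the padding has to sit inside the encrypted layer, so the eavesdropper only ever sees the padded size):

```python
# Sketch of the random-length cover idea: pad every response to a random
# size so an eavesdropper can't match observed lengths to known files.
import os
import secrets

MAX_PAD = 4096   # illustrative cap on added cover bytes

def pad_response(body: bytes) -> bytes:
    pad_len = secrets.randbelow(MAX_PAD)
    # 4-byte length prefix so the receiver can strip the cover again
    return len(body).to_bytes(4, "big") + body + os.urandom(pad_len)

def unpad_response(blob: bytes) -> bytes:
    n = int.from_bytes(blob[:4], "big")
    return blob[4:4 + n]

page = b"the image on page 32"
wire = pad_response(page)
assert unpad_response(wire) == page
assert len(wire) > len(page)   # observed size no longer equals file size
```

Each transfer of the same file now has a different wire size, so "45,678 bytes, must be page 32" no longer works.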
Now add a dropbox, and anyone can send files to the paper in encrypted form, so someone who is tapping the internet can't read it, and it's in the midst of a whole lot of other traffic. A dropbox per reporter is good - maybe beside the byline?
You will probably get lots of rubbish, which is all good cover - you don't have to read it.
Arrangements for reporters' communications, as opposed to sources or members of the public, are similar.
A full analysis will be £1,200.
Re: "Legally and procedurally sound"
As far as I can see what reportedly happened to Miranda most certainly wasn't legally sound.
Apart from some Northern-Ireland-specific stuff, the only purpose for which a person can be questioned (or detained) under Schedule 7 of the Terrorism Act 2000 is "determining whether he appears to be a person falling within section 40(1)(b)" ie, whether he appears to be "a person who ... is or has been concerned in the commission, preparation or instigation of acts of terrorism."
It seems Miranda's detention came nowhere near to falling under that.
There may be other laws under which he could have been questioned and detained, but not under Schedule 7 of the Terrorism Act. IANAL.
Re: If you have both the unencrypted and the encrypted version of the file...
If you can crack it you get the key - which may be used for other files which you don't have.
Lot of sound and fury, signifying ... nothing.
I would hope the authors (none of whom seem to be crypto mainstream) are not responsible for the puffery in this article. It's perhaps an advance in terms of coding theory, but not cryptography (which is completely different).
Cryptographers already know about low entropy in plaintexts and passwords. They don't often consider uniform sources, and they hardly ever consider asymptotic equipartition to be relevant.
In real life, when considering resistance against a brute force attack, cryptographers typically assume that a plaintext is known, and therefore has entropy of 1. For advanced situations, they assume a chosen plaintext with an entropy of zero.
These are the main forms of theoretical brute force attack on a cipher. Some are pretty unlikely in real life, but a cipher which is not resistant to all of them will be rejected out of hand:
Ciphertext-only attack - the attacker has only the ciphertext and what he knows about the sender - eg he speaks English - to help him.
Known plaintext attack - the attacker can find the plaintext for one message, and wants to find the key so he can decrypt more messages sent with the same key.
Chosen plaintext attack - the attacker can trick the sender into encrypting a message of his choice. In some cases this can be more useful when trying to find the key.
Adaptive chosen plaintext attack - the attacker chooses a plaintext and gets the sender to encrypt it, then can choose another based on the results of the first encryption and trick the sender again. And so on.
Chosen ciphertext attack - the attacker can get the recipient to decrypt messages of his choice. Again he wants to find the key.
Adaptive chosen ciphertext attack - as in the adaptive chosen plaintext attack above but with ciphertexts.
Re: This is all entirely legal ?
I agree, a contract would not trump RIPA - but no offense under RIPA has been committed. Here is section 2, subsection 5 of RIPA:
"References in this Act to the interception of a communication in the course of its transmission by means of a postal service or telecommunication system do not include references to—
(a) any conduct that takes place in relation only to so much of the communication as consists in any traffic data comprised in or attached to a communication (whether by the sender or otherwise) for the purposes of any postal service or telecommunication system by means of which it is being or may be transmitted; "
If the data you are giving out is "traffic data" (as defined elsewhere in RIPA, see my next post) then giving it out is not interception, and therefore is not an offense under RIPA.
Re: This is all entirely legal ?
Sorry, but it isn't illegal under RIPA.
The data for sale is classified under RIPA as "traffic data", and you can do anything you like with that without it being classified as interception (see RIPA s2.(5)) - consequently the order you mention does not apply, as no interception (as defined in RIPA - which is not even close to the everyday definition) has taken place. :(
It might be covered under the Data Protection Act, which only covers anonymised or aggregated data if the anonymisation or aggregation is not reversible - and many people think it is almost impossible to do that irreversibly.
However, even then it is not a criminal offense under the DPA.
Heck, it isn't even a criminal offense under the DPA to sell sensitive personal data. It's just a "breach of duty", and the civil penalties are paltry.
Cesium is not radioactive
Cesium-133, as used in atomic clocks, is not radioactive (and afaik it's not "nuclear material", whatever that is).
T-Stoff was 80% or 85% (there were several versions) hydrogen peroxide stabilised with less than 1% oxyquinoline or more often phosphate. The rest was water.
Yes I know that Wikipedia says T-Stoff was 20% oxyquinoline, but Wikipedia is wrong. 80% peroxide with 20% oxyquinoline is a highly sensitive - breathe wrong and it goes off - high explosive.
Or it would be, if anyone was insane enough to try and make it.
Re: Is the error message only for me?
The other half of this (which is still only half of the problem) is that it is good to have a match composition which doesn't produce much gas - for instance, for the silicon match described the desired reaction is solid_oxidiser plus silicon gives solid silicon_dioxide (aka sand) plus spent_solid_oxidiser. There are no gaseous products.
We need some gaseous products though, to spread the hot molten extra silicon about. That's not hard though, as silicon plus solid_oxidiser doesn't burn nicely for any normal versions of solid_oxidiser, and the usual compositions necessarily include some gas-producing fuels.
But match compositions for vacuum use can't produce too much gas, or they will burn too cold, and the spray of hot liquids will be cooled so they are not hot enough to ignite the fuel grains, especially if the fuel grains are very cold.
And here we come to the other half of our overall problem, which our gallant 'nauts may face soon. Even if the fuel grain is ignited - it may then go out. The cooling from the expansion of the gases from the ignited face may be enough to cool the burning surface of the grain to the point where it cannot ignite the next layer of grain below.
By the way, if you doubt that cooling by expansion can do that, have a look at this:
It's the exhaust of a liquid hydrogen / liquid oxygen rocket engine: the gas is at about 6,000°F when burnt, and below freezing point by the time it has expanded enough to reach the nozzle exit. So much so that icicles develop on the end of the nozzle.
Transfer of heat from a match to a fuel grain is not usually done by conduction, but by hot gas.
Burning match composition produces hot gas initially at the density of the composition - in other words the gas starts at what we might consider a very high pressure. The gas then expands according to its surroundings: very high pressure if it's tightly contained, lower if it is open to some surrounding atmosphere.
If the surrounding gas pressure is very low then the gas expands more than it would do at atmospheric pressure - and the important point here is that when gas expands it cools. At lower pressures it expands more, and therefore cools more. The gas which is supposed to ignite the fuel grain is now cooler, and also less dense (so it has less heat-carrying capacity), than it would be at atmospheric pressure.
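A rough feel for the size of the effect, using the standard adiabatic-expansion relation with guessed (not measured) values for the gas:

```python
# Rough adiabatic-expansion estimate of how much a hot ignition gas cools
# as it expands: T2 = T1 * (P2/P1)**((gamma-1)/gamma).
# All numbers here are illustrative guesses, not measured match data.
gamma = 1.25          # assumed ratio of specific heats for hot combustion gas
T1 = 2500.0           # K, assumed gas temperature as generated
P1 = 1.0e7            # Pa, assumed pressure at the density of the composition
P2_sea_level = 1.0e5  # Pa, expanding against one atmosphere
P2_vacuum = 1.0e2     # Pa, expanding into a near-vacuum

def expanded_temp(P2):
    return T1 * (P2 / P1) ** ((gamma - 1) / gamma)

print(expanded_temp(P2_sea_level))  # ~995 K - still hot
print(expanded_temp(P2_vacuum))     # 250 K - below freezing
```

With these (assumed) numbers, the same gas that would still be glowing hot at sea level ends up below freezing in a near-vacuum, which is the whole problem for the igniting gas.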
The standard way to solve this is to use medium-grain silicon powder or something like that in the match. The outsides of the grains of silicon burn in the match and get hot, and red hot globules of molten silicon (note it's a liquid, not a gas, so it doesn't cool by expansion) splash over the surface of the fuel grain, igniting it.
Thermite would probably work just as well - the hot iron droplets produced would not cool by expansion, as they are not gases.
Incidentally, IIRC (and IANAL) it would be legal to do this in Spain, but it would not be legal to make a thermite match composition in the UK. However, if you wrote to the HSE they might well give you a dispensation - they are quite good about that sort of thing.
Re: Permitted inspection of SSL traffic?
I do not know whether a Cyberoam-type box could be used to let a bad guy read plaintext ssl traffic, or if so how easy it would be to do, even if the user's browser was set to accept Cyberoam-signed certificates and the box was operated by a bad guy who had access to the wires. It should not be possible, but ..
Just having Cyberoam-signed certificates accepted is not enough to be able to read plaintext - in order to do that the bad guy needs to know the secret key to the public key in the certificates, and the secret key is held "securely" in the box.
Without the secret key, even if a certificate is accepted, an authenticated ssl session cannot be set up - it is used to prove that whoever the user is talking to is the server on the certificate, as it knows the secret key to the public key in the certificate.
The box itself can read ssl traffic plaintext, but the plaintext content should stay in the box and be protected by the "secure hardware" which also protects the certificate key.
I do not know whether this is actually the case, never having seen inside a Cyberoam box - I rather suspect that some diligent digging inside the box might give access to plaintext traffic, but that it would not be straightforward. That's just a guess though.
However the box could be set to eg detect codewords and reject comms containing them, and I'd expect it would also alert the box's operator, so reusing CA keys is a definite hole.
Re: Interception of communications
It is interception (as defined in law - the legal definition bears little resemblance to the everyday meaning) as both parties have not agreed to it, but it isn't illegal under UK interception of communications legislation. It's a bit like scanning emails for viruses - in fact it's almost identical to scanning emails for viruses.
The relevant law is subsection 3(3) of RIPA:
"Conduct consisting in the interception of a communication is authorised by this section if—
(a) it is conduct by or on behalf of a person who provides a postal service or a telecommunications service; and
(b) it takes place for purposes connected with the provision or operation of that service [...]"
The cyberoam box is operated by the office 'net, who are providers of a communications service (as defined), so a) is complied with.
The purpose of the interception is to protect the office network from viruses etc, and the office 'net might not operate if the protection was not in place, so b) is also complied with, and the interception is authorised and therefore lawful.
Re: Well Duh! Twice!
Well DUH! again, but this time it's on me.
I suggested that a variable in-use key should be used, but that the CA key could be kept the same for different users. And of course that doesn't close the hole, the CA key should be unique to each office/subnet.
Suppose executive Bob sets his browser to accept keys signed by Cyberoam CA key "office". He then takes his laptop to a different place, where a bad guy has set up a Cyberoam box he bought on eBay to intercept traffic. Bob's browser will accept certs signed by the bad guy's bought-on-eBay box.
Getting from there to plaintext traffic may be easy or hard, depending, but it would at a minimum allow eg keyword detection. It's a definite hole.
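The Bob scenario can be modelled in a few lines - this uses an HMAC as a crude stand-in for the CA signature, purely to show why a shared key fails and a per-office key doesn't:

```python
# Toy model of the shared-CA-key hole: any box holding the same key can
# mint certificates that Bob's browser will accept, wherever the box
# came from. HMAC stands in for a real CA signature here.
import hashlib
import hmac

def sign_cert(ca_key: bytes, subject: str) -> bytes:
    return hmac.new(ca_key, subject.encode(), hashlib.sha256).digest()

def browser_accepts(trusted_key: bytes, subject: str, sig: bytes) -> bool:
    return hmac.compare_digest(sign_cert(trusted_key, subject), sig)

SHARED_KEY = b"same key in every box"   # the reported situation
bob_trusts = SHARED_KEY                 # Bob installed "the" Cyberoam CA

# The bad guy's bought-on-eBay box signs with the very same key:
ebay_cert = sign_cert(SHARED_KEY, "mail.example.com")
assert browser_accepts(bob_trusts, "mail.example.com", ebay_cert)

# With a unique per-office key, the eBay box's forgery is rejected:
office_key = b"unique per-office key"
assert not browser_accepts(office_key, "mail.example.com", ebay_cert)
```

Same model, two lines of difference: shared key means every box everywhere is trusted by Bob; unique key means only his office's box is.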
Of course a shared key is also a hole for the usual reasons as well, eg someone at Cyberoam knows it, and it may be lost, demanded, sold - only people you trust can betray you, and there is no need to trust Cyberoam with the key.
Also "secure hardware" has been broken into before now, someone might spend 100 million to be able to break into a box he bought on ebay - and then he gets the key to all the kingdoms, whereas if a unique key is used for each office he gets bupkis.
And so on ...
Well Duh! Twice!
Well Duh! twice - first, if TOR didn't realise that's how those things work, then they ought to have. Actually I'm pretty sure I have heard people from TOR mention it, several times, so maybe they just forgot.
Well Duh! again though, to Cyberoam, for reusing the same key. That is actually a pretty major security hole.
It can be fixed fairly easily, but maybe only by changing out the boxes. The in-use private key should be internally generated, changed at intervals, and never leave the box.
Cyberoam boxes MITM SSL sessions in order to detect viruses. They are usually connected between the wild wild web and the office subnet. That's about the only way virus detection can be done on a subnet basis and still allow SSL connections, and it's how everybody else does it too.
To do the MITM they need to use a Certificate which is generated in the name of whoever the user is connecting to, so the user's browser thinks it's actually a genuine certificate from the website the user is connecting to. In order for the certificates to be accepted the user (or the office BOFH) has to set their browser to accept certificates issued by Cyberoam.
The faked certificates contain an in-use key, which need not be the same key as the CA key; there are advantages to having the certificate key the same, eg it makes it easier to install Cyberoam as a trusted Certificate Authority (CA), necessary for its proper operation. The CA key should then be used to sign the in-use key in the faked certificates.
Re: Just one question...
My first comment was a bit long, and part of it may be hidden - there's a link at the bottom left to "expand comment", and display the rest of it. The answer to your question, and more, is there.
In the first instance, 5 MW of electric motors driven by ship's power rewind the springs in 20 seconds (the carriers have 40MW turbo-electric drives). Or you can use a lot of sailors on windlasses, or a diesel engine, or whatever you like really.
Re: A-10s on carriers?
Let's see. The A-10's GAU-8 "CEP" from 4,000 feet is 80% within a 20-foot radius. The GBU-39 (the smallest smart bomb, not widely deployed as yet) has a 50% CEP of 25 feet, with a 100%-lethal blast radius of 26 feet, and lethal shrapnel out to who knows where.
If the A-10's pilot is reasonably good, and especially if (s)he knows where I am as a friendly, I'd much rather be 100 feet from the aim point of an A10 gun strike than 100 feet from the aim point of even a teeny tiny smart bomb.
Agreed, only some A-10s are kitted out with the "newfangled deelies" , but it's a known upgrade. Which also includes the ability to drop smart bombs accurately, if that's what you really want to do...
In a carrier-based role, I'd like to give the A-10 a bit better air-to-air capability, but I'd also put some air superiority fighters (and 2 or 3 Hawkeyes, and a COD) on the carrier, for local defense if nothing else. But I don't know how much it would cost to modify an A-10 to fly from a carrier.
I also don't know why it's better to drop a bomb from 30,000 feet than a few hundred or a few thousand feet, but that's another story.
Re: Billion-pound electric catapults? Pah!
Oh, and before you ask - the reason for having two springs per catapult is so you can wind one up this much, and wind the other up that much, and thereby vary the total thrust produced on the wire, to adjust for different loads and different aircraft.
It's not as sexy as an electric catapult - but it's a whole lot cheaper, and a whole lot more reliable to boot.
Re: Billion-pound electric catapults? Pah!
My tongue was not (entirely) in my cheek, nor even close - the purpose of the fusee (as in a watch or clock, or even some crossbows) is to get the near-constant thrust profile right. It's easy to do, and you can get any thrust profile you might want.
From the wikipedia entry for fusee: G. Baillie stated of the fusee, "Perhaps no problem in mechanics has ever been solved so simply and so perfectly."
A-10s on carriers?
I don't know whether it would be possible to launch A-10s from carriers, but that would be - something!
Air supremacy is good, and even necessary, and light bombers and other ground attack aircraft are also useful, but for CAS there's nothing to beat the A-10 - and CAS wins battles.
Billion-pound electric catapults? Pah!
I had initially thought that the F-35Bs could be dismounted from the carriers and used in another, Harrier-like, role: eg in a zero length field situation from a clearing in the jungle, or the Falklands.
However I find they need a ski-jump to get any significant weight of bombs - F-35s are primarily light bombers - into the air from a zero-length field. They would have to operate from ski-jump carriers or bases with runways, with a fairly short range and a considerably lower (than eg an F-35C) bomb load from runways.
F-35Bs would be of almost no use if dismounted in zero length field situations, and of very little use in runway tactical/strategic situations unless the runway was unusually close to the action.
So, on to electric catapults which cost billions - or rather, not on. But wait-a-bit, why electric catapults?
A 30,000 kg aircraft needs to be accelerated to takeoff speed in about 1.5 seconds - say 65 m/s from the catapult and 10 m/s from the aircraft's engines, for a total of 75 m/s (or 145 knots). The aircraft accelerates at 5G maximum, the catapult path is 56m to 75m (about 250 feet) long.
That's somewhere around 75 MJ of kinetic energy supplied by the catapult. I agree that an electric catapult of that energy and power would be expensive (although £2 billion still sounds a bit OTT), but what is really needed is a simple big spring - or rather two 30 ton contrarotating springs with a fusee, in a housing, with a total weight of around 200 tons. A 5 MW electric motor winds the springs up in 20 seconds from ship's power.
Well-designed springs can deliver the required 1.25 MJ per ton with comparative ease - for instance, that's quite a bit less work than a car suspension spring does (when was the last time you heard of one of them breaking, in normal use?), and it would have to do it far less often than a car spring.
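The figures above can be sanity-checked with a few lines of arithmetic. This is only a rough back-of-the-envelope sketch using the numbers from the post (G is approximated as 10 m/s², and constant acceleration is assumed over the stroke):

```python
# Sanity check of the catapult figures in the post.
mass = 30_000        # aircraft mass, kg
v_total = 65 + 10    # catapult speed + engine contribution, m/s (~145 knots)
t = 1.5              # catapult stroke time, s

accel = v_total / t                   # constant acceleration, m/s^2
accel_g = accel / 10                  # ~5 G
stroke = 0.5 * accel * t**2           # ~56 m stroke length
energy = 0.5 * mass * v_total**2      # ~84 MJ total kinetic energy,
                                      # of which the catapult supplies
                                      # the bulk (~75 MJ)

# Spring side: ~75 MJ from 60 tons of springs is 1.25 MJ per ton,
# and a 5 MW motor winding for 20 s delivers 100 MJ - comfortable headroom.
per_ton = 75e6 / 60 / 1e6             # MJ per ton of spring
recharge = 5e6 * 20 / 1e6             # MJ wound in per 20 s charge

print(accel_g, stroke, energy / 1e6, per_ton, recharge)
```

The numbers come out consistent with the post: about 5 G, a 56 m stroke, roughly 84 MJ of total kinetic energy, and 1.25 MJ per ton of spring against 100 MJ of winding energy available per cycle.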
The whole spring/motor assembly including mountings would weigh about 320 tons, and it would be quite dense, so it might usefully be mounted near the bottom of the ship providing some ballast, and the energy transferred to the flight deck by cables in ducts.
Similar energy/speed fast-moving cables in ducts are used every day in the arrestor gear of aircraft carriers. The spring assembly could of course be mounted almost anywhere in the ship, as required.
You might want two separate housings, one for each of two catapults, mounted in different locations, with independent cables and ducts to each catapult, for operability, combat damage etc reasons.
In extremis, you could probably make one from old car suspension springs from the scrappy - 60 tons of springs, £70 per ton, that's £4,200. And no, I am not kidding. Lose ship's power? Put five hundred burly sailors on the windlasses, if they still have those aboard, and they'll rewind them in about 12 minutes.
It is not unusual for US law and US Courts to claim jurisdiction anywhere in the world, eg they do this over the taxpaying requirements of US citizens.
Microsoft's statement is probably true in terms of US law, but it isn't quite as straightforward as it might seem.
I imagine it goes something like this: Suppose the US Government demands data, backed by a Court order. The US branch office cannot obtain the data themselves, and they ask the UK office. The UK office says no.
What can a US Court do to enforce the order? A very long story, but in the end, nothing substantial. So while they may claim jurisdiction, it doesn't mean much.
To address the wider issue, what Microsoft are _really_ upset about is clouds. First, some law:
Data Protection Act, Schedule 1 part 1, principle 7:
Appropriate technical and organisational measures shall be taken against unauthorised or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data.
Data Protection Act, Schedule 1 part 2 section 11: Interpretation of the seventh principle,
Where processing of personal data is carried out by a data processor on behalf of a data controller, the data controller must in order to comply with the seventh principle—
(a) choose a data processor providing sufficient guarantees in respect of the technical and organisational security measures governing the processing to be carried out, and
(b) take reasonable steps to ensure compliance with those measures.
Another bit of law, about the WTO, but I don't have details to hand - if measures are taken by one country for the purpose of providing data security, they are not actionable under the WTO, even if they restrain trade etc.
And what it comes down to is this: Microsoft say that encryption and their "best practices" provide better security against unauthorised processing than let's say only keeping the data in a local office.
(the data controller is the only person capable of granting authorisation, as the requirement to follow the principles is upon him and no-one else, that's DPA section 4(4) I think offhand).
Which, if Microsoft were correct about the US Government's ability to demand data, would be immediately obvious nonsense - rather than the slightly-less-obvious nonsense it is.
(a UK data controller is required by law to protect personal data in his control against the US government as well as spammers and identity thieves. He's also required to protect it against the UK Government, who if they want it must get it through him).
It's long past time that the UK (and EU/EEA) Information Commissioners gave clear guidance that personal data cannot be stored in clouds. Full stop.
Whether the Police looking at a Blackberry archive is interception or not is actually very much more complicated than what has been said here. I won't go into the gruesome details, but it could go either way.
However, even if it is interception, the Police can still do it. According to RIPA S.1(5)(c) interception is lawful if "it is in exercise, in relation to any stored communication, of any statutory power that is exercised (apart from this section) for the purpose of obtaining information or of taking possession of any document or other property".
The police would be looking at stored communications if they looked at a Blackberry archive, presumably using a statutory power under PACE. The question of whether it's stored so the intended recipient can access it or not doesn't come into it at all.
That may be an important distinction for journalists and ordinary people, but not for the Police - if it's a stored communication they can access it under PACE, no matter whether doing so is interception or not.
There is another question to be looked at though - Would Blackberry keeping an archive of messages be interception? Undoubtedly, yes it would be, see S.2(2)(b).
Would that interception be lawful? S.3(3)(b) says it would be if "it takes place for purposes connected [...] with the enforcement, in relation to that service, of any enactment relating to the use of postal services or telecommunications services."
Afaik there is no enactment forcing Blackberry to keep Blackberry messages, so it's probably illegal for Blackberry to keep an archive [*]. But it's not illegal for the Police to look at it.
[*] unless it's "for purposes connected with the provision or operation of the service" - eg an archive which intended recipients can access. However if Blackberry started keeping an archive specifically so it could be accessed by the Police then they would be intercepting, and doing so illegally.
Albion's Balloon Launched Aircraft.
Not really SF, no "M", and eeeewwwww!
But it'd be filmable.
Not sure I believe this, but..
Greg's email claims: "the FBI implemented a number of backdoors and side channel key leaking mechanisms into the OCF, for the express purpose of monitoring the site to site VPN encryption system implemented by EOUSA".
OCF = OpenBSD crypto framework
VPN = virtual private network, used to encrypt links between friendly sites
EOUSA = Executive Office of the United States Attorneys. Not, as Greg describes, the "parent body of the FBI"; it actually liaises between the US Attorneys and the DoJ. However one of the functions of the 90-odd US Attorneys is to prosecute cases for the FBI..
If true I don't think EOUSA will be very pleased that the FBI have been spying on them, or even attempting to - and they have the clout to do something about it.
@Cameras and overlays are ...
You miss the point - in this type of attack the pinpad is replaced by the bad guys with a fake. It's straightforward for them to collect the PIN from *their* pad, and the card number from *their* reader, long before those are encrypted.
If they wanted to, they could get *their* pinpad/terminal to tell them the 3-DES key in use.
They may occasionally do that in the US so they can get the data from the line, but in UK retail transactions PINs aren't sent to the card processor (they are verified offline by the terminal).
Cameras and overlays are not used in this type of attack
If the terminal is owned by the bad guys (they replace the real terminal with one of their own devising), it reads the PINs from the keypad and the card numbers from the stripes, then sends that data home.
No cameras or overlays are used to collect the PINs - those are mostly only used in attacks at ATMs, not attacks at shop terminals.
A very similar technique has been used in the UK, often at petrol stations, where the card numbers are collected from the chips and the PINs are collected from the keypad.
However that isn't enough to get money or goods in the UK, where chips are usually needed, so the combination of PIN and card number is typically used in countries abroad where chip readers are not used.
I think one of the problems quite a few people have about this is that they assume that once a communication - and please note, RIPA talks about communications, not messages or copies of messages - has been delivered or read then it can no longer be in transmission.
Traditional physical letters and documents behave like the stuff we played with as babies - for instance they can't be in two places at the same time, or both hidden and in view - but electronic communications do not behave like those things. They can be (and frequently are) in many places at the same time, and most importantly they can have been delivered and still be in transmission at the same time.
Subsection 2(7) of RIPA is entirely clear on that point, a communication can have been transmitted and still be in transmission:
" For the purposes of this section the times while a communication is being transmitted by means of a telecommunication system shall be taken to include any time when the system by means of which the communication is being, or has been, transmitted is used for storing it in a manner that enables the intended recipient to collect it or otherwise to have access to it. "
Once you get your head around the idea that a communication does not behave like a physical object, it's all quite simple - but some people do have extreme difficulty with it.
Was the QC simply one of those with this difficulty? I don't know, and the question is clouded by the other politico-legal issues - if read messages are not protected by RIPA then the Police can do things which they cannot legally do if read messages are so protected, and they have plenty of motivation to want to accept "legal advice" which says read messages aren't protected, even if it's clearly wrong.
Perhaps it's all in the choice of advisor. The police choose who advises them - wasn't there a recent case where they employed a very dodgy pathologist, in a case with obvious public interest and accusations of Police misconduct (or even manslaughter), when the most competent and upright pathologist available should have been chosen?
Same old nonsense, yet again.
RIPA section 2(2) says that an interception can only be of a communication "while being transmitted". Section 2(7) says:
" For the purposes of this section the times while a communication is being transmitted by means of a telecommunication system shall be taken to include any time when the system by means of which the communication is being, or has been, transmitted is used for storing it in a manner that enables the intended recipient to collect it or otherwise to have access to it. "
That doesn't say protection stops when the message has been transmitted - in fact it says exactly the opposite. The "advice" is plainly wrong.
A suitably stored message will be in transmission for s.2 purposes (defining interception) even after it "has been, transmitted ".
This "legal advice" is strongly reminiscent of the Watkin Phorm email - self-serving nonsense. So yes, I want to see it too.
re RIP RIPA?
Some of the powers under RIPA do only relate to investigations carried out by public bodies, but these are things like the Commission and appeals Tribunal, where "they" don't want some matters to be heard in open court - however the basics, like the offence of interception, apply to everyone.
Interception? An analysis.
TalkTalk have modified their network so as to make URLs available (to themselves, so they can do things with them). The exception for traffic data in RIPA ss.2(5) does not apply, as parts of URLs are considered to be content, not traffic data (generally speaking the parts after the third slash, but see RIPA ss.2(9)).
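The "third slash" rule of thumb can be sketched in a few lines. This is only a rough illustration (`split_url` is a hypothetical helper, and real classification of traffic data versus content under RIPA ss.2(9) is considerably more involved):

```python
def split_url(url: str) -> tuple[str, str]:
    """Rough illustration of the third-slash rule of thumb:
    everything up to and including the third slash (scheme and host)
    is treated here as traffic data; the remainder (path and query)
    as content."""
    parts = url.split("/")
    traffic = "/".join(parts[:3]) + "/"   # e.g. "http://example.com/"
    content = "/".join(parts[3:])         # e.g. "private/page?q=1"
    return traffic, content

print(split_url("http://example.com/private/page?q=1"))
```

On this view, an ISP that records only the host part of a URL is handling traffic data, while one that records the full URL (as TalkTalk's system must, to do anything useful with it) is handling content.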
TalkTalk's action therefore falls under ss.2(2) of RIPA, and is thus interception. I don't think there is much doubt or wiggle room there, if any.
Next, is it lawful interception, or not? TalkTalk are perhaps in a better position than Phorm were, as they can argue that their action is lawful under RIPA ss.3(3), like virus or spam filtering of emails.
However unlike virus and spam filtering of emails, TalkTalk's action was not necessary to protect the service, nor was it done for that purpose - the web would still work fine [*] without it, while email would, or so it's argued, fail entirely if spam and virus filtering wasn't done.
TalkTalk's action would be made lawful by ss.3(3) if it was done "for purposes connected with the provision or operation of th[e telecommunications] service".
I think instead it was done in order to provide an extra service on top of the basic telecomms service, and thus s.3(3) does not apply - it only applies to the basic message-passing service (passing bits), see the definition of "telecommunications service" in ss.2(1).
So, in my opinion, yes, it's interception and yes, it's a criminal offence under RIPA ss.1(1).
[*] for some TalkTalk value of "fine" ... :(
Apart from the privacy issues which cause honest people to fill in false information etc., no-one expects that a fraudulent website will have correct registrant information anyway.
Name ownership, now that's a different matter - but ICANN seem to have that sorted out, mostly.
@BruceWayne: The TVR does not contain the method by which the card was verified by the reader (unless verification fails). See the paper for details. If it did it would mean a cheap fix was possible, but I don't think there is one.
The Banks will probably have to replace all the cards and terminals, though they might get away with just replacing the cards and putting something in the IAD, which would however be a less-than-satisfactory solution.
@Zerofool2005: Yes, it's not a new attack, but it was demonstrated well and for the first time in public here. I think Steven Murdoch, one of the authors, also did some of the earlier theoretical development.
I do agree that others (Chris Mitchell?) should have been mentioned for previously pointing out the possibility of the attack, but that's maybe because I pointed out the relay attack long before the Cambridge group did their paper on it.
However I wasn't going to demonstrate the relay attack, and hadn't published a paper on it, just posted it to a crypto mailing list.
In their papers cryptologists often take the attitude that it doesn't exist until it's published, which has some merit. But a mention in the footnotes or references would be nice. Otherwise it makes them seem to claim to have invented something when they haven't.
@Homard: TVR stands for terminal verification results, but that doesn't mean much unless you know the context. The paper contains the clearest description of the very complex chip and pin protocol I have ever seen, so if you want to know what a TVR and CVR really are I can only suggest you read it.
Signatures are not the problem
The problem is not verification by signature itself, but that chips are not used in some countries abroad and the system falls back to magstripe.
In the case of a cloned card the signature on the card will be done by the thief, and will not be the signature of the cardholder.
This attack is on stolen cards, not cloned cards.
Signature verification when a chip in the card is used is still reasonably secure, assuming the cashier checks the signature properly - the card is verified as being a real card by the chip, and the signature is verified by the cashier as normal.
Though of course nothing is secure, and attacks are still possible - but liability does not fall on the innocent cardholder here, as the signature can be checked, and the receipt can be tested for DNA, the cardholder's fingerprints, and so on.
For the cardholder, signatures are more secure.
For the banks, they are about as secure if a chip is used to verify a card with a chip, though for a while there was a lot of fraud from cards lost in the post and signed by the thief - however this is fairly easy to defeat and has fallen to very low levels.
The Banks' reason for introducing the PIN was twofold: first to improve on the reliability of cashier verification of signatures by replacing them with an automated method, and second to allow unattended automated sales. The former wasn't a great problem, and the latter was scotched - a curious story.
The HO (mostly, I think) didn't like the idea of unattended automated sales and was going to legislate against it, but the banks convinced the HO that not offering signature verification would contravene the Disability Discrimination Act (somehow!! - but somehow ATMs don't?) and that legislation against unattended payment wasn't needed.
Some petrol stations still use unattended payment (possibly breaking the DDA) but not many, as it's too easy to defraud and the stations are liable when fraud happens.
Re: Legal Interception
As you point out, Google needs the consent of both the sender and the intended recipient. In this case the sender is usually the user, and the intended recipient is the owner of the webserver, though sometimes it may be the other way round.
The user may have given consent (though not when he's switched the feature off) but the owner of the webserver most definitely hasn't. Speaking for my websites, I never will.