Re: Encryption is not made "illegal"
"They" can't serve a technical capability notice on an app developer - only on telcos.
Somebody has been picking phrases out of a Bill without considering the whole. That can be very dangerous. The new law is pretty effing horrendous, but not for the reasons stated in the article.
Section 254 (5)(c): "The obligations that may be specified in regulations under this section include [..] obligations relating to the removal by a relevant operator of electronic protection applied by [..] that operator.."
First, if you aren't a "relevant operator" then it doesn't apply to you. Software developer? nope, you are not a relevant operator, it doesn't apply to you. Write all the crypto code you like (though beware ITAR etc when distributing it).
Second, any obligation on a "relevant operator" which amounted to Interception, ie if anyone new can access the plaintext after the obligation is complied with, would be illegal under s.3.
What section 254 (5)(c) really means is that the SOS will be able to require operators to retain the capability to decrypt ciphertext, where they applied the encryption. So for example if they used TLS with a Diffie-Hellman Forward Secrecy suite, then they couldn't delete the keys, but would have to retain them in case they were required to comply with an Interception Order.
But that's about all it means. The SoS doesn't get the keys, or access to a backdoor (for domestic communications anyway).
However, note that 254 (5)(c) applies to "relevant operators", which includes quite a bit more than ISPs - it includes websites like Apple where members can communicate between themselves - but probably not shopping sites, banks, clouds, etc, which do not perform communications service functions.
This is complex, and arguable either way - but probably not. However, that is something which should have been made clear in the Lords-Commons argy-bargy.
Actually, using carrier bags slows down the rate at which refuse degrades.
Which is a good thing, as it slows down the rate of CO2 production from landfill. So much so that the charge, and the subsequent decrease in carrier bag use, has actually increased overall CO2 production.
You do not want biodegradable carrier bags in landfill (unless you are the landowner and want to repurpose the land quickly), you want bags which will "last a thousand years" or more. Fortunately, most carrier bags are not biodegradable in landfill conditions.
The ostensible target may be comms providers - but the actual target is "relevant operators". It includes a whole lot of other things apart from internet and phone providers (and Apple and Facebook).
"Relevant operators" are persons who provide "any service that consists in the provision of access to, and of facilities for making use of, any telecommunication system (whether or not one provided by the person providing the service) [... including] any case where a service consists in or includes facilitating the creation, management or storage of communications transmitted, or that may be transmitted, by means of such a system."
That would include many commercial sites who use SSL/TLS. If you put a "contact me" link on your web pages, you are a "relevant operator". Gimme your SSL keys!
That's what the Bill actually says, if you read it carefully. Like RIPA, it is opaque beyond the point of obscurity, and it takes a lot of reading.
Good points? Only encryption which has been applied by a "relevant operator" is affected - at least until the Home Secretary makes regulations otherwise (which she can do).
Bad points? It doesn't do anything at all against the clued-up terrorist or criminal. It decreases security for legitimate actors and businesses.
BTW, things said in the Lords (or Commons), even by Government spokesmen, have approximately zero legal significance. What the Courts look at is the wording of the Act.
While they undoubtedly _have_ been doing it all along, it wasn't illegal - it was legal under RIPA etc. Of course RIPA was so obscure, few realised what it actually said.
Apart from some new measures, this is largely just RIPA etc in a new form - but the new measures are worrying ..
One new measure is ICRs - actually not entirely new: there was previous legislative provision for something similar, only it didn't get used in practice.
Although actually, the words ICR or "Internet connection record" do not appear anywhere in the draft Bill ... only in an accompanying explanatory note of no legal significance.
What the Bill really does here is give Treeza the power to say what traffic data is to be retained, and to order it to be retained. Note that she doesn't have to put this through Parliament as secondary legislation, she can just order it to be done.
Another new measure is to redefine traffic data, in part so as to include anything that gets sucked up in a search for traffic data. After the data is captured then there is nothing in the Bill to stop any extra data being used in the same manner as traffic data may be used.
A third new measure is to empower Treeza to require "relevant operators" to retain a technical capacity to make data, including content, available. This would include requirements for modification of systems which provide end-to-end encryption, and/or data storage in eg smart phones.
She would in theory have to introduce secondary legislation to do that, but I can't remember the last time an SI was rejected.
A fourth measure is to extend the scope of what a single full interception warrant can cover - previously it was limited to a single person or premises, in the Bill it can cover a group - eg Muslims? Humans? - there does not seem to be any limit.
Of course, what the draft Bill does not do is what it is primarily supposed to do, ie bring UK legislation into compliance with the EU (Digital Rights Ireland) and UK (EWHC 2092) Court decisions. There are some nods in that direction, but nothing really concrete.
Does it apply to gubbmint spyware?
Allows automatic bug-fix updates? Crap. Ever had to wait for an unwanted automatic update to install over a slow network connection before you can use your computer?
I want to be consulted about everything which installs or modifies software on my computer, every time.
Just to put some rough numbers on it, the main cylindrical part of the device seems to be about 2 meters in diameter and 5 meters long; which would fit into a shipping container with a bit of space for other systems. That's a surface area of about 35 square meters, and a heat flux density of about 2.8 MW per square meter.
Sunlight is about 1 kW/m^2 at the equator at noon, so that's 2,800 times more power per unit area than sunlight. It is equivalent to a black-body radiative temperature of about 2,400C, or 4,300F, about as hot as the filament in an old-fashioned light bulb.
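To put the arithmetic above on a firmer footing, here is a quick Python check. The 2 m by 5 m dimensions and the 100 MW figure are, as stated, rough estimates, not measurements:

```python
import math

# Assumed reactor dimensions (rough estimates, as discussed above)
diameter_m = 2.0
length_m = 5.0
power_w = 100e6  # 100 MW claimed output

# Cylinder surface area: curved side plus the two end caps
area = math.pi * diameter_m * length_m + 2 * math.pi * (diameter_m / 2) ** 2

# Heat flux density if all 100 MW leaves through the casing
flux = power_w / area  # W per square metre

# Equivalent black-body temperature from Stefan-Boltzmann: flux = sigma * T^4
sigma = 5.670e-8  # W m^-2 K^-4
t_kelvin = (flux / sigma) ** 0.25

print(f"area = {area:.1f} m^2")          # ~37.7, close to the ~35 used above
print(f"flux = {flux / 1e6:.2f} MW/m^2") # ~2.65 with the exact area
print(f"T    = {t_kelvin - 273:.0f} C")  # ~2,340 C, in line with the ~2,400 C above
```

The slight differences from the figures in the text come from rounding the surface area to 35 m^2 there.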
But the reactor casing doesn't need to be that hot, indeed it can be quite cool.
The highest steady-state fluid-cooling heat flux density I know of in engineering occurs in regeneratively cooled rocket engines, which typically cool at about 10-20 MW/m^2, though there are examples going up to about 160 MW/m^2 - many times more than needed here.
So removing 100MW of heat from the device is not that great a problem, and certainly not an insoluble one.
What you then do with that heat, well that's not going to fit into a shipping container ... but just the reactor might, indeed would, if it works as advertised.
Pretty much correct, only they drag up one end of the cut cable at a position at least the depth of water away from the break, splice in a new section, then move twice the depth of water, drag up the other end at a position at least the depth of water away from the break, and splice that broken end to the new section.
The new section will be quite long, at least three times the depth of water, and once it is lowered into place there will be a loop of loose cable on the seabed.
If they tried to raise a cable in the middle without cutting it, it would break when being lifted - the added tension would be far above the capability of the cable.
When being laid, a part of the cable which starts at position x at the surface falls in a curve into a position on the seabed some distance away from x.
Sorry if that's not clear, a diagram would make it easy to understand, but describing it in words is trickier.
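In lieu of a diagram, the geometry can be sketched in a few lines of Python, under the simplifying assumption that the raised cable forms two straight legs from the seabed pick-up points to a surface splice above the midpoint (a real cable hangs in a curve, so real lengths are a bit longer):

```python
import math

# Straight-line approximation of a mid-ocean cable repair.
depth = 4000.0  # metres; an illustrative deep-ocean depth, not a real cable's

# The two drag-up points sit on the seabed roughly twice the water depth
# apart (each at least one depth either side of the break).  The splice is
# made at the surface above the midpoint, so each leg of the new section
# runs surface-to-seabed: a right triangle with sides depth (down) and
# depth (along the seabed).
leg = math.hypot(depth, depth)
new_section = 2 * leg

print(new_section / depth)  # ~2.83, i.e. roughly three times the depth
```

Which is where the "at least three times the depth of water" figure comes from; once lowered, the excess over the 2-depth seabed gap becomes the loop of loose cable.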
Yes, pointless mortality.
"experimental stuff is built with as much safety as it is possible" - except that VG weren't doing that, they were desperately trying to pretend their nitrous hybrid technology works, while most rocket engineers have said it isn't suitable for human, never mind passenger, flight.
In that process they threw the already-limited (this is not their first fatal accident) safety measures they employed out of the window - for a start, there is no way a new fuel grain should be tried out in a piloted flight.
The safety culture was and is wrong, PR flacks overriding the engineers and safety people - the vice-president in charge of propulsion, the vice-president in charge of safety, and the chief aerodynamics engineer have all recently resigned.
They used aircraft-technology safety techniques, which do not work with rockets.
And the people who make the decisions do not understand rocket science.
Of course it hasn't been ruled out - the aliens in question being the space dragons .. or the symbiotic germs, who have devolved from a previously intelligent state. Or some other as-yet-unseen symbiotic aliens, who were kept inactive by a war with the germs. Or..
There is only one conductor - the copper tube in the diagram. One end is kept at say 10,000V (compared to a big copper ground plate in the ground) and the other is at 0V, connected to a big copper ground plate in the ground.
The copper tube is fairly thick in everyday terms; but considered as an area/length ratio, it is very thin, and its resistance is therefore considerable. The current in the copper is only about 2 amps.
The voltage across each repeater is only about 30V, though the voltages at the two ends of a repeater might be 9,910V and 9,940V compared to ground.
Suppose there are 100 repeaters. Each repeater uses say 60W, and takes 30V at 2 amps; with a 10,000V feed, the remaining 7,000V drop is lost to the resistance of the copper tube.
That's maybe 15 years out of date; I think a modern cable uses a bit more power per repeater and fewer repeaters, but it should give some idea. Also, some use double-ended power supplies.
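A quick sanity check of those rough figures in Python (all numbers are the illustrative ones above, not the specs of any real cable):

```python
# Toy model of single-ended submarine cable powering: constant-current
# feed from one end, far end grounded, repeaters in series.
feed_voltage = 10_000.0   # V at the shore station
current_a = 2.0           # constant current through the single conductor
n_repeaters = 100
v_per_repeater = 30.0     # V dropped across each repeater

repeater_drop = n_repeaters * v_per_repeater       # total across repeaters
copper_drop = feed_voltage - repeater_drop         # lost in the copper tube
power_per_repeater = v_per_repeater * current_a    # W drawn by each repeater
copper_loss_w = copper_drop * current_a            # W dissipated as heat

print(copper_drop)         # 7000.0 V along the copper conductor
print(power_per_repeater)  # 60.0 W per repeater
print(copper_loss_w)       # 14000.0 W wasted in the copper
```

So in this sketch the copper eats more than twice the power the repeaters actually use - which is why the conductor's resistance matters so much.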
Another method is the teeny drop of hydrofluoric acid (HF). This needs to be automated to be hard-to-detect.
You remove the cover of the fiber, then cover it with wax, leaving a teeny hole on one side. You set up the HF-resistant optical tap pointing at the hole, then immerse it and the cable in hydrofluoric acid, which eats the outer layer of the fiber away, all the while monitoring the light which escapes from the fiber. When you get just enough light so you can read the traffic, you neutralise the hydrofluoric acid.
( The eventual clear plastic replacement for the HF liquid needs to have the same refractive index as the HF - but it is easy to change the RI of the HF )
This has the advantage over bending that the tapped light comes from a teeny source, making it more efficient and thus harder to spot. Also, you don't need to create enough slack to bend, which can be a problem in underwater cables.
Yes. Plus there is no service interruption if the cable is bent.
The USS Jimmy Carter deploys ROVs to find and expose the cable. Then they lower a shirt-sleeve-environment tapping room on a wire to the tapping point (why would they need to tap the cables inside the submarine?), and Vodafone / Verizon / whoever subcontractors do the actual bending and tapping.
Best place to bend-tap is just after a repeater, where the signal is strong and the bend-tap is least likely to be noticed. Best repeater is probably the first in the chain, which will be nearer land collection points and at a shallower depth.
Looking at a map of cables, two interesting places for GCHQ/NSA to tap cables - cables run by furruiners whose traffic they might want to look at, and which could not be tapped by a UK/US legal requirement - are in the Eastern Med and the Gulf. They would want some sort of land stations, like the GCHQ outstations at Seeb in Oman and Ayios Nikolaos in Cyprus, nearby to make backhaul easier.
As for NSA, their interests might also include Fortaleza in North Brazil, the British Virgin Islands and a couple of locations in the China sea and Sea of Japan. I imagine BVI would not be a problem, I do not know whether they have anywhere suitable for the other locations.
But they might use buoys for backhaul instead - a highly-directional high-capacity laser transmitter from a buoy which is raised say once per pass to a satellite could be made almost impossible to spot. The part which gets raised above the surface might not need to be more than an inch or so in diameter.
The USS Jimmy Carter then changes the very large batteries left on or buried under the seabed every ten years or so.
If I may quote Robert Morris, former Chief Scientist at NSA: "Never underestimate the attention, risk, money and time that an opponent will put into reading traffic." I think the author of the article is guilty of that.
AFAICT, the recent Apple cloud leaks were caused by a password-guessing attack. In order to guess a password, the script tried 500 or so passwords for each username.
Now if Apple had been monitoring failed password attempts, and stopped repeated failed attempts, especially when a whole bunch of them for different usernames came from one IP location, this would not have worked. Apple were not using passwords in the right way.
AFAICS, Apple have now started to do this, which is why and how the attack has stopped.
Another method to defeat such attacks might be for the login username to be different from the public username, making it hard for an attacker to guess a login username.
More, if Apple had emailed the celebs saying that there had been several failed password login attempts, especially those from unusual IP addresses, and the celebs had said "I didn't do that" then Apple could have been on an especial watch (and could probably have caught the attackers).
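A minimal sketch of the kind of failed-attempt monitoring described above: per-account and per-source-IP counters in a sliding window, with a lockout once either passes a threshold. The names and limits here are illustrative, not any real provider's policy:

```python
from collections import defaultdict
import time

FAIL_LIMIT_PER_ACCOUNT = 5
FAIL_LIMIT_PER_IP = 20     # catches one address spraying many usernames
WINDOW_SECONDS = 600

failures_by_account = defaultdict(list)
failures_by_ip = defaultdict(list)

def _recent(events, now):
    """Keep only failures inside the sliding window."""
    return [t for t in events if now - t < WINDOW_SECONDS]

def record_failure(username, ip, now=None):
    """Record a failed login; return True if further attempts should be blocked."""
    now = time.time() if now is None else now
    failures_by_account[username] = _recent(failures_by_account[username], now) + [now]
    failures_by_ip[ip] = _recent(failures_by_ip[ip], now) + [now]
    return (len(failures_by_account[username]) > FAIL_LIMIT_PER_ACCOUNT
            or len(failures_by_ip[ip]) > FAIL_LIMIT_PER_IP)
```

A 500-guesses-per-username script trips the per-account limit almost immediately; a script making a few guesses against each of many usernames from one address trips the per-IP limit instead.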
Don't get me wrong, passwords are a totally shit method of identification, and a really bad method of authentication. But my banks use them online, along with other methods: one (Lloyds) sensibly, one (Tesco) in an overly paranoid manner which actually detracts from security.
And like PIN passwords for debit and credit cards, if used correctly online passwords seem to work well enough for money.
If I make repeated failed password login attempts to my banks they lock me out, and want me to contact them. Very sensible, if annoying. However yesterday I forgot my itv player password, and made several wrong attempts to log in - and got shut out for 30 minutes. I mean, WTF?
Passwords are useful in their place, sometimes with added password-type or other security when needed, sometimes not. Sometimes they are used in stupid ways - why does ITV Player need me to log in with a password anyway?
Passwords cannot usually protect against coercive attacks, but for everyday use where they are used appropriately and monitored suitably, they are still the worst - apart from everything else.
The real problem is that people do not use them appropriately.
C'mon, el Reg, these are commands, not requests.
FYI, "relevant" data is defined in ss.2(3) as data in the Schedule to the Data Retention (EC Directive) Regulations 2009, which is exactly the same data they could demand retention for before the ECJ judgement - so no new classes of data. Let's not get overexcited about that.
I don't know why that was hidden away in ss.2(3), but a lot of people have missed it. The Bill does not permit the required retention of any new classes of data.
There are some real power grab issues, but that's not one of them.
However you are right about there not being a time limit on the retention of data under a ss.2(1) Notice - only a 12-month limit on data retained under some putative future Regulations to be made under ss.1(3) (which can't be implemented until after Parliament gets back anyway). A Notice, on the other hand, can require an ISP to retain data forever.
I discard outright any possibility of it being an outside website hack - too hard, an attacker would need access to the TC website, the Sourceforge TC site, and to the code signing key.
The "Warrant Canary" theory doesn't seem to make a whole lot of sense either. It's possible, but why recommend BitLocker? When did someone have time to write all those code changes between being served the warrant and having to execute it?
The theory which makes most sense to me is that it was an at least partly commercially-motivated self-takedown by the devs.
The recent change in name on the otherwise "same old code and binary signing key" is possibly significant here - the developers, or perhaps just some of them, may want to start up a commercial product in the new name.
Their commercial aspirations are well-known, witness the previous license issues, the failed crowdfunding and donations campaigns, the "TrueCrypt Developers LLC" registered in Nevada (thanks to Piergiorgio Sartor for that info). And they already own a good chunk of the IP rights in the TrueCrypt source.
The ending of the project was graceful, to some extent at least - people were not left with unrecoverable archives, and temporarily acceptable but not-as-good alternatives were suggested. A whole lot of work went into that.
It is obvious that this wasn't done in the heat of the moment - it must have taken at least several weeks to do the code revisions for the 7.2 release. There have also been hints (eg the robots.txt file) for about six months that something might be happening.
The only reason I can think of for doing all that work is maintaining reputation (or technical reputation at least - TrueCrypt devs are not exactly known for being people people, or for being particularly into "free open source" either).
No reasons why the code is/may be broken are given. Actually the "WARNING: Using TrueCrypt is not secure as it may contain unfixed security issues" does not even actually say TrueCrypt is broken, just that it may be.
And the unfixed issues might be fixed later, in the commercial version.
Which would have been independently audited... at no cost to TrueCrypt...
I have seen nothing to suggest that the people who watched and "encouraged" believed that they were looking at a real person.
First, the $280 million budget of the BULLRUN "dirty tricks" program does not include the cost of the "advanced cryptanalytic capabilities" NSA is developing. We don't know exactly how much NSA are spending on that, but the combined NSA and US armed forces cryptanalytic budget is said to be just over $10 beeeelion.
RC4? Well, it ain't that great, but - the NSA have lots and lots of encrypted traffic they want to decrypt. It comes in chunks called sessions - roughly, the time you "are connected to" a single website - and each session has a different key.
If the NSA had a method to break RC4, they would have to break it again and again for each session. That's a huge amount of work. There are some other problems too, about obtaining the needed plaintext - you can't expect to break a RC4 session key from just examining the ciphertext, there isn't enough of it. You need a crib. Not impossible, but again it's a lot of work.
It would be far more effective to attack the mechanisms by which the session keys are set up - mostly RSA, though people sometimes use ECDHE instead. The big websites only changed their RSA keys every couple of years. Break one of those and you can easily calculate several million, or even several billion, session keys.
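The economics can be illustrated with textbook RSA and toy numbers. This is nothing like production TLS (no padding, tiny primes), but the point survives: break one server private key and every recorded session encrypted to it falls out:

```python
import random

# Tiny textbook RSA key (illustrative only - no padding, toy primes)
p, q = 61, 53
n = p * q                   # modulus, 3233
e = 17                      # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)         # private exponent; "breaking" the key = finding this

# Many recorded sessions, each with its own session key, all encrypted
# under the same server RSA public key (as in TLS RSA key transport)
session_keys = [random.randrange(2, n) for _ in range(1000)]
recorded = [pow(k, e, n) for k in session_keys]

# One successful break of the private key decrypts every recorded session
recovered = [pow(c, d, n) for c in recorded]
assert recovered == session_keys
```

With an ephemeral DH suite, by contrast, there is no single long-lived secret whose compromise opens all the recorded traffic - each session would have to be broken separately.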
Personally I think they may well have found a method to break RSA - each break might be expensive, but as I said they can get millions of session keys from a single break. They may have a method to break, or partly break, ECDHE instead or as well, but my money is on RSA.
And it doesn't have to be RSA-2048 either - there are petabytes or more of old ciphertext which NSA would love to decrypt, collected over many years, which was protected by RSA-1024. Heck, until a few weeks ago the vast majority of internet SSL/TLS sessions were only protected by RSA-1024 or equivalent. I think it's still well over 50%.
I haven't heard the Independent claiming they have had sight or have copies of the Snowden documents, just that "information on its activities was contained in the leaked documents obtained from the NSA by Edward Snowden".
However I expect it's all just some sharp-eyed reporter on the Independent repeating claims from an article in the Guardian on 21 Jun: http://www.theguardian.com/uk/2013/jun/21/legal-loopholes-gchq-spy-world which mentions the GCHQ support station in Cyprus (Ayios Nikolaos Station), and says they tap cables etc.
As that earlier Guardian article is at least partly based on the Snowden documents, I suppose the Independent isn't lying when it says their article is too.
Yes, or at least cryptologists and cryptographers generally think so. And if NSA can crack the ciphers in use today, there are better ciphers which they can't crack - those ciphers aren't used because they are expensive to use, not because they are secret or illegal or insecure.
Though in fact, unless you are a real terrorist, it probably doesn't matter whether NSA/GCHQ can crack your codes.
WOT??? you say???
Think about it. Being able to crack a cipher is only useful to the crackers if some people think you can't crack it, and those people then use the cipher to send messages whose content they want to keep secret from the crackers.
So if NSA/GCHQ can in fact crack AES-128 (unlikely) or RSA-1024 (just about possible), they aren't going to tell anyone they can. This includes everybody except maybe the top secret terrorist catchers (or maybe today's Watergate people if you are a cynic), but it most definitely doesn't include the FBI/Police authorities who deal with everyday crimes like drug dealing, kiddy porn, or murder.
Google "Churchill Coventry ULTRA" for an example of this.
Now NSA/GCHQ may attack eg hidden services using worms, and get a little upset when people find out how they did it - but if they can crack AES-128 (and again, I don't think they can), they aren't going to expose that capability for anything less than a 9/11 or nuclear attack. And possibly not even then.
Oh dear. Expecting a source to compile GnuPG and TrueCrypt is daft at best, and injurious to sources at worst. Tor is vulnerable to a global adversary like GCHQ. Hidden services are vulnerable too - they only take one crack to break.
I suppose if El Reg doesn't know how to protect journalists and sources then the Guardian can't be expected to either, but it really isn't rocket science.
Start with the paper's website. Use SSL/TLS as default there, with a DH suite like DHE-RSA-AES256-SHA with a 1,536-bit RSA key. That means when people read the paper the communications are encrypted, and the use of a DH suite means that they can't use RIPA to force you to give up the paper's private key.
Now add random-length cover text - this means that the sizes of files are also obscured, so GCHQ/NSA can't say "that file is 45,678 bytes long, it's the image on page 32". Also add random-length traffic from the reader to the paper.
Now add a dropbox, and anyone can send files to the paper in encrypted form, so someone who is tapping the internet can't read it, and it's in the midst of a whole lot of other traffic. A dropbox per reporter is good - maybe beside the byline?
You will probably get lots of rubbish, which is all good cover - you don't have to read it.
Arrangements for reporters' communications, as opposed to sources or members of the public, are similar.
A full analysis will be £1,200.
As far as I can see what reportedly happened to Miranda most certainly wasn't legally sound.
Apart from some Northern-Ireland-specific stuff, the only purpose for which a person can be questioned (or detained) under Schedule 7 of the Terrorism Act 2000 is "determining whether he appears to be a person falling within section 40(1)(b)" ie, whether he appears to be "a person who ... is or has been concerned in the commission, preparation or instigation of acts of terrorism."
It seems Miranda's detention came nowhere near to falling under that.
There may be other laws under which he could have been questioned and detained, but not under Schedule 7 of the Terrorism Act. IANAL.
If you can crack it you get the key - which may be used for other files which you don't have.
I would hope the authors (none of whom seem to be crypto mainstream) are not responsible for the puffery in this article. It's perhaps an advance in terms of coding theory, but not cryptography (which is completely different).
Cryptographers already know about low entropy in plaintexts and passwords. They don't often consider uniform sources, and they hardly ever consider asymptotic equipartition to be relevant.
In real life, when considering resistance against a brute force attack, cryptographers typically assume that a plaintext is known, and therefore has entropy of 1. For advanced situations, they assume a chosen plaintext with an entropy of zero.
These are the main forms of theoretical brute force attack on a cipher. Some are pretty unlikely in real life, but a cipher which is not resistant to all of these will be rejected out of hand:
Ciphertext-only attack - the attacker has only the ciphertext and what he knows about the sender - eg he speaks English - to help him.
Known plaintext attack - the attacker can find the plaintext for one message, and wants to find the key so he can decrypt more messages sent with the same key.
Chosen plaintext attack - the attacker can trick the sender into encrypting a message of his choice. In some cases this can be more useful when trying to find the key.
Adaptive chosen plaintext attack - the attacker chooses a plaintext and gets the sender to encrypt it, then can choose another based on the results of the first encryption and trick the sender again. And so on.
Chosen ciphertext attack - the attacker can get the recipient to decrypt messages of his choice. Again he wants to find the key.
Adaptive chosen ciphertext attack - as in the adaptive chosen plaintext attack above but with ciphertexts.
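As a toy illustration of the known-plaintext case, here is a repeating-key XOR "cipher" - far weaker than anything a cryptographer would field, and I assume the attacker already knows the key length, but it shows the shape of the attack:

```python
# Toy repeating-key XOR cipher: trivially vulnerable to a known-plaintext
# attack, which makes it a convenient illustration of the attack model.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"SEKRIT"  # the secret the attacker is after

# The attacker has one (plaintext, ciphertext) pair...
known_pt = b"ATTACK AT DAWN, BRING SANDWICHES"
known_ct = xor_cipher(known_pt, key)

# ...and XORs them together to recover the keystream, hence the key
keystream = bytes(p ^ c for p, c in zip(known_pt, known_ct))
recovered_key = keystream[:6]  # key length assumed known here
assert recovered_key == key

# That key now decrypts every other message sent under it
other_ct = xor_cipher(b"RETREAT AT NOON", key)
print(xor_cipher(other_ct, recovered_key))  # b'RETREAT AT NOON'
```

Real ciphers are designed so that no amount of known (or chosen) plaintext lets the attacker do this faster than trying every key.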
I agree, a contract would not trump RIPA - but no offense under RIPA has been committed. Here is section 2, subsection 5 of RIPA:
"References in this Act to the interception of a communication in the course of its transmission by means of a postal service or telecommunication system do not include references to—
(a) any conduct that takes place in relation only to so much of the communication as consists in any traffic data comprised in or attached to a communication (whether by the sender or otherwise) for the purposes of any postal service or telecommunication system by means of which it is being or may be transmitted; "
If the data you are giving out is "traffic data" (as defined elsewhere in RIPA, see my next post) then giving it out is not interception, and therefore is not an offence under RIPA.
Sorry, but it isn't illegal under RIPA.
The data for sale is classified under RIPA as "traffic data", and you can do anything you like with that without it being classified as interception (see RIPA s.2(5)) - consequently the order you mention does not apply, as no interception (as defined in RIPA - which is not even close to the everyday definition) has taken place. :(
It might be covered under the Data Protection Act, which only covers anonymised or aggregated data if the anonymisation or aggregation is not reversible - and many people think it is almost impossible to do that irreversibly.
However, even then it is not a criminal offence under the DPA.
Heck, it isn't even a criminal offence under the DPA to sell sensitive personal data. It's just a "breach of duty", and the civil penalties are paltry.
Cesium-133, as used in atomic clocks, is not radioactive (and afaik it's not "nuclear material", whatever that is).
T-Stoff was 80% or 85% (there were several versions) hydrogen peroxide stabilised with less than 1% oxyquinoline or more often phosphate. The rest was water.
Yes I know that Wikipedia says T-Stoff was 20% oxyquinoline, but Wikipedia is wrong. 80% peroxide with 20% oxyquinoline is a highly sensitive - breathe wrong and it goes off - high explosive.
Or it would be, if anyone was insane enough to try and make it.
The other half of this (which is still only half of the problem) is that it is good to have a match composition which doesn't produce much gas - for instance, for the silicon match described the desired reaction is solid_oxidiser plus silicon gives solid silicon_dioxide (aka sand) plus spent_solid_oxidiser. There are no gaseous products.
We need some gaseous products though, to spread the hot molten extra silicon about. That's not hard though, as silicon plus solid_oxidiser doesn't burn nicely for any normal versions of solid_oxidiser, and the usual compositions necessarily include some gas-producing fuels.
But match compositions for vacuum use can't produce too much gas, or they will burn too cold, and the spray of hot liquids will be cooled so they are not hot enough to ignite the fuel grains, especially if the fuel grains are very cold.
And here we come to the other half of our overall problem, which our gallant 'nauts may face soon. Even if the fuel grain is ignited - it may then go out. The cooling from the expansion of the gases from the ignited face may be enough to cool the burning surface of the grain to the point where it cannot ignite the next layer of grain below.
By the way, if you doubt that cooling by expansion can do that, have a look at this:
It's the exhaust of a liquid hydrogen / liquid oxygen rocket engine, which was about 6,000F when it was burnt, and which is below freezing point by the time it has expanded enough to reach the nozzle exit. So much so that icicles develop on the end of the nozzle.
Transfer of heat from a match to a fuel grain is not usually done by conduction, but by hot gas.
Burning match composition produces hot gas initially at the density of the composition - in other words, the gas would be at what we might consider a very high pressure. The gas then expands according to its surroundings: very high pressure if it's tightly contained, lower if it is open to some surrounding atmosphere.
If the surrounding gas pressure is very low then the gas expands more than it would do at atmospheric pressure - and the important point here is that when gas expands it cools. At lower pressures it expands more, and therefore cools more. The gas which is supposed to ignite the fuel grain is now cooler, and also less dense (so it has less heat-carrying capacity), than it would be at atmospheric pressure.
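The size of the effect can be put in rough numbers with the ideal-gas adiabatic relation T2 = T1 * (p2/p1)^((gamma-1)/gamma). The figures below are illustrative (real combustion products deviate from ideal-gas behaviour), but the trend is right:

```python
# Rough numbers for cooling by expansion of match-composition gas.
# Illustrative values, not measurements of any particular composition.
gamma = 1.3   # heat capacity ratio, typical-ish for hot combustion gases
t1 = 2500.0   # K, gas temperature as produced

for pressure_ratio in (1.0, 0.1, 0.01, 0.001):  # p2 / p1
    t2 = t1 * pressure_ratio ** ((gamma - 1) / gamma)
    print(f"expand to {pressure_ratio:7.3f} of initial pressure -> {t2:6.0f} K")
```

Expanding into a near-vacuum (the 0.001 line) drops the gas from 2,500 K to around 500 K - nowhere near hot enough to ignite a cold fuel grain, which is exactly the problem described above.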
The standard way to solve this is to use medium-grain silicon powder or something like that in the match. The outsides of the grains of silicon burn in the match and get hot, and red hot globules of molten silicon (note it's a liquid, not a gas, so it doesn't cool by expansion) splash over the surface of the fuel grain, igniting it.
Thermite would probably work just as well; the hot iron droplets produced would not cool by expansion, as they are not gases.
Incidentally, IIRC (and IANAL) I think it would be legal to do this in Spain, but it would not be legal to make a thermite match composition in the UK. However, if you wrote to the HSE they might well give you a dispensation; they are quite good about that sort of thing.
I do not know whether a Cyberoam-type box could be used to let a bad guy read plaintext ssl traffic, or if so how easy it would be to do, even if the user's browser was set to accept Cyberoam-signed certificates and the box was operated by a bad guy who had access to the wires. It should not be possible, but ..
Just having Cyberoam-signed certificates accepted is not enough to be able to read plaintext - in order to do that the bad guy needs to know the secret key to the public key in the certificates, and the secret key is held "securely" in the box.
Without the secret key, even if a certificate is accepted, an authenticated SSL session cannot be set up - the secret key is what proves that whoever the user is talking to really is the server named on the certificate, since only that server should know the secret key matching the public key in the certificate.
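The "proof of key possession" point can be seen in miniature with textbook RSA. Tiny, insecure numbers, purely illustrative - real TLS is far more involved, but the principle is the same:

```python
# Toy illustration of why a certificate alone is useless without its
# private key: the server proves identity by a computation only the
# private-key holder can perform. Textbook RSA, NOT real TLS.

p, q = 61, 53
n = p * q          # public modulus (this is what's in the certificate)
e = 17             # public exponent (also in the certificate)
d = 2753           # private key: e*d = 1 mod (p-1)(q-1); held in the box

nonce = 1234       # challenge from the client (must be < n)

# The genuine key holder "signs" the nonce with the private exponent.
signature = pow(nonce, d, n)

# Anyone holding the certificate can CHECK the proof with the public key...
assert pow(signature, e, n) == nonce

# ...but an impostor who only has the certificate (n, e) cannot produce a
# valid signature; any other value fails verification.
forged = (signature + 1) % n
assert pow(forged, e, n) != nonce
print("key-possession proof verified; forgery rejected")
```

So accepting the certificate only gets the bad guy to the front door; without the matching secret key he still can't complete the handshake as that server.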
The box itself can read ssl traffic plaintext, but the plaintext content should stay in the box and be protected by the "secure hardware" which also protects the certificate key.
I do not know whether this is actually the case, never having seen inside a Cyberoam box - I rather suspect that some diligent digging inside the box might give access to plaintext traffic, but that it would not be straightforward. That's just a guess though.
However the box could be set to eg detect codewords and reject comms containing them, and I'd expect it would also alert the box's operator, so reusing CA keys is a definite hole.
It is interception (as defined in law - the legal definition bears little resemblance to the everyday meaning) as both parties have not agreed to it, but it isn't illegal under UK interception of communications legislation. It's a bit like scanning emails for viruses - in fact it's almost identical to scanning emails for viruses.
The relevant law is subsection 3(3) of RIPA:
"Conduct consisting in the interception of a communication is authorised by this section if—
(a) it is conduct by or on behalf of a person who provides a postal service or a telecommunications service; and
(b) it takes place for purposes connected with the provision or operation of that service [...]"
The cyberoam box is operated by the office 'net, who are providers of a communications service (as defined), so a) is complied with.
The purpose of the interception is to protect the office network from viruses etc, and the office 'net might not operate if the protection was not in place, so b) is also complied with, and the interception is authorised and therefore lawful.
Well DUH! again, but this time it's on me.
I suggested that a variable in-use key should be used, but that the CA key could be kept the same for different users. And of course that doesn't close the hole, the CA key should be unique to each office/subnet.
Suppose executive Bob sets his browser to accept keys signed by Cyberoam CA key "office". He then takes his laptop to a different place, where a bad guy has set up a Cyberoam box he bought on eBay to intercept traffic. Bob's browser will accept certs signed by the bad guy's bought-on-eBay box.
Getting from there to plaintext traffic may be easy or hard, depending, but it would at a minimum allow eg keyword detection. It's a definite hole.
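A toy sketch of why the shared CA key is a hole. An HMAC stands in for a real CA signature here, and all the names and keys are invented for illustration:

```python
# Toy model (HMAC standing in for a CA signature - NOT real PKI) of the
# shared-CA-key hole: if every box ships the same CA key, a second-hand
# box can mint certificates the user's browser will accept.
import hmac, hashlib

SHARED_CA_KEY = b"same-key-in-every-box"   # the hypothetical reused CA key

def issue_cert(ca_key, site_name, in_use_pubkey):
    """A box 'signs' a certificate binding a site name to an in-use key."""
    body = f"{site_name}|{in_use_pubkey}".encode()
    return body, hmac.new(ca_key, body, hashlib.sha256).hexdigest()

def browser_accepts(trusted_ca_key, cert):
    body, sig = cert
    expected = hmac.new(trusted_ca_key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

# Bob's office box issues a cert; Bob's browser trusts the shared CA key.
office_cert = issue_cert(SHARED_CA_KEY, "www.example.com", "office-key")
assert browser_accepts(SHARED_CA_KEY, office_cert)

# The bought-on-eBay box holds the SAME CA key, so its cert for the same
# site name is accepted by Bob's browser too - that's the hole.
ebay_cert = issue_cert(SHARED_CA_KEY, "www.example.com", "attacker-key")
assert browser_accepts(SHARED_CA_KEY, ebay_cert)

# With a unique per-office CA key, the eBay box's certs are rejected.
assert not browser_accepts(b"unique-office-ca-key", ebay_cert)
print("shared key: forged cert accepted; unique key: rejected")
```

The fix falls straight out of the model: make the trust anchor unique per office and the second-hand box's certificates go nowhere.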
Of course a shared key is also a hole for the usual reasons as well, eg someone at Cyberoam knows it, and it may be lost, demanded, sold - only people you trust can betray you, and there is no need to trust Cyberoam with the key.
Also "secure hardware" has been broken into before now; someone might spend 100 million to be able to break into a box he bought on eBay - and then he gets the key to all the kingdoms, whereas if a unique key is used for each office he gets bupkis.
And so on ...
Well, duh! Twice!
Well Duh! twice - first, if TOR didn't realise that's how those things work, then they ought to have. Actually I'm pretty sure I have heard people from TOR mention it, several times, so maybe they just forgot.
Well Duh! again though, to Cyberoam, for reusing the same key. That is actually a pretty major security hole.
It can be fixed fairly easily, but maybe only by changing out the boxes. The in-use private key should be internally generated, changed at intervals, and never leave the box.
Cyberoam boxes MITM SSL sessions in order to detect viruses. They are usually connected between the wild wild web and the office subnet. That's about the only way virus detection can be done on a subnet basis and still allow SSL connections, and it's how everybody else does it too.
To do the MITM they need to use a Certificate which is generated in the name of whoever the user is connecting to, so the user's browser thinks it's actually a genuine certificate from the website the user is connecting to. In order for the certificates to be accepted the user (or the office BOFH) has to set their browser to accept certificates issued by Cyberoam.
The certificates contain a key, which need not be the same key as the CA key; there are advantages to having the certificate key the same, eg it makes it easier to install Cyberoam as a CA, or trusted Certificate Authority, necessary for its proper operation. The CA key should then be used to sign the in-use key in the faked certificates.
My first comment was a bit long, and part of it may be hidden - there's a link at the bottom left to "expand comment", and display the rest of it. The answer to your question, and more, is there.
In the first instance, 5 MW of electric motors driven by ship's power rewind the springs in 20 seconds (the carriers have 40MW turbo-electric drives). Or you can use a lot of sailors on windlasses, or a diesel engine, or whatever you like really.
Let's see. The A-10's GAU-8 "CEP" from 4,000 feet is 80% within a 20-foot radius. The GBU-39 (the smallest smart bomb, not widely deployed as yet) has a 50% CEP of 25 feet, with a further 100% lethal blast radius of 26 feet, and lethal shrapnel out to who knows where.
If the A-10's pilot is reasonably good, and especially if (s)he knows where I am as a friendly, I'd much rather be 100 feet from the aim point of an A10 gun strike than 100 feet from the aim point of even a teeny tiny smart bomb.
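To compare the two quoted figures on equal terms, they can be pushed through a simple Rayleigh model of radial miss distance (a common assumption for circular dispersion; the model choice is mine, not from the quoted specs):

```python
# Convert the quoted accuracy figures to a common yardstick using a
# Rayleigh model: P(impact within r) = 1 - exp(-r^2 / (2*sigma^2)).
import math

def sigma_from_fraction(radius_ft, fraction):
    """Rayleigh sigma such that `fraction` of impacts fall within radius."""
    return radius_ft / math.sqrt(-2.0 * math.log(1.0 - fraction))

def fraction_within(radius_ft, sigma):
    """Fraction of impacts inside radius_ft for a given sigma."""
    return 1.0 - math.exp(-radius_ft**2 / (2.0 * sigma**2))

# GAU-8: quoted as 80% of rounds inside a 20 ft radius (from 4,000 ft).
sigma_gun = sigma_from_fraction(20.0, 0.80)

# GBU-39: quoted 50% CEP of 25 ft.
sigma_bomb = sigma_from_fraction(25.0, 0.50)

# Implied 50% CEP of the gun, for an apples-to-apples comparison:
cep_gun = sigma_gun * math.sqrt(2.0 * math.log(2.0))
print(f"gun: sigma ~{sigma_gun:.1f} ft, implied 50% CEP ~{cep_gun:.1f} ft")
print(f"bomb: sigma ~{sigma_bomb:.1f} ft (50% CEP 25 ft)")
```

Under this model the gun's implied 50% CEP comes out around 13 feet against the bomb's 25 feet - and on top of the tighter dispersion, a gun round has no 26-foot guaranteed-lethal blast radius, which is the bystander's real concern.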
Agreed, only some A-10s are kitted out with the "newfangled deelies" , but it's a known upgrade. Which also includes the ability to drop smart bombs accurately, if that's what you really want to do...
In a carrier-based role, I'd like to give the A-10 a bit better air-to-air capability, but I'd also put some air superiority fighters (and 2 or 3 Hawkeyes, and a COD) on the carrier, for local defense if nothing else. But I don't know how much it would cost to modify an A-10 to fly from a carrier.
I also don't know why it's better to drop a bomb from 30,000 feet than a few hundred or a few thousand feet, but that's another story.
Oh, and before you ask - the reason for having two springs per catapult is so you can wind one up this much, and wind the other up that much, and thereby vary the total thrust produced on the wire, to adjust for different loads and different aircraft.
It's not as sexy as an electric catapult - but it's a whole lot cheaper, and a whole lot more reliable to boot.
My tongue was not (entirely) in my cheek, nor even close - the purpose of the fusee (as in a watch or clock, or even some crossbows) is to get the near-constant thrust profile right. It's easy to do, and you can get any thrust profile you might want.
From the wikipedia entry for fusee: G. Baillie stated of the fusee, "Perhaps no problem in mechanics has ever been solved so simply and so perfectly."
I don't know whether it would be possible to launch A-10s from carriers, but that would be - something!
Air supremacy is good, and even necessary, and light bombers and other ground attack aircraft are also useful, but for CAS there's nothing to beat the A-10 - and CAS wins battles.
I had initially thought that the F-35Bs could be dismounted from the carriers and used in another, Harrier-like, role: eg in a zero length field situation from a clearing in the jungle, or the Falklands.
However I find they need a ski-jump to get any significant weight of bombs - F-35s are primarily light bombers - into the air from a zero length field. They would have to operate from ski-jump carriers or from bases with runways, with a fairly short range and a considerably lower (than eg an F-35C) bomb load from runways.
F-35Bs would be of almost no use if dismounted in zero length field situations, and of very little use in runway tactical/strategic situations unless the runway was unusually close to the action.
So, on to electric catapults which cost billions - or rather, not on. But wait-a-bit, why electric catapults?
A 30,000 kg aircraft needs to be accelerated to takeoff speed in about 1.5 seconds - say 65 m/s from the catapult and 10 m/s from the aircraft's engines, for a total of 75 m/s (or 145 knots). The aircraft accelerates at 5G maximum, and the catapult path is 56 m to 75 m (roughly 185 to 250 feet) long.
That's somewhere around 75 MJ of kinetic energy supplied by the catapult. I agree that an electric catapult of that energy and power would be expensive (although £2 billion still sounds a bit OTT), but what is really needed is a simple big spring - or rather two 30 ton contrarotating springs with a fusee, in a housing, with a total weight of around 200 tons. A 5 MW electric motor winds the springs up in 20 seconds from ship's power.
Well-designed springs can deliver the required 1.25 MJ per ton with comparative ease - for instance, that's quite a bit less work than a car suspension spring does (when was the last time you heard of one of them breaking, in normal use?), and it would have to do it far less often than a car spring.
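The arithmetic hangs together, roughly. A quick sanity check using only the figures quoted above (mass, speeds, stroke time, spring tonnage; everything else follows from them):

```python
# Sanity-check of the catapult figures: 30,000 kg to 75 m/s in 1.5 s,
# 65 m/s of that from the catapult, two 30 t springs at 1.25 MJ/t.
mass_kg = 30_000.0
v_end = 75.0           # m/s total end speed (catapult 65 + engines ~10)
v_catapult = 65.0      # m/s contributed by the catapult
t_stroke = 1.5         # seconds

accel = v_end / t_stroke                     # ~50 m/s^2, just over 5 g
stroke_m = 0.5 * accel * t_stroke**2         # ~56 m, matching "56m to 75m"

ke_total = 0.5 * mass_kg * v_end**2          # ~84 MJ at end of run
ke_catapult = 0.5 * mass_kg * v_catapult**2  # ~63 MJ; "~75 MJ" with losses

spring_mj = 60 * 1.25                        # 60 t of springs at 1.25 MJ/t
rewind_s = spring_mj * 1e6 / 5e6             # 5 MW motor: 15 s, ~20 s with losses

# 500 sailors rewinding 75 MJ in 12 minutes is a sustainable ~208 W each:
watts_per_sailor = 75e6 / (500 * 12 * 60)

print(f"accel {accel:.0f} m/s^2 (~{accel/9.81:.1f} g), stroke {stroke_m:.1f} m")
print(f"KE total {ke_total/1e6:.1f} MJ, catapult share {ke_catapult/1e6:.1f} MJ")
print(f"springs store {spring_mj:.0f} MJ, motor rewind {rewind_s:.0f} s, "
      f"{watts_per_sailor:.0f} W per sailor")
```

The catapult's share of the kinetic energy comes out a bit over 63 MJ, so "around 75 MJ" leaves a sensible margin for friction and cable losses, and the 5 MW motor rewinds the full 75 MJ spring store in 15 seconds flat, or about 20 with losses.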
The whole spring/motor assembly including mountings would weigh about 320 tons, and it would be quite dense, so it might usefully be mounted near the bottom of the ship providing some ballast, and the energy transferred to the flight deck by cables in ducts.
Similar energy/speed fast-moving cables in ducts are used every day in the arrestor gear of aircraft carriers. The spring assembly could of course be mounted almost anywhere in the ship, as required.
You might want two separate housings, one for each of two catapults, mounted in different locations, with independent cables and ducts to each catapult, for operability, combat damage etc reasons.
In extremis, you could probably make one from old car suspension springs from the scrappy - 60 tons of springs, £70 per ton, that's £4,200. And no, I am not kidding. Lose ship's power? Put five hundred burly sailors on the windlasses, if they still have those aboard, and they'll rewind them in about 12 minutes.
Billion-pound electric catapults? Pah!
It is not unusual for US law and US Courts to claim jurisdiction anywhere in the world, eg they do this over the taxpaying requirements of US citizens.
Microsoft's statement is probably true in terms of US law, but it isn't quite as straightforward as it might seem.
I imagine it goes something like this: suppose a US Government demand for data is made, and a Court order is made. The US branch office cannot obtain the data themselves, so they ask the UK office. The UK office says no.
What can a US Court do to enforce the order? A very long story, but in the end, nothing substantial. So while they may claim jurisdiction, it doesn't mean much.
To address the wider issue, what Microsoft are _really_ upset about is clouds. First, some law:
Data Protection Act, Schedule 1 part 1, principle 7:
Appropriate technical and organisational measures shall be taken against unauthorised or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data.
Data Protection Act, Schedule 1 part 2 section 11: Interpretation of the seventh principle,
Where processing of personal data is carried out by a data processor on behalf of a data controller, the data controller must in order to comply with the seventh principle—
(a) choose a data processor providing sufficient guarantees in respect of the technical and organisational security measures governing the processing to be carried out, and
(b) take reasonable steps to ensure compliance with those measures.
Another bit of law, about the WTO, but I don't have details to hand - if measures are taken by one country for the purpose of providing data security, they are not actionable under the WTO, even if they restrain trade etc.
And what it comes down to is this: Microsoft say that encryption and their "best practices" provide better security against unauthorised processing than let's say only keeping the data in a local office.
(the data controller is the only person capable of granting authorisation, as the requirement to follow the principles is upon him and no-one else, that's DPA section 4(4) I think offhand).
Which, if Microsoft were correct about the US Government's ability to demand data, would be immediately obvious nonsense - rather than the slightly-less-obvious nonsense it is.
(a UK data controller is required by law to protect personal data in his control against the US government as well as spammers and identity thieves. He's also required to protect it against the UK Government, who if they want it must get it through him).
It's long past time that the UK (and EU/EEA) Information Commissioners gave clear guidance that personal data cannot be stored in clouds. Full stop.
Whether the Police looking at a Blackberry archive is interception or not is actually very much more complicated than what has been said here. I won't go into the gruesome details, but it could go either way.
However, even if it is interception, the Police can still do it. According to RIPA S.1(5)(c) interception is lawful if "it is in exercise, in relation to any stored communication, of any statutory power that is exercised (apart from this section) for the purpose of obtaining information or of taking possession of any document or other property".
The police would be looking at stored communications if they looked at a Blackberry archive, presumably using a statutory power under PACE. The question of whether it's stored so the intended recipient can access it or not doesn't come into it at all.
That may be an important distinction for journalists and ordinary people, but not for the Police - if it's a stored communication they can access it under PACE, no matter whether doing so is interception or not.
There is another question to be looked at though - Would Blackberry keeping an archive of messages be interception? Undoubtedly, yes it would be, see S.2(2)(b).
Would that interception be lawful? S.3(3)(b) says it would be if "it takes place for purposes connected [...] with the enforcement, in relation to that service, of any enactment relating to the use of postal services or telecommunications services."
Afaik there is no enactment forcing Blackberry to keep Blackberry messages, so it's probably illegal for Blackberry to keep an archive [*]. But not for the Police to look at it.
[*] unless it's "for purposes connected with the provision or operation of the service" - eg an archive which intended recipients can access. However if Blackberry started keeping an archive specifically so it could be accessed by the Police then they would be intercepting, and doing so illegally.
Albion's Balloon Launched Aircraft.
Not really SF, no "M", and eeeewwwww!
But it'd be filmable.
Biting the hand that feeds IT © 1998–2017