Because if you think you have the message, you probably won't bother digging any further.
Cryptography is 'becoming less important' because of state-sponsored malware, according to one of the founding fathers of public-key encryption. Turing award-winning cryptographer Adi Shamir (the S in RSA) said the whole basis of modern cryptography is under severe strain from attacks on security infrastructure such as the …
In the Second World War if you had good crypto protecting your communication you were safe. Today with an APT sitting inside your most secure computer systems, using cryptography isn't going to give you much protection.
Not a fair comparison. How good would your WWII crypto be against a bug or a double agent right there in your office?
@AC15: It wouldn't be at all good against that. Which is exactly the point: Shamir is arguing that it is much easier to insert a virtual bug today than it was to insert a physical one back then.
I find it useful to distinguish between: 1. "Encounter" (the user somehow comes upon malware), 2. "Insertion" (malware getting through outside defenses), 3. "Operation" (malware executing), and 4. "Infection" (malware entering persistent store to run in later sessions). Given our decades of experience, it seems unlikely that we can prevent the Encounter or Insertion, which of course will allow Operation for the remainder of that one session. However, it is within our power to reduce the probability of Insertion, and also to almost eliminate Infection.
If we were to somehow start with a clean system, and started a new session only to work with a particular site, and then ended that session by turning the equipment off, there is little likelihood of Encountering malware, and little time for it to work. Each session would start anew. But systems with hard-drives (and flashable BIOSes) tend to *accumulate* infection, so that, eventually, we expect those systems to *all* be infected. I strongly dispute the idea that it is possible to maintain security while having infection at OS or BIOS levels (or below). The problem is the bot(s), and the solution is to not have a bot.
Now, if we can gird our loins to somehow forego the seductive attractions of frequent persistent change in our system; if we can use a semi-static OS and BIOS (etc.), then every time we start a new session, we will be clean. By not saving infection, we can avoid bot-like malware, the worst of which generally requires Infection.
Ideally, our OS and hardware designers would make our systems "difficult or impossible to infect," which they clearly have not done. Perhaps they have their own reasons. However, on our own, we can: 1. remove all hard drives (and flash drives) from systems intended to be online-secure, and 2. use hardware which contains only a single BIOS (etc.) which can be re-flashed (and, thus, re-secured) in one go. (It may not be possible to re-flash both the motherboard BIOS and (e.g.) a video BIOS without the other gaining control.)
We can work online by loading the OS into RAM from DVD, then removing the DVD and running from RAM (as in various Puppy Linux varieties). (In contrast, booting from a USB flash drive essentially returns to the dangerous writable boot drive concept. The mechanical aspects of DVD writing make malware writes slow and apparent, and then *impossible* with the DVD removed. A USB flash drive with a write-enable switch inevitably becomes as weak as a normal flash drive the instant the switch is flipped for update.)
Working in a thin-client environment does require some adaptation, but actually is much easier than one might think. For general browsing, I like thin-client Puppy just fine. For online banking, I do a full power-off reboot before and after that session. For watching Netflix, I use the insecure hard-drive-based (and probably infected) Win7 system in the living room, just like everyone else.
What would you rather have, an agent who can read some of the messages and maybe send a summary, possibly a copy of some of the messages? How about a copy of all of the messages, without worrying about your source getting caught and turned/shot?
Wrong, that's exactly not the point. The actual point would be: just because cryptography appeared, have we stopped using physical security, safes and vaults, security agents and such? That's not what it looks like from here. I agree malware does shift the rules we play by, but that in no way means crypto isn't indispensable or useful anymore. More than ever, it very much is. And sealing off the section where you do your cryptography from the section that's under potential malware threat is not impossible to do - as long as you take the trouble to explain to your staff that plugging in USB drives "found on the street" is not acceptable...
" if we can use a semi-static OS and BIOS (etc.), then every time we start a new session, we will be clean"
Certainly the writeable boot drive and a locally rewriteable OS are a fundamental security flaw that encourages APT. This could be dramatically mitigated if security is a priority, but most companies simply aren't sufficiently motivated to do so - they continue to use Windows, despite its appalling track record on all matters of security, and then act all surprised when they get hacked. They could use a (not perfect, but far better secured) *nix environment with acceptable productivity solutions, but that's still too much like hard work.
In a really secure environment there's surely no value in allowing users to change settings, and the absolute modicum of OS data (if any) that does need saving should be treated as data, not as part of the OS. And IMHO there's no good reason for allowing the user or the system to set new (or rewrite) executables or associated files ever. All this could be done, and done fairly easily if security is your priority. Clearly it isn't for many government, defence, commercial and intelligence operations.
Note that like all security measures, having a clean boot OS simply makes APT attacks more difficult. It makes no difference to the desire to compromise systems, or the resources being deployed to achieve that, and you therefore have to then consider your antagonist's plan B, C, D, etc.
"In the 20th century if you wanted to know the plans of Hitler during the Second World War you had to listen to the communication and break the crypto, this was an NSA-type operation."
Actually, Bletchley Park and Enigma to the contrary notwithstanding, arguably the most vital espionage of WW2 was done the old-fashioned way. The Red Orchestra and especially the Lucy spy ring (https://en.wikipedia.org/wiki/Lucy_spy_ring) kept the Soviet high command supplied with a steady stream of up-to-date information about the decisions of their German opposite numbers.
Indeed, it has been convincingly argued that the German defeat in the climactic battle of Kursk was partly due to Hitler's insistence on planning every detail at his own headquarters (ironically in the name of security). Everything that happened there was known to Stalin within hours, whereas the Soviets generally knew little about plans made at German field headquarters.
See also Richard Sorge (https://en.wikipedia.org/wiki/Richard_Sorge)
It is that the data is exfiltrated before crypto is applied. Different thing..but a valid point.
It requires a change in thinking for storage - the 1TB file for secrets is the electronic equivalent of what I used to have to do: tie USB sticks to a piece of 4x2 before couriering them
You basically have 3 choices as far as I can tell:
- encrypt the data at rest so if exfiltrated it is protected for the lifespan of the information's use
- separate your networks to prevent malware/exfiltration, either through airgaps or through one of the emerging content gateways that split workstations from the internet with more clever 'proxy' behaviour
- split up your critical information so it has to be pieced back together
or 4, if possible assume your data is already compromised and manage the outcome so it is not business destroying when it happens.
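Option 3 (splitting up the critical information so it has to be pieced back together) is, fittingly, the idea Shamir himself formalised as secret sharing. A minimal toy sketch of the simplest variant - an XOR-based n-of-n split in Python - is below; the function names are my own and this is an illustration, not a production scheme (a real deployment would use proper threshold secret sharing):

```python
import secrets

def split_secret(secret: bytes, n: int) -> list[bytes]:
    """XOR n-of-n splitting: all n shares are needed to reconstruct;
    any n-1 shares together reveal nothing about the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    final = secret
    for share in shares:
        # fold each random share into the secret to form the last share
        final = bytes(a ^ b for a, b in zip(final, share))
    return shares + [final]

def combine_shares(shares: list[bytes]) -> bytes:
    result = bytes(len(shares[0]))  # all-zero bytes
    for share in shares:
        result = bytes(a ^ b for a, b in zip(result, share))
    return result
```

Each share can then live on a different machine or site, so an APT has to compromise all of them rather than exfiltrate one small file.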
Imagine a standard office network, but no people in the office...
... now try your spear phishing campaign, or getting a PC behind the corporate firewall to install that malware.
Technology is a tool - you need to educate people about how to use it.
You're entirely right, but I find a theory that we can educate people into permanent flawless behavior amazingly optimistic. Certainly I am nowhere near clever enough to never make mistakes.
Flawless? No. Better? I would like to think so.
Sure, better is always possible and better is always good.
The problem with security relying on 'better' is that one single error, made one single time, is enough. Security by Perfect Human is a very dangerous illusion.
Of course, none of that is saying that we shouldn't always try to become better, you're 100% right there.
true, thing is, this part of security (a single mole will blow your security) hasn't changed since ROT13 was "state of the art cryptography"
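For anyone who has somehow never met it: ROT13 is just a fixed 13-place Caesar rotation, so applying it twice gets you back where you started - which is exactly why it's a punchline and not crypto. Python even ships it as a codec:

```python
import codecs

msg = "attack at dawn"
scrambled = codecs.encode(msg, "rot13")          # 'nggnpx ng qnja'
assert codecs.encode(scrambled, "rot13") == msg  # ROT13 is its own inverse
```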
Security by Perfect Human is a very dangerous illusion
I've been arguing for years that that is exactly what many people in security are relying on, hence the abysmal failure in many corporate setups to keep things secure. I see rules and processes that assume people have somehow magically turned into robots, and the result is far from fault tolerant.
Personally I think it smacks of sheer arrogance by the authors; pretending that they never have a pre-caffeine moment is not going to fly with me.
Security and IT share one critical problem: it's often forgotten that it starts and ends with human beings.
Anon, because I'm well known for that argument and I don't want to burn my handle just yet :)
"...I find a theory that we can educate people into permanent flawless behavior amazingly optimistic."
Especially as most security precautions fly directly in the face of all accepted norms of good, ethical human behaviour.
Sharing, helping others, openness, efficiency... security blocks or reduces all of these. To be good at security, you need to learn a whole new set of reflexes that are completely antithetical to our notions of being a good social or corporate citizen.
That's why social engineering exploits are so relatively easy - they go with the grain of human nature.
Today the crypto "standards" employed by the majority of the world (DES, AES, etc.) were written by the Yank NSA. No one can convince me that they would do this and not have a back door.
By law, the NSA cannot monitor domestic traffic, so if you send an email from NYC to Chicago, it is routed through Toronto. Do you think this is coincidence?
The scary part are the laws that have not been made public. The post-911 Patriot Act style provisions that are still classified.
DES was originally designed by IBM (as Lucifer) though admittedly weakened by the NSA (from 64 bit to 56 bit), but not in secret. After decades of scrutiny no backdoor has yet been found.
The AES specification is the result of a worldwide public contest. The winning entry, Rijndael, was specified by Joan Daemen and Vincent Rijmen, two Belgian researchers. The AES finalists included entries from all over the world. It was organised by NIST, an agency of the U.S. Department of Commerce (and not the NSA).
Sorry, but if you think there are backdoors in AES, DES, RSA, DH, EC or SHA then you don't know anything about crypto.
It's just that some algorithms lend themselves to being broken by state organizations with lots of funding.
An example would be using extremely large rainbow tables with very expensive hardware.
How can we be certain that Windows is not purposefully bugged to help the spooks get right into the computer and have access to your machine? I mean, for a second, can we be wholly certain that there ain't software put right into the OS that opens the whole computer to the secret services? Even a remote kill switch that can deactivate your hardware and scrap it? If the OS is itself a tool of the information services, any kind of defense is rendered totally ineffective. Think about it.
And before anyone says "I use open source and compile everything myself", see http://catb.org/jargon/html/B/back-door.html
Of course it is rigged!! but it just means some security agencies are able to get to your servers, nothing more.
it's just pointless willy waving to base your faith in code on the fact you compiled it yourself unless you also wrote the compiler.
AND reviewed the CPU architecture you compiled it on, and you intend to run it on.
Seriously, how many people have gone through every possible opcode on a pentium, and checked it does what it says it does.
Including the undocumented ones.
Not to mention the deliberate exploits the NSA put into quantum mechanics itself. You think you know what a transistor *really* does? Think again. They call it "spooky" action at a distance for a reason, you know.
"How can we be certain that Windows is not purposefully bugged to help the spooks get right in the computer and have access to your machine?"
Because the effort to backdoor the system is far outweighed by the likely move to more secure solutions if your enemy becomes suspicious of this. Simply by knowing too much and reacting accordingly, the US would give the game away. As soon as it is known (or believed) to exist, then all your opponents move to alternative and more secure operating systems, and the US would be worse off than continued use of a regular hackable but not back-doored OS. As a state actor with a reasonable amount of resource it wouldn't be a particular problem to take a secure Linux distro and brew your own even more secure version, so this isn't an idle threat.
Take Iran or Nork. If it were possible to remotely melt all their computers (or take full control, or simply remotely read everything) via a built in OS back door, don't you think they'd have used it by now? And in that event the Iranians/Norks would be able to work out that they'd been compromised, and have to change their approach to IT. And then the spooks suddenly go from knowing almost everything to knowing nothing. Conversely, if the spooks do have a back door, but won't use it now for fear of a greater need in the future, then when will they ever use it?
> it's just pointless willy waving to base your faith in code on the fact you compiled it yourself
No it isn't.
It may not be 100% proof positive that the code cannot be compromised - but it goes a long way down that road. It is *dramatically* better than just accepting that some piece of closed-source code is all it purports to be.
Imagine the actual series of events that needs to be put in place to compromise the compiler in a meaningful way: that compiler needs to detect and attack its target source even though that source is readily changing. Attacking the wrong source will likely pollute the data set and also get you caught. Not attacking the source means you don't get any data. Attacking in the wrong manner will likely fail and possibly get you caught - and you have to deal with the fact that the source in question is likely different from the one you wrote your attack against in the first place.
This is a very tiny likelihood of success. It's not *impossible* for a sufficiently-funded organisation, but it's damned difficult. Add in the fact that gcc compiles itself 3 times during a standard build run, and it's a tiny target. Compare and contrast to threatening a US company with all sorts of nasties if they don't build this backdoor into their code.
"Of course it is rigged!! but it just means some security agencies are able to get to your servers, nothing more"
That's where we part company. IMHO security is like liberty and virginity - you can't lose a little bit of it.
they have these secret invisible black helicopters that are completely silent unlike normal helicopters. If you are out on the street and you listen carefully you can sometimes hear voices coming from high above, but if you look up no-one is there! some people mistake them for ghosts but ghosts are heavier, walk on the ground and if they do talk they rarely say things like "target acquired" and "we've located the disk"
1 with internet, another without it.
The one without internet should also have its USB ports disabled.
Users should be unable to use USB ports, access Yahoo mail, Gmail, etc; no zip files, Flash or Java.
Only signed apps, and those should be checked by a SERVER, should run on a computer.
Better still: thin clients.
Cheaper to run, incredibly more secure.
"I want the secret of the Coca-Cola company not to be kept in a tiny file of 1KB, which can be exfiltrated easily by an APT," Shamir said. "I want that file to be 1TB, which can not be exfiltrated. I want many other ideas to be exploited to prevent an APT from operating efficiently.
Microsoft made this years ago by allowing you to embed Flash in a .doc file.
If you need a secure setup then keep the data on an external disk or usb stick that is only accessed by a diskless isolated computer that boots from a Linux Live CD. If that data has to be sent to another party then it is encrypted and the encrypted copy put onto a USB stick that is taken to an internet connected computer to be sent. It does not matter if the connected computer is compromised as it never sees unencrypted data. For receiving data, the same procedure is followed in reverse. Malware that is put onto the USB stick by the connected computer does not matter as it would just be ignored by the Linux OS (no autorun on most Live CDs).
(The diskless system must complete booting before the data disk is connected to ensure that there can be no persistent malware to upset the Linux OS.)
If this is done then only physical access, concealed camera or RF sniffing will reveal the data.
(For the diskless computer go to a shop that you have not used before and buy a display model - the chance of it having malware targeted at you and able to get round a Linux OS is effectively zero.)
Some BIOS viruses are capable of circumventing this.
An HP BIOS utility would drop a file in a Windows directory that would auto-run and phone home.
It's probably best to boot from a read-only/encrypted volume.
Protecting information from inspection by skilled and determined adversaries is exceedingly difficult. APTs are certainly a concern and unless you are someone able to build a secure system yourself, you should assume that it could be monitored.
Defense is generally more expensive than attack. Arguably, it is not effectively possible to defend against attack by opponents whose resources match your own.
The most able defense against invasions of privacy is legislation that makes ill-gotten information of no use to the attacker. That is a long discussion ...
Work that I have done over the years centers on what I call 'data packaging'. That includes security, redundancy, compression, etc. As you peel back the layers of this onion, you will eventually come to realize that creating secure keys is well nigh an intractable problem.
To have some reasonable hope of security you would have to tape out your own silicon and inspect it microscopically to ensure that it had not been tampered with. You would have to construct and bootstrap your own compilers. You would have to create secure keys with extremely sophisticated random pools and I think that most people with enough knowledge in this area would only certify sophisticated (ie exceedingly randomized) one time pads as being truly secure against cracking.
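The one-time pad mentioned above is trivial to implement; the entire difficulty lives in the pad itself - truly random, as long as the message, never reused, and securely delivered. A sketch (function names mine), using the OS's CSPRNG, which is itself only an approximation of the "extremely sophisticated random pools" the paragraph above demands:

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """One-time pad: pad is random, as long as the message, and must
    never be reused. Under those conditions the ciphertext is
    information-theoretically secure."""
    pad = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return pad, ciphertext

def otp_decrypt(pad: bytes, ciphertext: bytes) -> bytes:
    # decryption is the same XOR applied again
    return bytes(c ^ k for c, k in zip(ciphertext, pad))
```

Note that the code does nothing to solve key distribution: the pad is exactly as big as the traffic and has to reach the other end over some channel the APT can't see, which is the real problem.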
To protect against side-channel attacks, you would have to hide the working system ...
There is a problem of 'reachability' that I referred to in another comment. To get real security, you have to do a ton of processing so that the only way of obtaining the stream is to perform similar calculations. It becomes something of a race through vector space and the team with the best mathematicians and the most computing power has the advantage.
Shamir's mention of terabyte files is a variation on the above. However, in my opinion it would be too weak except for limited times. Unless you really understand the problem it is easy to underestimate how difficult it is. It looks as though Shamir has a deep enough understanding of the problem to realize that it requires extreme measures. It is easy to see how he would be pessimistic.
To properly secure systems, you need to erect very high walls on all sides. It needs to be impossible to socially engineer the keys to a system. Terabyte files are a start. The system has to be secure against side-channel attacks. It cannot be overly reliant on a single clever mathematical technique. Even the side that owns the information should have to expend significant resources to recover access to the use of keys.
Dumb stuff like the following is necessary:
Every single thing that is stored or moving needs to be strongly encrypted. Only that which must be in the clear should be in the clear. The reason for this is that if you only encrypt things of value you provide attackers with crucial information as to what they should attack. To that end, as systems become more secure, an increasing amount of their traffic and storage should be devoted to decoys. If (10^15-1) out of (10^15) items stored are decoys and most decoys have plausible data at dead ends several levels of encryption deep, it creates a significant barrier for an attacker.
When I say 'dumb', what I mean is that it is pretty obvious; but since nearly all traffic still passes in the clear, the message is evidently lost on most.
I expect that we will come to see a much more deeply secure network in the next decade or two. People may not understand the intricacies of security, but they *do* understand in a very visceral way the need to protect their persons and their estates.
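The decoy idea above needs one more ingredient: the owner must be able to find the real record among the 10^15 fakes without storing an obvious index (which would itself be a 1KB target for exfiltration). One hedged sketch, with illustrative names and parameters: derive the real slot from a keyed MAC, so only the key holder can recompute where the genuine item lives.

```python
import hashlib
import hmac
import secrets

def locate(key: bytes, n_slots: int, label: bytes) -> int:
    """Only the key holder can recompute which slot holds the real
    record; every other slot would be filled with a plausible decoy."""
    mac = hmac.new(key, label, hashlib.sha256).digest()
    return int.from_bytes(mac[:8], "big") % n_slots

key = secrets.token_bytes(32)
slot = locate(key, 10**6, b"formula")  # deterministic for this key/label
```

An attacker who steals the whole store still faces the decoys; an attacker who steals only the (small) key gets nothing without also exfiltrating the (huge) store - which is the asymmetry Shamir's terabyte-file remark is after.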
Surely the persistence of malware, if anything, should mean encryption is ever MORE important?
As ever, the problem with encryption is key management. But technologies to solve this problem (at least partially) are already widely available in the form of TPM, smart cards and ARM TrustZone. They're just not well integrated into OS platforms.
For TPM naysayers: yes, it *could* be abused by industry to restrict access, creativity and openness, but that doesn't mean the technology itself is "evil", any more than knives are just because they could be used for murder. Ultimately, if used sensibly, hardware protection is an excellent way of mitigating software attacks.
If Shamir is right I don't know what you can do to make the situation better.
Also I don't understand his coke example. The coke recipe can in all probability be accurately described on a letter/A4 page. Making it 1TB (with what, if not some kind of encryption tech?) is useless if your system is compromised and the APT can retransform it to 1KB.
And if, in effect, even the most secure locations and most isolated computer systems have been penetrated over the last couple of years by a variety of APTs and other advanced attacks, is there a single instance where no avoidable gross human error was made? Or are the APTs really so good that they could penetrate 0-error facilities with 0-error staff with ease? That would be pretty scary.
Yep. Effectively the 1KB => 1TB Coke example is just a transformation/encryption; the reverse step (decryption) surely isn't going to be performed by humans => a software program will do it => same weaknesses as using regular encryption.
(And even if humans do get the offsets to the relevant bytes out of the vault and type them in by hand, the 1KB file will appear anyway in memory.)
Or am I missing something?
Beer because that's what encryption usually leads me to thinking about.
One thing you CAN do is treat your corporate network like the internet. Don't trust any of it. Split it and firewall it.
But in the end data that is visible to humans who need it for their daily grind, is stealable data.
The fact that they happen to steal it using infected PC's is simply another twist on the same problem.
> Also I don't understand his coke example
I hope I don't, either. Because he appears to be advocating security by obscurity...
Shamir probably hasn't heard of the open source Linux OS, or even the licensable ARM core.
So no problem for governments or large corporations.
So encryption is a bad thing?
> So encryption is a bad thing?
No. Dressing up a 1KB file as a 1TB file is a bad thing, because the bulk of it will be stripped away without difficulty, leaving only the illusion of any security.
btw the fact that the Coke recipe is still secret gives hope that some simple, strictly enforced policies are still of enormous value security-wise.
There are many secrets in the world that remain secret for one simple reason. The information is never shared. The problem arises when you want to make information shareable. I question how much of the info that is out there now really needs to be secured.
Everyone likes the idea that they are responsible for keeping secrets safe and that their information has value, but that simply isn't the case. The catch is risk management and not trying to cover every possible scenario for every bit of data. Trying to secure everything makes everything less secure.
Thankfully the process machines at work aren't smart enough to need to be networked.
The PCs, with the company data on, loads and loads of the sort of stuff a company has, are networked. And connected to the internet. And since them-in-charge have USB keys in addition to their laptops, it looks as if they aren't locked down there either. Anybody can Skype, anybody can email and Yahoo, and probably anybody can open attachments too...
Icon because convenience trumps sense. If I was their sysadmin, things would be very different!
> Anybody can Skype, anybody can email and Yahoo
I used to have a customer with *severe* malware infestation problems.
The issue turned out to be one of the directors spending far too much time on one-handed websites on his company machine.
The successful solution in the end was to buy a new machine with a decent monitor on it, then install Fedora on it. This became known as "The Porn Machine", and that was its sole purpose.
Worked a treat...
I have a nice digital camera at the bottom of a toolbox. I uglified it; you would think it was a piece of garbage metal with gucky paper stuck all over it, the chrome scratched up on the sidewalk. Camera operates perfectly, but looks like shit.
Hollowed out books, I know it sounds funny, but DIY in this field is hard core stealth (just watch out for dust patterns, or being seen physically). Hollowed out wood, hollowed out cement, structures with hidden rooms, hollowed out rocks, hollowed out trees, hollowed out car, bike, travel-pod, gas tank, a removable panel on a jet, etc.
Homing pigeons, with coded messages on their little feet; isn't there an RFC?
Old WWI WWII code like that nonsense PBMAB IATGD WTFTYSL HALFXWAD == please bring me a beer I am thirsty god damnit what the fuck takes you so long (rest of the code "halfxwad" is stripped) Seems there was some article asking for crackers, and finally an old vet from the period answered the call. Well if you got people that use certain language, it's not hard to make a new language up. - G2H (got guns to hide)
aes16K (whatever happened to CKT?)
Home brew code.
Line of sight communications. Lasers, Old School Audio - 9' Wood Dishes (like satellite, point at each other lined up from mountain top to mountain top) , Flashlight Light Signaling,
RF. low power fm on weird freqs with modulation tweaks+spread spectrum which doesn't follow "FCC approved" patterns. Split Band / Channel TX/RX + encrypt + time sensitive data making decryption worthless.
Like Filesize, Time is a weapon.
Rat wired proprietary hardware with all the part numbers sanded off and new fake ones put on. a 4048 with an 8087 tag on it! Those ttl chips are cmos, the TI chips are analog devices, Panasonic for Motorola etc. Oh what fun!
Destructive Protection. a drillpress wired to drill through a hard drive on logic gated power to an SCR. electronic cart starting of devices, places. Flooding with water.
In your face Protection.
Classified ads, Printing and storing among a zillion tons of stuff
9' Wood Dishes == 9 foot wooden salad dish networks
Quantum processing hasn't scaled up, and I doubt it will because I think it is based on a mystic premise, so I don't regard this as a threat or a solution in the cryptography arms race. APT only works if it can get access to the unencrypted data or cryptographic keys, so keep key stores and raw data processing at privilege levels that APTs can't get to.
All my public network servers are run on restricted user accounts in isolated VMs, with monitoring security software, because I expect them to be attacked and they are, but it does the attackers no good :) The default of running all services at high privilege levels is stupid; it is asking for trouble!