Well, like, duh!
Why would you assume anything else?
Security experts testifying at hearings held by the US Senate Armed Services Committee on cybersecurity have warned that maintaining a perimeter to keep out spies is unsupportable, and that the US should assume that its networks have already been fully penetrated. "We've got the wrong mental model here," said Dr. James Peery, …
Why would you assume anything else?
What a surprise. It goes without saying that when you think your data is safe and untouchable, you are most at risk.
Complacency and arrogant self-belief are the flaw here.
Doesn't this go back to the old saying (paraphrased, as I'm beer-hazed): "The price of freedom is eternal ___VIGILANCE___"? (I bold that, vigilance, not restriction, law or invasion of self - though to some minds they're the same thing.) TBH there are few places better than the information arena to highlight that necessity. All the security in the world won't stop someone letting in a guy in a wheelchair who steals a pen from reception after waving his iPad in a secretary's face, rolls up to a secure door and wedges the lid in the hinge whilst pretending to look for the toilet, then slips in after the first person to open the door and finds himself in your secure records, filling his bags with all your loot.
Whenever I'm in a "secure" data centre I'm always wondering how, if I were a bad guy, I could get what I wanted. Beyond all the clever solutions, the best way is always to find someone who works there and threaten to hurt their family, and that's the kind of thing that's very hard to protect against.
The solution is defence in depth.
Perimeter protection is a good start, but then layer security around anything that needs protecting. Like the layers of an onion.
If you've a server that holds secret stuff, have a good role- or claims-based model that restricts access to those who need it. Use an application firewall to do URL whitelisting and filtering, packet inspection (including SSL breaks), and SSO (two-factor to Kerberos and the like) if needed. Make everyone (both internal and external users) go through that app firewall. Log EVERYTHING. Take it down to the network level (segmentation, TLS, and so on).
Like the man says. Assume that all clients have been compromised. Rebuild clean servers.
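As a sketch of what that role/claims-based layer plus "log EVERYTHING" might look like (the document names and claim strings here are made up for illustration; this is not any particular product's API):

```python
import logging

logging.basicConfig(level=logging.INFO)

# Each protected resource lists the claims a caller must hold.
# All names here (documents, claim strings) are hypothetical.
SECRET_DOCS = {
    "ops-plan.pdf": {"clearance:secret", "role:analyst"},
}

def access(user: str, user_claims: set, doc: str) -> bool:
    """Allow access only if the user holds every claim the document
    requires, and log EVERY decision, allowed or denied."""
    required = SECRET_DOCS.get(doc, {"deny:always"})
    allowed = required.issubset(user_claims)
    logging.info("user=%s doc=%s allowed=%s", user, doc, allowed)
    return allowed
```

The point of logging denials as well as grants is that the refused attempts are often the interesting ones when you later go hunting through the logs.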
It's not like defence in depth is new or anything.
Aside from the concept of defence in depth - itself a military one employed in WWII and the Cold War - one simple category of defence has been overlooked: isolate military nets from public nets and eliminate sneakernet. The first part is obvious, and one that Boeing and other corporations have also failed miserably to take on board. The second can be achieved with hard work and common sense, and it is just as obvious. When cameras and what have you can carry bootable cards (yes, one of my cameras carried a rescue system for a while, just for kicks; it could connect to a USB port and Bob's your Uncle [ http://subgenius.com/ ]), it should be no surprise that people have made their various forms of USB device bootable.
Back in the days of floppy discs, a consultant in a psychiatric unit where I was responsible for IT and research continually brought infected discs in. Eventually I caused a storm by physically disabling the floppy drives, because other techniques had failed. Disabling all USB, CD/DVD and floppy access is vital in military systems, as is restricting access to the internet. No internet machine should be in the same secure room as a secure machine, and so on. Somehow the digital age seems to have ushered in a lack of security that I find alien, having worked in very secure NATO offices for quite some time. Basic mistakes in vetting, control of information access and information security seem to be par for the course rather than exceptional; this is our undoing.
"Dr. Michael Wertheimer warned that the US is also facing an increasing intelligence gap, as not enough citizens have the skills of online defense. In 2010 there were just 726 computer science PhDs awarded to US citizens, and only 64 of them signed up for government service."
Phew. That means only 64 are faffing around in jobs of no great relevance to the advancement of security. Lucky.
As Gary McGraw said in an interview in IEEE Security & Privacy: "Ultimately I believe that the government is way behind when it comes to cybersecurity. But I also think that the Obama administration has made important progress since the days of the at-first-classified CNCI. They might even have caught up to 1996! Only 16 years to go."
Stupidity on a grand scale. The US used to be the innovator, the world leader in most areas of technology. Now we lag behind because pretty much every area within IT has been off-shored to Asia. If you like the idea that we will eventually rely on countries like China for our technological expertise, that all the people working on systems at all levels are doing so from another country, cool, you go with that. After all, what could go wrong with entrusting our financial, scientific and military systems to whatever country bids lowest? I don't care about immigrants working within our labor pool; I'm a strong believer in immigration, especially of highly qualified people who help bring us advances in medicine and science. What I care about is that ALL our jobs are being shifted outside of this country, where US laws have absolutely no say in what goes on and who has access to what. Not to mention a rapidly growing trend of replacing high-paying jobs with temporary contracts that have low pay and no benefits.
DAY GONE TOOK R JAWBS!
Asia is still only America's Factory. They do not yet develop anything the US and Europe don't.
The basic problem is that a PhD could mean a lot or nothing; it really depends on the subject. But when we have shipped most of the industrial base to Asia, we have definitely lost our future by simply getting rid of our manufacturing culture: the knowledge that comes from long years of experience, the essential synergies between design, manufacturing and so on. And no, before someone brings it up: Apple has little to prove otherwise. It's one thing to design MP3 players or a tablet down to the last bit (making it exclusive by buying up firms, like they did for the CPU, etc.), but when your margins come solely from using Chinese sweatshops then, well, you are only hoarding money, not industrial culture. Sorry, and I don't care if your designer is a Knight or not...
And yes, I agree, temp jobs are a death sentence for everyone except Wall St, who are pushing for ever-lower costs with ever-higher profits...
...TL;DR: it's once again Wall St and the financial parasites behind everything; they are slowly killing everything.
"DAY GONE TOOK R JAWBS!"
No they didn't, some bean counter gave them away to get a bigger bonus.
"I think we have to go to a model where we assume that the adversary is in our networks. It's on our machines, and we've got to operate anyway."
That feels intuitively like sound common sense. However, the devil (as always) is in the detail. Are they talking about systems which allow them to track the "enemy's" presence within the system and thereby control what he knows (or ensuring that what he gets access to is not quite as useful/accurate as he thinks it is) or are they talking "hardened" areas within the system which they believe they can succeed in keeping him out of? Or are they thinking of a combination of these types of strategy? Anybody got any suggestions?
The first clue is the term "Cyber" used by so-called security experts. In my experience, using that term in this context is proof of cluelessness.
See: mine from three years ago
Following that, do read the included link, as it is a rather important paper when it comes to this entire concept ...
Just because your OS of choice has been seen by a 'BUNCH OF PEOPLE' does not make it more secure or any less likely to have back doors in it. (I read both your links)
All code has been seen by a bunch of people. Open or closed, code will have vulnerabilities. There does not have to be an explicit 'backdoor'. Openness does not preclude that.
There is no moral high ground in using any OS. There's plenty going on you don't know.
"Just because your OS of choice has been seen by a 'BUNCH OF PEOPLE' does not make it more secure or any less likely to have back doors in it. (I read both your links)"
Actually, in principle, it does.
One of the key techniques developed and used by the people who built the Space Shuttle software was *exactly* that.
Multiple *eyeballs* on the same piece of code.
Likewise, putting "If userID = john-q-hacker copy(unencrypted_password_file, local_output)" in the source would also be pretty obvious.
Relying on the fact that the software *is* closed source is just another version of security-by-obscurity.
IT security is one area where *transparency* is the best policy. The *odds* are that the white hats outnumber the black hats and will find more bugs faster. SBO did not work for GSM, the Charlie Card, the KeeLoq remote car and garage door opener chips, or a bunch of other systems.
The (open source) DES standard stayed secure for *decades*, and people were able to recognize *when* it was starting to become insecure because they knew its computational complexity, just as with RSA key lengths.
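That "known computational complexity" is just arithmetic anyone can check. A back-of-the-envelope sketch (the trial rates are illustrative assumptions, not benchmarks of any real cracker):

```python
# Back-of-the-envelope: how long does a worst-case exhaustive key
# search take? DES's fixed 56-bit keyspace made its decline predictable.
def keyspace(bits: int) -> int:
    return 2 ** bits

def years_to_search(bits: int, keys_per_second: float) -> float:
    """Worst-case exhaustive search time, in years, at a given trial rate."""
    return keyspace(bits) / keys_per_second / (365.25 * 24 * 3600)

# At 10^9 trials/s a full DES sweep takes roughly 2.3 years; at 10^12
# trials/s (a dedicated cracking rig) roughly 20 hours. A 128-bit key
# at that same 10^12 rate is on the order of 10^19 years.
```

Because the cipher and its key length were public, everyone could plot those numbers against hardware trends and see the end coming, which is exactly the point about transparency.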
My point, AC old chap, is that the so-called "security experts" babbling at Congress have absolutely zero concept of the state of the tool-chain that built their OS. Any OS, not just my particular OS of the day. And any branch of the "expert" tree, for that matter, pro or con "the State is secure".
I'm not religious about it, I'm an equal opportunity haranguer.
Well, obviously the most secure method is to build their own OS, their own tool set, and their own applications, but back in the real world that's just pony talk.
Imagine some new weapon; tell Congress / Kremlin that the other side has it and you need to make a better one; Profit!
The system that is pwned is not the real system; the real one is air-gapped. All the 'foreign powers' have access to is the stuff 'they' let them have access to... Counter Intelligence.
Anyway, all the real data is transmitted by carrier pigeon.
Really? Where does its data come from? Keyboards? Where does its output go? Green and white listing paper? Come to think of it, how does it get its Windows Updates and its new AV definitions?
Or maybe it gets its input directly from a long-distance radar system and its output goes to an anti-missile missile launch control system. There's still connectivity there, with no air gap.
It may be "air gapped" from the wider Internet, but it's 2012 - really, what are the chances it has no exploitable connectivity at all?
I think it's hard to be clear when your tongue is firmly in your cheek. That was my take from Nergatron
And that's a little move they used to call "Strategic Deception"
At one time the breakdown in relations between the USSR and China was thought to be one of these by *some* sections of the Intelligence community.
Trouble is, it's hard to be ironic when half the commentards in your audience are happy to appear clueless in public. E.g. the Stuxnet-related comments on here a while back, saying "why is this connected to the Interwebs" (when it wasn't).
Actually, AC 12:42, Stuxnet was in fact connected to TehIntraWebTubes. Via a protocol known as "sneakernet". HTH, HAND.
Tell that to my roommate who was changing her 28 passwords every three days for two months while the military tried to track down the virus hitting their systems (it eventually turned out to be some variant of USB infector).
There's a flap for that.
Well done sir!
Pigeons have a whole different set of vulnerabilities: cats, 12-bore shotguns, hawks. "Pigeon packet lost error. Hit by car. Abort, Retry, Ignore?"
Been reading RFC 1149, have we?
*If* the Wikileaks suspect Bradley Manning (IIRC an Army PFC, not even an *officer* FFS) did copy all that diplomatic telegram traffic and walk it out on a Lady Gaga CD...
a) Why did he have permission to access this data?
b) Why does the Pentagon have access to State dept traffic in the *first* place?
And BTW how long ago was a Cybersecurity CEO appointed by DHS to oversee *all* IT security in the federal govt?
My gut feeling. There's *lots* of basic (but dull and detailed) work that can be done to tighten up security *everywhere* but it takes effort by *knowledgeable* people with support at a *senior* level to get change.
But then I suppose that's true of *every* major organization everywhere.
On the upside. It *should* be impossible to offshore this work to some Indian/Chinese/<Next great cheap labor hole>, right?
If anybody had inspected the logs periodically, Mr Manning would have been caught very early. The problem is all these "Executive" f$ckers who can't be bothered to set up processes that create serious security.
All they can do is to wring hands and fork over billions every year to Lockheed Martin, Raytheon, ITT and the rest of this mafia. They are still in search of the Silver Bullet when the solution lies in well-paid, non-overtimed and respected system administrators who have time to look at logs. And have time to write perl scripts for log analysis, as opposed to swearing at the latest incarnation of crap from Locktit Martin.
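The kind of log-analysis script the comment has in mind needn't be fancy. A minimal sketch (in Python rather than perl; the log format, field positions and threshold are all hypothetical):

```python
from collections import Counter

def flag_bulk_downloaders(log_lines, threshold=1000):
    """Return {user: count} for users whose download events exceed the
    threshold. Assumes one event per line in a made-up format:
    '<timestamp> <user> <action> <object>'."""
    counts = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 4 and parts[2] == "DOWNLOAD":
            counts[parts[1]] += 1
    return {user: n for user, n in counts.items() if n > threshold}
```

A sysadmin with time to run even something this crude once a day would have noticed one account pulling hundreds of thousands of cables long before the CDs left the building.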
"*If* the Wikileaks suspect Bradley Manning (IIRC an Army PFC, not even an *officer* FFS) did copy all that diplomatic telegram traffic and walk it out on a Lady Gaga CD..."
Sorry, but I don't see any downside to that. If it wasn't for Bradley Manning, we never would've known what kind of skeezy, reprehensible shit the USA was/is up to in Iraq and Afghanistan. You go, Bradley Manning.
"Sorry, but I don't see any downside to that. If it wasn't for Bradley Manning, we never would've known what kind of skeezy, reprehensible shit the USA was/is up to in Iraq and Afghanistan. You go, Bradley Manning."
Me either. I've no problem with the act or its results. It's just my sense of professionalism that was *really* p***ed off. Someone (presumably *several* someones) is *paid* to stop this happening, and it's pretty clear they did a *very* poor job.
I'm not going for the "They were clearly asking to be robbed" defense, but how is their incompetence *not* liable for some kind of disciplinary hearing, up to a court martial?
The policy is much more sound than trying to make things "more secure". For example, watching your system more closely so you know what's supposed to go where in terms of traffic, and paying attention to file access times and patterns, would be far more effective in terms of damage limitation than trying to keep "something" out.
It also coincidentally means paying a lot more people to sit there and just learn the system's behaviour on a normal day.
While the US government still has computer systems where you have to print out the evidence faster than the bad guy can delete it from the system, it's *doomed*.
What do you mean, 'that wasn't a documentary'?
"and only 64 of them signed up for government service" - ha! upgrade to 512, what's the problem...
64 PhD's should be enough for anybody
Re: -> Re:
"should be enough for anybody" -
not for some congressmen's appetite, mate.
i'll get some dinner
in chinatown (-:
that's where they are nearly ready to feed.
Bottom line: when it comes to disaster recovery (and this *is* a branch of that area), how *far* down the chain does your trust go?
Secure copy in offsite safe?
Hand assemble from *source* code?
Hand enter on front panel switches? (I know, but there are still *some* systems where that is *possible*.)
Key parts of Charles Stross's novels Accelerando and Glasshouse hinge on what happens when critical technology (which is also a *monoculture*) is totally compromised.
Couldn't happen IRL? Intel are keen to put AV in their chips, but just *suppose* some malware gets into the chip and locks out *any* attempt to erase it? Once you start putting *erasable* ROM on the processor chip, you open the possibility of something having first sight of *anything* running through the chip.
Movie plots. Electrical engineers have tools for inspecting chips that no software can manipulate. Think of electron beam probes.
Also, there is a large diversity of compilers "out there", and some of them are actually verified for correctness. My assessment is that the biggest threat is in the "COTS" bloat which everybody wants to use even for text messages. Certainly Android must be made to look as if it were "Top Secret"-capable, right?
There were, once upon a time, commercially available and supported operating systems and tools supporting things like mandatory (non-discretionary) access controls, compartmented mode workstations, and so on, known to meet standards defined by the US government and others. Things like Solaris Trusted Extensions and DIGITAL UNIX's MLS+, Trusted X, etc. I believe MLS+ is long gone, don't know about the Sun stuff.
But what, if anything, would be their current equivalent, and where would a verifiably correct compiler fit into this picture?
SELinux policies and labels and such may be part of the answer. Or may not.
Anyway, the long and the short of it is that the technology to manage this kind of thing securely has existed for ages, but the PHBs won't stand for the budget needed when the techies come up with the prices and timescales needed to Do It Right, especially after the management at the prime contractors add their overheads. Plus the usual IT suppliers get upset when Windows is ruled out.
So when they inevitably do the job the cheap/wrong way, the likes of Bradley Manning (and others less public) eventually have a field day, lots of hand-wringing goes on, but Cheap will doubtless continue to trump Right, even where it shouldn't.
Here is an example:
There has even been work by INRIA, Uni Dresden and others to completely verify compiler and RT kernel correctness. The above example is not a proof, but at least a formalized and strictly executed test set.
And yes, C/C++ do not help when it comes to correctness. That Euro-Socialist invention Pascal and its modern descendant Ada are much better suited to creating bug-free programs.
One more example: http://www.google.com/url?sa=t&rct=j&q=completely%20verified%20compiler&source=web&cd=11&ved=0CCUQFjAAOAo&url=http%3A%2F%2Fgallium.inria.fr%2F~xleroy%2Fpubli%2Fcompcert-backend.pdf&ei=J1RvT5i-PI7otQaaqrSrAg&usg=AFQjCNEqDZEkloQauDCUZKQzZqhZ_H_5vw&cad=rja
Ada might be a suitable language for designing in, given that it does incorporate concepts such as tasking.
When you get to the compiler, and in the world of safety critical systems, there's a *huge* difference between a compiler "verified for correctness" and "passing the ACV suite".
The 1989 ACV document you link to relates specifically to passing the ACV suite rather than being "verified for correctness" (extract at ).
The modern equivalent of that 1989 process is the Ada Compiler Validation Capability, but passing ACVC is still not proof that the compiler cannot generate incorrect code from valid input.
Check the gcc buglist for "incorrect code generated" bugs. Several hundred. GNAT Ada is gcc.
Now remind readers which Ada compiler has been "verified for correctness".
When an Ada compiler is the answer, it often means someone is asking the wrong question.
 "Testing was carried out for the following purposes: to attempt to identify any language constructs supported by the compiler that do not conform to the Ada Standard; to attempt to identify any language constructs not supported by the compiler but required by the Ada Standard; to determine that the implementation-dependent behaviour is allowed by the Ada Standard".
A marginally more recent ACV report on XD Ada (DEC Ada front end, SD Scicon 68K back end) can be found at http://www.dtic.mil/dtic/tr/fulltext/u2/a260614.pdf and in Section 1.3 you'll find a reasonable description of what the tests are for.
"Electronic Pearl Harbor II"?
There. Sorry, couldn't resist.
In theory what they say is true. In reality, a large-scale subversion can easily be detected at network choke points such as firewalls, file servers and proxies.
Just have processes in place which will monitor these choke points for unusual activity. For example, if one PC suddenly starts to upload lots of data onto gmail, call the user who owns that machine and ask for an explanation. Don't allow opaque SSL connections inside and out of your networks. Look into every kind of traffic on a periodic basis.
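A toy version of that choke-point check, flagging hosts whose outbound volume jumps far above their own baseline (the traffic records and the 5x multiplier are illustrative assumptions, not a recommendation):

```python
from statistics import mean

def anomalous_hosts(history, today, multiplier=5.0):
    """history: {host: [daily outbound byte counts]}; today: {host: bytes}.
    Flag hosts sending more than `multiplier` times their own average day.
    Hosts with no history default to today's value, so they aren't flagged
    until a baseline exists."""
    flagged = []
    for host, sent in today.items():
        baseline = mean(history.get(host, [sent]))
        if sent > multiplier * baseline:
            flagged.append(host)
    return flagged
```

Comparing each machine against its own history, rather than a global threshold, is what lets a quiet desktop suddenly uploading gigabytes to gmail stand out from a build server that does that every night.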
Certainly that will cost serious money, manpower and, most of all, well-trained and well-educated personnel. The Magic Bullet from your lovely defence contractor won't do that, but the right people with the right amount of authority and workload (read: no overtime, enough time for investigations) will ensure a very high security level. Experienced sysadmins know very well how they should police their network; it is just the "leadership" element who never give them appropriate time and authority to investigate and set up proper rules. In military terms, the NCOs know what to do but the officer corps does not want to listen or heed the advice of the NCOs.
Due to Leadership Ignorance, many excellent security techniques are not implemented in most organizations. Managers would have to enter a dialogue with system administrators, developers and users on the subject of Acceptable Security Techniques and Processes.
For example, excessively locking down a network will generate huge costs and provoke people to circumvent the rules by means of USB stick + sneakernet. But having an "Intranet" the size of the Malta internet is neither necessary nor secure. So all the relevant people should develop a reasonable compartmentalisation plan in large organisations.
Similar things could be said about sandboxing. But the inevitable "executive" reply will be that "we cannot do anything which would change the way people do their work". What they really want is to continue their ignorant hibernation, but they can't say that.