Well, like, duh!
Why would you assume anything else?
GJC
Security experts testifying at hearings held by the US Senate Armed Services Committee on cybersecurity have warned that maintaining a perimeter to keep out spies is unsupportable, and that the US should assume that its networks have already been fully penetrated. "We've got the wrong mental model here," said Dr. James Peery, …
Doesn't this go back to the old saying (paraphrased, as I'm beer-hazed): "The price of freedom is eternal ___VIGILANCE___"? (I bold that: vigilance, not restriction, law or invasion of self, though to some minds they're the same thing.) TBH there are few places better than the information arena to highlight that necessity. All the security in the world won't stop someone letting in a guy in a wheelchair who steals a pen from reception after waving his iPad in a secretary's face, rolls up to a secure door and wedges the lid in the hinge while pretending to look for the toilet, then slips in after the first person to open the door and finds himself in your secure records, filling his bags with all your loot.
Whenever I'm in a "secure" data centre I'm always wondering how, if I were a bad guy, I could get what I wanted. Beyond all the clever solutions, the best way is always to find someone who works there and threaten to hurt their family, and that's the kind of thing that's very hard to protect against.
The solution is defence in depth.
Perimeter protection is a good start, but then layer security around anything that needs protecting. Like the layers of an onion.
If you've a server that holds secret stuff, have a good role- or claims-based model that restricts access to those that need it. Use an application firewall to do URL whitelisting and filtering, packet inspection (including SSL breaks), and SSO (2-factor to Kerberos and the like) if needed. Make everyone (both internal and external users) go through that app firewall. Log EVERYTHING. Take it down to the network level (segmentation, TLS and so on).
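To make the layering concrete, here's a minimal sketch of the first two layers (URL whitelist, then role/claims check). Every name, prefix and role here is invented for illustration; it's not any particular firewall's API:

```python
# Toy sketch of two layers from the onion described above.
# ALLOWED_PREFIXES and ROLE_GRANTS are invented example data.
ALLOWED_PREFIXES = ("/public/", "/api/v1/reports/")
ROLE_GRANTS = {
    "analyst": {"read-reports"},
    "admin": {"read-reports", "manage-users"},
}

def firewall_allows(url: str) -> bool:
    """First layer: the app firewall drops anything off the URL whitelist."""
    return url.startswith(ALLOWED_PREFIXES)

def role_allows(role: str, claim: str) -> bool:
    """Second layer: role/claims-based check behind the firewall."""
    return claim in ROLE_GRANTS.get(role, set())

def request_permitted(url: str, role: str, claim: str) -> bool:
    # Defence in depth: the request must clear *every* layer.
    return firewall_allows(url) and role_allows(role, claim)
```

The point of the layering is that a hole in any single check doesn't expose the loot; the request has to pass them all.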
Like the man says. Assume that all clients have been compromised. Rebuild clean servers.
It's not like defence in depth is new or anything.
Aside from the concept of defence in depth (itself a military one, employed in WWII and the Cold War), one simple category of defence has been overlooked: isolate military nets from public nets and eliminate sneakernet. The first part is an obvious one, one that Boeing and other corporations have also failed miserably to take on board. The second can be eliminated by hard work and common sense, and it is so obvious. When cameras and whatnot can carry bootable cards (yes, one of my cameras carried a rescue system for a while, just for kicks; it could connect to a USB port and Bob's your uncle [ http://subgenius.com/ ]), it should be no surprise that people have made their various forms of USB device bootable.
Back in the days of floppy discs, a consultant in a psychiatric unit where I was responsible for IT and research continually brought infected discs in. Eventually I caused a storm by physically disabling the floppy drives, because other techniques had failed. Disabling all USB, CD/DVD and floppy access is as vital in military systems as controlling access to the internet. No internet machine should be in the same secure room as a secure machine, and so on. Somehow the digital age seems to have ushered in a lack of security that I find alien, having worked in very secure NATO offices for quite some time. Basic mistakes in vetting, control of information access and information security seem to be par for the course rather than exceptional, and this is our undoing.
"Dr. Michael Wertheimer warned that the US is also facing an increasing intelligence gap, as not enough citizens have the skills of online defense. In 2010 there were just 726 computer science PhDs awarded to US citizens, and only 64 of them signed up for government service."
Phew. That means only 64 are faffing around in jobs of no great relevance to the advancement of security. Lucky.
As Gary McGraw said in an interview in IEEE Security & Privacy: "Ultimately I believe that the government is way behind when it comes to cybersecurity. But I also think that the Obama administration has made important progress since the days of the at-first-classified CNCI. They might even have caught up to 1996! Only 16 years to go."
Stupidity on a grand scale. The US used to be the innovator, the world leader in most areas of technology. Now we lag behind because pretty much every area within IT has been off-shored to Asia. If you like the idea that we will eventually rely on countries like China for our technological expertise, that all the people working on systems at all levels are doing so from another country, cool, you go with that. After all, what could go wrong with entrusting our financial, scientific and military systems to whatever country bids lowest? I don't care about immigrants working within our labor pool; I'm a strong believer in immigration, especially of highly qualified people who help bring us advances in medicine and science. What I care about is that ALL our jobs are being shifted outside of this country, where US laws have absolutely no say in what goes on and who has access to what. Not to mention a rapidly growing trend of replacing high-paying jobs with temporary contracts that have low pay and no benefits.
The basic problem is that a PhD could mean a lot or nothing; it really depends on the subject. But when we have shipped most of the industrial base to Asia, we have definitely lost our future by simply getting rid of our manufacturing culture, the knowledge that comes from long years of experience, the essential synergies between design, manufacturing etc. And no, before someone brings it up: Apple has little to prove otherwise. It's one thing to be able to design MP3 players or a tablet down to the last bit (making it exclusive by buying up firms, like they did for the CPU etc), but when your margins come solely from using Chinese sweatshops then, well, you are only hoarding money, not industrial culture. Sorry, and I don't care if your designer is a Knight or not...
And yes, I agree, temp jobs are a death sentence to anyone except Wall st who are pushing for ever-lower costs with ever-higher profits...
...TL,DR: it's once again Wall St and the financial parasites behind everything, they are slowly killing everything.
"I think we have to go to a model where we assume that the adversary is in our networks. It's on our machines, and we've got to operate anyway."
That feels intuitively like sound common sense. However, the devil (as always) is in the detail. Are they talking about systems which allow them to track the "enemy's" presence within the system and thereby control what he knows (or ensuring that what he gets access to is not quite as useful/accurate as he thinks it is) or are they talking "hardened" areas within the system which they believe they can succeed in keeping him out of? Or are they thinking of a combination of these types of strategy? Anybody got any suggestions?
The first clue is the term "Cyber" used by so-called security experts. In my experience, using that term in this context is proof of cluelessness.
See: mine from three years ago
Following that, do read the included link, as it is a rather important paper when it comes to this entire concept ...
Just because your OS of choice has been seen by a 'BUNCH OF PEOPLE' does not make it more secure or any less likely not to have back doors in it. ( I read both your links)
THINK MAN!
All code has been seen by a bunch of people. Open or closed, code will have vulnerabilities. There does not have to be an explicit 'backdoor'. Openness does not preclude that.
There is no moral high ground in using any OS. There's plenty going on you don't know.
"Just because your OS of choice has been seen by a 'BUNCH OF PEOPLE' does not make it more secure or any less likely not to have back doors in it. ( I read both your links)"
Actually, in principle, it does.
One of the key techniques developed and used by the people who built the Space Shuttle software was *exactly* that.
Multiple *eyeballs* on the same piece of code.
Likewise, putting "If userID = john-q-hacker copy(unencrypted_password_file, local_output)" in the source would also be pretty obvious.
Relying on the fact the software *is* closed source is just another version of security-by-obscurity.
IT security is one area where *transparency* is the best policy. The *odds* are that the white hats outnumber the black hats and will find more bugs faster. SBO did not work for GSM, the CharlieCard, the KeeLoq remote car and garage door opener chips, or a bunch of other systems.
The (open source) DES standard stayed secure for *decades*, and people were able to recognize *when* it was starting to become insecure because they knew its computational complexity, like RSA key lengths.
My point, AC old chap, is that the so-called "security experts" babbling at Congress have absolutely zero concept when it comes to the state of the tool-chain that built their OS. Any OS, not just my particular OS of the day. And any branch of the "expert" tree, for that matter, pro or con "the State is secure".
I'm not religious about it, I'm an equal opportunity haranguer.
Really? Where does its data come from? Keyboards? Where does its output go? Green and white listing paper? Come to think of it, how does it get its Windows Updates and its new AV definitions?
Or maybe it gets its input directly from a long-distance radar system and its output goes to an anti-missile missile launch control system. There's still connectivity there, with no air gap.
It may be "air gapped" from the wider Internet, but it's 2012 - really, what are the chances it has no exploitable connectivity at all?
Trouble is, it's hard to be ironic when half the commentards in your audience are happy to appear clueless in public. E.g. the Stuxnet-related comments on here a while back, saying "why is this connected to the Interwebs" (when it wasn't).
*If* the Wikileaks suspect Bradley Manning (IIRC a Marine PFC, not even an *officer* FFS) did copy all that diplomatic telegram traffic and walk it out on a Lady Gaga CD...
a) Why did he have permission to access this data?
b) Why does the Pentagon have access to State dept traffic in the *first* place?
And BTW how long ago was a Cybersecurity CEO appointed by DHS to oversee *all* IT security in the federal govt?
My gut feeling. There's *lots* of basic (but dull and detailed) work that can be done to tighten up security *everywhere* but it takes effort by *knowledgeable* people with support at a *senior* level to get change.
But then I suppose that's true of *every* major organization everywhere.
On the upside: it *should* be impossible to offshore this work to some Indian/Chinese/<next great cheap labor hole> outfit, right?
If anybody had inspected the logs periodically, Mr Manning would have been caught very early. The problem is all these "Executive" f$ckers who can't be bothered to set up processes which create serious security.
All they can do is to wring hands and fork over billions every year to Lockheed Martin, Raytheon, ITT and the rest of this mafia. They are still in search of the Silver Bullet when the solution lies in well-paid, non-overtimed and respected system administrators who have time to look at logs. And have time to write perl scripts for log analysis, as opposed to swearing at the latest incarnation of crap from Locktit Martin.
"*If* the Wikileaks suspect Bradley Manning (IIRC a Marine PFC, not even an *officer* FFS) did copy all that diplomatic telegram traffic and walk it out on a Lady Gaga CD..."
Sorry, but I don't see any downside to that. If it wasn't for Bradley Manning, we never would've known what kind of skeezy, reprehensible shit the USA was/is up to in Iraq and Afghanistan. You go, Bradley Manning.
"Sorry, but I don't see any downside to that. If it wasn't for Bradley Manning, we never would've known what kind of skeezy, reprehensible shit the USA was/is up to in Iraq and Afghanistan. You go, Bradley Manning."
Me either. I've no problem with the act or its results. It's just my sense of professionalism that was *really* p***ed off. Someone (presumably *several* someones) are *paid* to stop this happening and it's pretty clear they did a *very* poor job.
I'm not going for the "They were clearly asking to be robbed" defense, but how is their incompetence *not* liable for some kind of disciplinary hearing, up to a court martial?
The policy is much more sound than trying to make things "more secure". For example, watching your system more closely so you know what's supposed to go where in terms of traffic, and paying attention to file access times and patterns, would be far more effective in terms of damage limitation than trying to keep "something" out.
It also coincidentally means paying a lot more people to sit there and just learn the system's behaviour on a normal day.
Bottom line: when it comes to disaster recovery (and this *is* a branch of that area), how *far* down the chain does your trust go?
Secure copy in offsite safe?
Hand assemble from *source* code?
Hand enter on front panel switches (I know but there are still *some* systems where that is *possible*).
Key parts of Charles Stross's novels Accelerando and Glasshouse hinge on what happens when critical technology (which is also a *monoculture*) is totally compromised.
Couldn't happen IRL? Intel are keen to put AV in their chips, but just *suppose* some malware gets into the chip and locks out *any* attempt to erase it? Once you start putting *erasable* ROM on the processor chip, you open the possibility of something having first sight of *anything* running through the chip.
Forever.
Movie plots. Electrical engineers have tools for inspecting chips that no software can manipulate. Think of electron beam probes.
Also, there is a large diversity of compilers out there, and some of them are actually verified for correctness. My assessment is that the biggest threat is in the "COTS" bloat which everybody wants to use even for text messages. Certainly Android must be made to look as if it were "Top Secret"-capable, right?
Citation needed?
There were, once upon a time, commercially available and supported operating systems and tools supporting things like mandatory (non-discretionary) access controls, compartmented mode workstations, and so on, known to meet standards defined by the US government and others. Things like Solaris Trusted Extensions and DIGITAL UNIX's MLS+, Trusted X, etc. I believe MLS+ is long gone, don't know about the Sun stuff.
But what, if anything, would be their current equivalent, and where would a verifiably correct compiler fit into this picture?
SELinux policies and labels and such may be part of the answer. Or may not.
Anyway, the long and the short of it is that the technology to manage this kind of thing securely has existed for ages, but the PHBs won't stand for the budget needed when the techies come up with the prices and timescales needed to Do It Right, especially after the management at the prime contractors add their overheads. Plus the usual IT suppliers get upset when Windows is ruled out.
So when they inevitably do the job the cheap/wrong way, the likes of Bradley Manning (and others less public) eventually have a field day, lots of hand-wringing goes on, but Cheap will doubtless continue to trump Right, even where it shouldn't.
Here is an example:
http://www.dtic.mil/dtic/tr/fulltext/u2/a215201.pdf
Work has even been done by INRIA, Uni Dresden and others to completely verify compiler and RT kernel correctness. The above example is not a proof, but at least a formalised and strictly executed test set.
And yes, C/C++ do not help when it comes to correctness. That Euro-Socialist invention Pascal, and its modern descendant Ada, are much better suited to creating bug-free programs.
One more example: http://gallium.inria.fr/~xleroy/publi/compcert-backend.pdf
Ada might be a suitable language for designing in, given that it does incorporate concepts such as tasking.
When you get to the compiler, and in the world of safety critical systems, there's a *huge* difference between a compiler "verified for correctness" and "passing the ACV suite".
The 1989 ACV document you link to relates specifically to passing the ACV suite rather than being "verified for correctness" (extract at [1]).
The modern equivalent of that 1989 process is the Ada Compiler Validation Capability, but passing ACVC is still not proof that the compiler cannot generate incorrect code from valid input.
Check the gcc buglist for "incorrect code generated" bugs. Several hundred. GNAT Ada is gcc.
Now remind readers which Ada compiler has been "verified for correctness".
When an Ada compiler is the answer, it often means someone is asking the wrong question.
[1] "Testing was carried out for the following purposes: to attempt to identify any language constructs supported by the compiler that do not conform to the Ada Standard; to attempt to identify any language constructs not supported by the compiler but required by the Ada Standard; to determine that the implementation-dependent behaviour is allowed by the Ada Standard".
A marginally more recent ACV report on XD Ada (DEC Ada front end, SD Scicon 68K back end) can be found at http://www.dtic.mil/dtic/tr/fulltext/u2/a260614.pdf and in Section 1.3 you'll find a reasonable description of what the tests are for.
In theory what they say is true. In reality, a large-scale subversion can easily be detected at network choke points such as firewalls, file servers and proxies.
Just have processes in place which will monitor these choke points for unusual activity. For example, if one PC suddenly starts to upload lots of data onto gmail, call the user who owns that machine and ask for an explanation. Don't allow opaque SSL connections inside and out of your networks. Look into every kind of traffic on a periodic basis.
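That kind of choke-point monitoring can be sketched in a few lines of code. The thresholds and data shapes below are invented for illustration; a real deployment would feed it from firewall or proxy logs:

```python
# Sketch of choke-point monitoring: flag hosts whose upload volume
# today is far above their own historical baseline. The 3-sigma rule
# and the 1 MB floor are arbitrary illustrative choices.
from statistics import mean, pstdev

def flag_unusual_uploads(history, today, sigma=3.0, floor=1_000_000):
    """history: {host: [daily upload bytes, ...]}, today: {host: bytes}.
    Returns hosts whose traffic today exceeds mean + sigma * stdev."""
    flagged = []
    for host, bytes_today in today.items():
        past = history.get(host, [])
        if len(past) < 5:            # not enough baseline data yet
            continue
        threshold = max(mean(past) + sigma * pstdev(past), floor)
        if bytes_today > threshold:
            flagged.append(host)     # e.g. call the user and ask why
    return flagged
```

The flagged list is where the human comes in: a process, not a magic bullet, where someone actually rings the owner of the machine and asks for an explanation.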
Certainly that will cost serious money, manpower and most of all well-trained and well-educated personnel. The Magic Bullet from your lovely defence contractor won't do that, but the right people with the right amount of authority and workload (read: no overtime, enough time for investigations) will assure a very high security level. Experienced sysadmins know very well how they should police their network, it is just the "leadership" element who never give them appropriate time and authority to investigate and set up proper rules. In military terms, the NCOs know what to do but the officer corps does not want to listen or heed the advice of the NCOs.
Due to Leadership Ignorance, many excellent security techniques are not implemented in most organizations. Managers would have to enter a dialogue with system administrators, developers and users on the subject of Acceptable Security Techniques and Processes.
For example, excessively locking down a network will generate huge costs and provoke people to circumvent the rules by means of USB stick + sneakernet. But having an "Intranet" the size of the Malta internet is neither necessary nor secure. So all the relevant people should develop a reasonable compartmentalisation plan in large organisations.
Similar things could be said about sandboxing. But the inevitable "executive" reply will be that "we cannot do anything which would change the way people do their work". What they really want is to continue their ignorant hibernation, but they can't say that.
64 PhDs... means absolutely nothing. The problem isn't the number of PhDs that are turned out. The problem is the number of competent IT people that these schools produce, which is pretty close to maybe 2% of those that exit with an IT-related degree of any kind. And that 2% usually learned the truth of things on their own.
The education system, especially with regards to computer science, is an absolute joke. They don't know ANYTHING about security. They barely know how to turn on a friggin computer.
If you really want to fix this then the first thing to do is take a long hard look at how high school and college level courses teach about computers. They are either decades behind or, when trying to be current, teaching high level flavor of the month programming courses.
We use a fair number of interns and recent grads. The very first thing we do when they show up is let them know that everything they learned in school is wrong. Then we reeducate them on what security really means. And guess what: our focus has been inside the "border" for a long time.
True.
If the number of Computer Science PhDs being turned out this year is seen to be a significant concern, then they are missing the point. How many thousands of people are there out there without PhDs but with the skills and experience to help with the problem? It doesn't need a PhD in CS to provide a significant contribution to the 'war' effort.
And I would ask: how many PhDs does the global community of miscreants have?
People are starting to use the phrase "perimeter security" as though it were short-hand for a bad thing. It isn't. It is always a good idea to fend off the unsophisticated attacks at the border, if only as an exercise in noise reduction for whatever measures you have in place within.
The bad thing is to have nothing within. PS is necessary, but not sufficient.
Absolutely - securing network borders is as important as border controls in the real world. After all, a successful cyber attack will need to come from the outside, through the border, into your network, and then exfiltrate data back to the attacker (in the case of cyber espionage).
Looking at emails when they come into the network is an excellent idea, and much more than the "scan for viruses" technique can be applied. For example, baselining can discover the communication patterns of every person in an organisation. An attack would most probably fall outside this pattern, as the email source is a new one. That would trigger any proper network police force into looking at the email and detecting the malicious payload.
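The baselining idea might look something like this minimal sketch (the data shapes are invented for illustration): learn which senders each recipient normally hears from, then flag mail from a source never seen before for a closer look.

```python
# Sketch of email-sender baselining as described above.
from collections import defaultdict

def build_baseline(mail_log):
    """mail_log: iterable of (recipient, sender) pairs from history.
    Returns the set of usual senders for each recipient."""
    seen = defaultdict(set)
    for recipient, sender in mail_log:
        seen[recipient].add(sender)
    return seen

def is_anomalous(baseline, recipient, sender):
    """A sender outside the recipient's learned pattern deserves
    inspection before the payload ever reaches a desktop."""
    return sender not in baseline.get(recipient, set())
```

A real system would age out stale entries and rate the anomaly rather than treat every new correspondent as hostile, but the principle is the same.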
But of course every element "inside" the network border should be "hardened" and it should be expected that some devices will eventually be infected by malware. In this case a useful partitioning scheme will work wonders in containing the threat and giving (some) peace of mind.
Pwned: sad! And apathetic.
With most of their hardware made in China, and back doors being placed in that hardware at board level since the 90s (remember the motherboards and other hardware that flash-installed a Windows app? That was 10+ years ago), said "green zone" equipment goes through a firewall like a dose of salts, eliminating any perimeter defenses.
What the blazes is the mil net doing near the www anyway? ;| FFS
The game of misinformation is a good one, and watching it being played is like a slow game of poker.
+1 ; Levente Szileszky
" Re: The National Socialist Pride Gap
The basic problem is that a PhD could mean a lot or nothing, it really depends on the subject "
We need to put down greed and start manufacturing again, otherwise we will permanently transfer the skills and dollars offshore, which is what's happening.
Only it's quite popular on MBA courses. If a process is within some (stated) tolerance band *ignore* it.
Flag it otherwise.
Note this is not *just* for the perimeter. It should apply to *all* machines, including servers. Is log analysis software *that* difficult to use? Aren't most logs just line-oriented, *highly* structured text files, or am I missing something? I cannot believe MS "Log Parser" is the *only* tool that can do stuff like this.
The key question to ask is *why*?
Why is that PC sending 5000 emails an hour with *no* user logged in?
Why is program X sending to socket nnnn when no other *copy* of that program does so?
Why has user A issued 300k record delete requests on the company's main ERP system?
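Those "why" questions boil down to counting over line-oriented logs, which is a few lines in any scripting language. A hypothetical sketch (the four-field log format here is invented for illustration):

```python
# Toy log analysis: count actions per (host, user) and flag anything
# over a limit, e.g. thousands of emails an hour or bulk ERP deletes.
# Assumed log line format (invented): "timestamp host user action"
from collections import Counter

def count_actions(log_lines, action):
    """Count occurrences of an action per (host, user) pair."""
    counts = Counter()
    for line in log_lines:
        timestamp, host, user, act = line.split()
        if act == action:
            counts[(host, user)] += 1
    return counts

def flag_over(counts, limit):
    # Anything above the stated tolerance band gets flagged for a human.
    return [key for key, n in counts.items() if n > limit]
```

Which is the MBA point above in code: stay silent inside the tolerance band, flag everything outside it, and have a person ask *why*.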
Sure, if your operation is valuable enough (to *someone*), its penetration is *inevitable*.
But they still have to get the loot out. Or they can't get to the backup to corrupt it. Or they can only trash *some* of the database. *Provided* users, programs and processes are adequately boxed in.
Of course this is irrelevant unless *management* recognizes that *proper* security is an expense in both money and *convenience*. "Sorry, Mr CEO, but dumping your daughter's party pix to the office's A0 printer is no longer an option. You'll have to go to Kwickprint like everyone else."
As a whimsical aside are there *really* people out there who specialize in breaking into adult sites *just* to steal their contents? Do pornjackers really exist?
I guess Perl is much better suited than most other tools for analysing logs. Development speed is incredibly fast, and you can scale it into serious applications which are well structured and use object orientation. Just don't use all the implicit variables, and name every variable and procedure/method call in a useful manner.