In light of this, it's high time the NSA's SELinux underwent a proper audit.
Just like Microsoft, how can you trust Linux when key security components were designed and coded by these people?
There are red faces in Redmond after Edward Snowden released a new batch of documents from the NSA's Special Source Operations (SSO) division covering Microsoft's involvement in allowing backdoor access to its software to the NSA and others. Documents seen by The Guardian detail how the NSA became concerned when Microsoft …
For instance, take this well known Latin phrase dating back to Roman times:
"Quis custodiet ipsos custodes?"
Loosely translated, it means 'Who watches the watchers/who will guard the guards' etc.
The obvious solution is no secrecy--make it open to the public. But that's hardly likely. The writer and philosopher Albert Jay Nock pretty well sums up the problem in his 1935 book "Our Enemy, the State".
There's a link to a PDF copy of the book at Wiki.
> You mean the VMS security model that Cutler took from DEC to MS?
IMPLEMENT IT NAOW.
I have to confess I gave up on SELinux. I have had the item "learn about SELinux" on my agenda for the last 10 years or so, but I never find the actual time. And I'm not sure how it would help me.
Tears of distress...
You mean, an audit above and beyond every line of code being visible to anybody who pulls down the kernel source from git.kernel.org, including about 10 thousand very experienced programmers world-wide, many of whom work for governments not-at-all friendly to the US, who can evaluate the security impact of all that code?
Let me guess: your next post will be about how we have to distrust AES, because "the NSA maded it" (hint: no, the folks who created AES were Belgian mathematicians, and the algorithm was vetted by cryptographers around the planet before the NSA simply said "yes, that'll do, we approve using that").
OK, supposing you work for a government or a corporate not wanting to be seen to be friendly to the NSA.
Supposing you do find a security hole.
There's a choice: report it to the world, or stay quiet. If you stay quiet, you too may be able to exploit that hole. Go public and all the spooks lose the facility. Is the answer still obvious?
Not saying it's a likely scenario. But then none of this was considered likely six months ago, let alone 10+ years ago when the existence of NSAkey slipped out accidentally and was played down by the MS ecosystem.
"Not saying it's a likely scenario. But then none of this was considered likely six months ago, let alone 10+ years ago when the existence of NSAkey slipped out accidentally and was played down by the MS ecosystem."
Then you'd better start reading, hadn't you Mr AC.
Funny how these notions always seem to come from a) People posting AC and b) It's always someone else who needs to do it.
Stuff goes into Linux without it necessarily being very well understood by anyone other than the people who submit it. (The original XFS for example).
Not that many people even know how to use SELinux properly.
Maybe the NSA submitted it without any flaw, but it's so complicated that in the real world, most of the time it won't be configured properly.
There are binary blobs in the kernel; who knows what's in them?
I wonder if the 9 billion for Skype was just paid for by the US government.
Anyone remember the mysterious
BitKeeper push in which a "==" was actually a "=", opening a root-access backdoor, hey presto?
Software developers on Wednesday detected and thwarted a hacker's scheme to submerge a slick backdoor in the next version of the Linux kernel, but security experts say the abortive caper proves that extremely subtle source code tampering is more than just the stuff of paranoid speculation.
The backdoor was a two-line addition to a development copy of the Linux kernel's source code, carefully crafted to look like a harmless error-checking feature added to the wait4() system call - a function that's available to any program running on the computer, and which, roughly, tells the operating system to pause execution of that program until another program has finished its work.
"That's the kind of pub talk that you end up having," says BindView security researcher Mark 'Simple Nomad' Loveless. "If you were the NSA, how would you backdoor someone's software? You'd put in the changes subtly. Very subtly."
"Whoever did this knew what they were doing," says Larry McVoy, founder of San Francisco-based BitMover, which hosts the Linux kernel development site that was compromised. "They had to find some flags that could be passed to the system without causing an error, and yet are not normally passed together... There isn't any way that somebody could casually come in, not know about Unix, not know the Linux kernel code, and make this change. Not a chance."
On Wed, Nov 05, 2003 at 04:48:09PM -0600, Chad Kitching wrote:
> From: Zwane Mwaikambo
> > > + if ((options == (__WCLONE|__WALL)) && (current->uid = 0))
> > > + retval = -EINVAL;
> > That looks odd
> Setting current->uid to zero when options __WCLONE and __WALL are set? The
> retval is dead code because of the next line, but it looks like an attempt
> to backdoor the kernel, does it not?
>>"There are binary blobs in the kernel; who knows what's in them?"
No need to panic.
Those binary blobs are only loaded into some devices as their firmware, not run by the kernel itself. In fact, Windows drivers often do the same thing (except that in Windows, you cannot see the source even for the parts run by the kernel). Any code executed by the kernel in Linux has visible source, unless you use some abomination that requires a proprietary binary module, like ClearCase.
If you really want to avoid the blobs (at the cost of losing support for some devices), use one of the fanatically "libre" Linux distributions, like gNewSense, that configure them out.
Quote: "You mean, an audit above and beyond every line of code being visible to anybody who pulls down the kernel source from git.kernel.org..."
To put it bluntly, there are vast swathes of kernel code which are understood by ~5-10 people out there. There are whole arch/ trees where even fewer people fully understand all the fine points of how they function.
I have worked with various bits and pieces over the years. In each case, it took me half a year to get up to speed with the (rather small) areas I had to play with. None of them was anywhere near the complexity of SE linux.
So while the idea "it is in the open, someone should have noticed" has some merit, the idea "put some proper pros on it and do a proper audit" has considerable merit as well.
Just ditch it, and not just because of the NSA, but primarily because its obscene complexity actually threatens security rather than enforces it, since (as others have stated) so few understand it.
At best this results in distro-provided default policies that may or may not be secure, depending on who was paying attention at the time (and whether or not they have a hostile agenda), and at worst it results in users arbitrarily punching holes in security (or just turning it off completely) that they don't understand, because it's preventing them from getting something done, pretty much just like the way typical Windows users (and application vendors) treat firewalls.
Exactly the same goes for PolicyKit (e.g. the infamous Fedora incident), which has nothing to do with the NSA, AFAIK. In particular, note the hostile attitude of the maintainers toward security, and the users who complained about the lack of it, in the aforementioned example.
IMO "policy" based security is inherently dangerous, and moreover completely unnecessary on any Unix-like system, regardless of whether or not the NSA has any involvement, unless you're prepared to have ultimate trust in the only person who actually understands that policy.
> its obscene complexity actually threatens security rather than enforces it
It does not.
SELinux does not permit operations that would otherwise be disallowed; it further restricts operations to the contexts in which they should be performed.
In the event that the policy prevents an operation which ought to be allowed - usually because the files in question are local to this machine only, and have not been labelled at all - the operation fails. This is why so many people disable SELinux.
In the event that the policy permits an operation that should be prevented, that operation only succeeds if the underlying DAC permits it; in other words, it is *exactly* the same situation as if SELinux were disabled.
SELinux is very far from perfect, but it does not threaten security.
SELinux is a massive bodge on top of the massive bodge that is the Linux security model. It's 2013, and still Linux can't do basic things like constrained delegation properly, doesn't have dynamic access control, and relies on tools like sudo that are inherently insecure as they run as root.
It is about time that Linux was redesigned to integrate security from the ground up, much like Windows did with the launch of NT.
"integrate security from the ground up, much like Windows did with the launch of NT."
You mean the VMS security model that Cutler took from DEC to MS?
VMS is still available from HP if you try really really hard and don't mind running on an IA64 rather than something relevant.
VMS fundamentals have changed little since Cutler arrived at MS.
VMS is somewhat lacking in what some may consider "modern" features (e.g. unauthenticated code execution exploits, unauthorised privilege escalation exploits, exploitable buffer overflows, etc).
It may not even have a usable NSA backdoor, who knows.
Buy now, while stocks (and expertise) last.
They need to do better. Linux as it is today is insecure by default. Not just that: it is bloated and buggy due to the version-number race currently taking place. A fast pace of releases means more bugs are left unresolved or deferred to the next release.
A high version number does not guarantee quality, stable code. Far from it.
>"It's virtually pointless unless you encrypt everything first."
Then you don't want to read the Wired article on the NSA's new encryption-busting supercomputer and data retention facility in Utah. Standard AES encryption doesn't stand a chance:
ALL YOUR SECRETS ARE BELONG TO US
@Destroy - >"I don't believe that for an instant. You can't decrypt everything all the time."
Told you that you would NOT want to read the article.
Standard AES is vulnerable to the new supercomputers because they can do brute force attacks so much faster. Stronger methods of encryption will still rebuff these sorts of attacks - for now.
Standard AES is vulnerable to the new supercomputers because they can do brute force attacks so much faster.
They need to be much, much faster, unless there is some computational shortcut and/or additional information reducing the problem (there may be: AES crypto broken by 'groundbreaking' attack; Faster than simply brute-forcing).
As shown above, even with a supercomputer (50 PetaFLOPS, which is the wrong kind of oomph, but let it rest for now), it would take 1 billion billion years to crack the 128-bit AES key using brute force attack. This is more than the age of the universe (13.75 billion years). If one were to assume that a computing system existed that could recover a DES key in a second, it would still take that same machine approximately 149 trillion years to crack a 128-bit AES key.
Unless I misunderstand badly, to a first approximation, brute force recovery of a 128-bit key based on known plain text would be expected to take on the order of 2^128 ≈ 3*10^38 operations. If you can do 10^15 per second (probably a stretch even for the NSA) that would come down to about 3*10^23 seconds, or about 10^16 years, several orders of magnitude larger than the current age of the universe. Brute force seems likely to be a poor choice.
There may be undisclosed vulnerabilities in AES, and if so NSA may know them. Even more likely, there may be vulnerable (poor) AES implementations that make key recovery easy. And most likely, there are vulnerable systems that can provide access to either the key or the actual message.
You can do like the Russians and move back to typewriters:
Otherwise you're (we) are screwed. Anything you send online has to be assumed slurped. It is extraordinarily unlikely anyone will care or even read your data, but that's not really the point is it...
No, we don't have to be defeatist and go back to typewriters. Snowden successfully communicates using encryption; it's just a case of adding encryption to everything and using the OS he uses (Linux?).
There are problems, but they can be fixed.
Email encryption is a weak spot. What's missing is a simple OTR-style first-time key exchange for end-to-end encryption of email. The NSA scoops it all up and reads it. Identity documents, private comms, saucy pictures of your GFs, you name it, they spy on it.
We just need to be more on guard, develop open, verifiable encryption standards, and move away from known-backdoored US tech and UK comms transits.
How to stop NSA snooping
If you do not want NSA snooping or any other spy agency for that matter.
Go back to old school
Paper and pen hand delivered to the recipient
Ok it’s a bit labour intensive and does not work too well over long distances but “they” do not get to see your stuff.
You could take the risk of using a state run postal service.
Face to face meetings.
@AC 23:52 - >"If you do not want NSA snooping or any other spy agency for that matter.
Go back to old school
Paper and pen hand delivered to the recipient"
Pen and paper was hacked many, many years ago - possibly centuries ago. Simply get the written message from the pen impressions left on the next clean piece of paper on the tablet.
Biting the hand that feeds IT © 1998–2019