In light of this, it's high time the NSA's SELinux underwent a proper audit.
Just as with Microsoft: how can you trust Linux when key security components were designed and coded by these people?
There are red faces in Redmond after Edward Snowden released a new batch of documents from the NSA's Special Source Operations (SSO) division covering Microsoft's involvement in allowing backdoor access to its software to the NSA and others. Documents seen by The Guardian detail how the NSA became concerned when Microsoft …
You mean, an audit above and beyond every line of code being visible to anybody who pulls down the kernel source from git.kernel.org, including about 10 thousand very experienced programmers world-wide, many of whom work for governments not-at-all friendly to the US, who can evaluate the security impact of all that code?
Let me guess: your next post will be about how we have to distrust AES, because "the NSA made it" (hint: no, the folks who created AES were Belgian mathematicians, and the algorithm was vetted by cryptographers around the planet before the NSA simply said "yes, that'll do, we approve using that").
OK, supposing you work for a government or a corporate not wanting to be seen to be friendly to the NSA.
Supposing you do find a security hole.
There's a choice: report it to the world, or stay quiet. If you stay quiet, you too may be able to exploit that hole. Go public and all the spooks lose the facility. Is the answer still obvious?
Not saying it's a likely scenario. But then none of this was considered likely six months ago, let alone 10+ years ago when the existence of NSAkey slipped out accidentally and was played down by the MS ecosystem.
Stuff goes into Linux without it necessarily being very well understood by anyone other than the people who submit it. (The original XFS for example).
Not that many people even know how to use SELinux properly.
Maybe the NSA submitted it without any flaws, but given how complicated it is, they know that in the real world most of the time it won't be configured properly.
There are binary blobs in the kernel; who knows what's in them?
I wonder if the 9 billion for Skype was just paid for by the US government.
Anyone remember the mysterious push to the BitKeeper tree's CVS mirror in which a "==" was actually a "=", opening a root-access backdoor, hey presto?
Software developers on Wednesday detected and thwarted a hacker's scheme to submerge a slick backdoor in the next version of the Linux kernel, but security experts say the abortive caper proves that extremely subtle source code tampering is more than just the stuff of paranoid speculation.
The backdoor was a two-line addition to a development copy of the Linux kernel's source code, carefully crafted to look like a harmless error-checking feature added to the wait4() system call - a function that's available to any program running on the computer, and which, roughly, tells the operating system to pause execution of that program until another program has finished its work.
"That's the kind of pub talk that you end up having," says BindView security researcher Mark 'Simple Nomad' Loveless. "If you were the NSA, how would you backdoor someone's software? You'd put in the changes subtly. Very subtly."
"Whoever did this knew what they were doing," says Larry McVoy, founder of San Francisco-based BitMover, which hosts the Linux kernel development site that was compromised. "They had to find some flags that could be passed to the system without causing an error, and yet are not normally passed together... There isn't any way that somebody could casually come in, not know about Unix, not know the Linux kernel code, and make this change. Not a chance."
On Wed, Nov 05, 2003 at 04:48:09PM -0600, Chad Kitching wrote:
> From: Zwane Mwaikambo
> > > + if ((options == (__WCLONE|__WALL)) && (current->uid = 0))
> > > + retval = -EINVAL;
> > That looks odd
> Setting current->uid to zero when options __WCLONE and __WALL are set? The
> retval is dead code because of the next line, but it looks like an attempt
> to backdoor the kernel, does it not?
For instance, take this well known Latin phrase dating back to Roman times:
"Quis custodiet ipsos custodes?"
Loosely translated, it means 'Who watches the watchers?' or 'Who will guard the guards?'
The obvious solution is no secrecy: make it open to the public. But that's hardly likely. The writer and philosopher Albert Jay Nock pretty well sums up the problem in his 1935 book "Our Enemy, the State".
There's a link to a PDF copy of the book on Wikipedia.
Quote: "You mean, an audit above and beyond every line of code being visible to anybody who pulls down the kernel source from git.kernel.org..."
To put it bluntly, there are vast swathes of kernel code which are understood by maybe 5-10 people out there. There are whole arch/ trees that even fewer people fully understand in all the fine points of how they function.
I have worked with various bits and pieces over the years. In each case, it took me half a year to get up to speed with the (rather small) areas I had to play with. None of them was anywhere near the complexity of SELinux.
So while the idea "it is in the open, someone should have noticed" has some merit, the idea "put some proper pros on it and do a proper audit" has considerable merit as well.
"Not saying it's a likely scenario. But then none of this was considered likely six months ago, let alone 10+ years ago when the existence of NSAkey slipped out accidentally and was played down by the MS ecosystem."
Then you'd better start reading, hadn't you Mr AC.
Funny how these notions always seem to come from a) People posting AC and b) It's always someone else who needs to do it.
Just ditch it, and not just because of the NSA, but primarily because its obscene complexity actually threatens security rather than enforces it, since (as others have stated) so few understand it.
At best this results in distro-provided default policies that may or may not be secure, depending on who was paying attention at the time (and whether or not they had a hostile agenda). At worst it results in users arbitrarily punching holes in security they don't understand (or just turning it off completely), because it's preventing them from getting something done, pretty much just like the way typical Windows users (and application vendors) treat firewalls.
Exactly the same goes for PolicyKit (e.g. the infamous Fedora incident), which has nothing to do with the NSA, AFAIK. In particular, note the hostile attitude of the maintainers toward security, and the users who complained about the lack of it, in the aforementioned example.
IMO "policy" based security is inherently dangerous, and moreover completely unnecessary on any Unix-like system, regardless of whether or not the NSA has any involvement, unless you're prepared to have ultimate trust in the only person who actually understands that policy.
>>"There are binary blobs in the kernel; who knows what's in them?"
No need to panic.
Those binary blobs are only loaded into certain devices as their firmware; they are not run by the kernel itself. In fact, this is the same thing Windows drivers often do (except that in Windows, you cannot see the source even for the parts run by the kernel). Any code executed by the kernel in Linux has visible source, unless you use some abomination that requires a proprietary binary module, like ClearCase.
If you really want to avoid the blobs (at the cost of losing support for some devices), use one of the fanatically "libre" Linux distributions, such as gNewSense, that configure them out.
SELinux is a massive bodge on top of the massive bodge that is the Linux security model. It's 2013, and Linux still can't do basic things like constrained delegation properly, doesn't have dynamic access control, and relies on tools like sudo that are inherently insecure because they run as root.
It is about time that Linux was redesigned to integrate security from the ground up, much like Windows did with the launch of NT.
"integrate security from the ground up, much like Windows did with the launch of NT."
You mean the VMS security model that Cutler took from DEC to MS?
VMS is still available from HP if you try really really hard and don't mind running on an IA64 rather than something relevant.
VMS fundamentals have changed little since Cutler arrived at MS.
VMS is somewhat lacking in what some may consider "modern" features (e.g. unauthenticated code execution exploits, unauthorised privilege escalation exploits, exploitable buffer overflows, etc).
It may not even have a usable NSA backdoor, who knows.
Buy now, while stocks (and expertise) last.
> You mean the VMS security model that Cutler took from DEC to MS?
IMPLEMENT IT NAOW.
I have to confess I gave up on SELinux. I have had the item "learn about SELinux" on my agenda for the last 10 years or so, but I never find the actual time. And I'm not sure how it would help me.
Tears of distress...
They need to do better. Linux as it is today is insecure by default, and not just that: it is bloated and buggy thanks to the version-number race currently taking place. The fast pace of releases means that more bugs are left unresolved, or are only fixed in the next release.
A high version number does not guarantee quality, stable code. Far from it.
> its obscene complexity actually threatens security rather than enforces it
It does not.
SELinux does not permit operations that would otherwise be disallowed; it further restricts operations to the contexts in which they should be performed.
In the event that the policy prevents an operation which ought to be allowed - usually because the files in question are local to this machine only, and have not been labelled at all - the operation fails. This is why so many people disable SELinux.
In the event that the policy permits an operation that should be prevented, that operation only succeeds if the underlying DAC permits it; in other words, it is *exactly* the same situation as if SELinux were disabled.
SELinux is very far from perfect, but it does not threaten security.
You can do like the Russians and move back to typewriters.
Otherwise you (we) are screwed. Anything you send online has to be assumed slurped. It is extraordinarily unlikely anyone will care or even read your data, but that's not really the point, is it...
>"It's virtually pointless unless you encrypt everything first."
Then you don't want to read the Wired article on the NSA's new encryption-busting supercomputer and data retention facility in Utah. Standard AES encryption doesn't stand a chance:
ALL YOUR SECRETS ARE BELONG TO US
@Destroy - >"I don't believe that for an instant. You can't decrypt everything all the time."
Told you that you would NOT want to read the article.
Standard AES is vulnerable to the new supercomputers because they can do brute force attacks so much faster. Stronger methods of encryption will still rebuff these sorts of attacks - for now.
How to stop NSA snooping
If you do not want NSA snooping or any other spy agency for that matter.
Go back to old school
Paper and pen hand delivered to the recipient
Ok it’s a bit labour intensive and does not work too well over long distances but “they” do not get to see your stuff.
You could take the risk of using a state run postal service.
Face to face meetings.
@AC 23:52 - >"If you do not want NSA snooping or any other spy agency for that matter.
Go back to old school
Paper and pen hand delivered to the recipient"
Pen and paper was hacked many, many years ago - possibly centuries ago. Simply get the written message from the pen impressions left on the next clean piece of paper on the tablet.
Anyone know of an Outlook.com alternative that I can move my mail to, where my personal life won't be siphoned into a database? Already deleted skype.
First, forget the idea that there is such a thing as absolute security, and certainly that you can come near it for "free" (if "free" in reality means "paid for with personal details", then it isn't free, but I'll get off that soapbox now). Even companies that protect you may have to open the doors for a warrant; the clever idea is to use a company in a legal system that still works most of the time, because that would mean the US would be forced to follow due process: a cross-judicial request for assistance relies on the laws of the country the assistance is requested from.
As in corporate security, there is a whole span between "idiotically risky but cheap" (Google, Yahoo, Outlook) to "expensive but fully protected". A possible solution is simply to move things to Switzerland. A host is relatively cheap, and buying a domain through SwitchPlus there means a 3rd party cannot redirect your mail path either (non-US registrar - only leaves root server manipulation, and that's IMHO too big to go unnoticed). If you want to talk to friends safely, you just give them an account on your machine and make sure you use SSL for IMAP and SMTP. It means for about £100 a year you're set, and you can run your own webhost on top.
Although the Americans have been VERY hard at work to make you think it isn't a safe haven for data (because they can't afford you knowing this), it is a fact that Switzerland (a) is one of the last remaining functional direct democracies and (b) has privacy EXPLICITLY written into its federal constitution. I'll translate that for you: amending that fundamental a law is very, very difficult, because everyone would have to vote on it. An even better translation is: any hoster who reads your email without your permission doesn't get a slap on the fingers with a wet noodle; it means jail time. What is interesting is that that entry is actually fairly recent (1999), and thus includes telecommunications.
At the very top end you could get yourself an account with the only setup in the world that sells privacy-protected email, which means you'd enjoy legal protection, discretion and security managed by people that frankly scare me (I know some of them). But you pay for that: it's rumoured to set you back at least triple digits annually for the most basic service. It's where Special Forces, celebrities and VIPs have gone since the News of the World hacking affair.
It's actually a good question - I should write an article about this.
Standard AES is vulnerable to the new supercomputers because they can do brute force attacks so much faster.
They would need to be much, much faster, unless there is some computational shortcut and/or additional information reducing the problem (there may be: "AES crypto broken by 'groundbreaking' attack"; faster than simply brute-forcing).
As shown above, even with a supercomputer (50 PetaFLOPS, which is the wrong kind of oomph, but let it rest for now), it would take 1 billion billion years to crack a 128-bit AES key using a brute-force attack. This is more than the age of the universe (13.75 billion years). If one were to assume that a computing system existed that could recover a DES key in a second, it would still take that same machine approximately 149 trillion years to crack a 128-bit AES key.
I don't know of any that are surely safe, but I sense some enterprising people might now make millions by creating one...
Yes, but whatever you do, CHECK. I have already seen a number of people and companies that promise a lot, but fail even the most cursory check. A demanding market attracts snake oil vendors like no other. Also beware of tech-only solutions, because technology isn't actually the problem.
In addition, if you're looking for an enterprise-wide solution, you're wasting your time if your HQ is in the US. If you move your HQ outside the US, it's credible to have a US subsidiary which is simply barred from accessing any corporate resources other than those it needs for its business; but if your decision power resides in the US, you'll be at the mercy of whoever wants to play with FISA and the USA PATRIOT Act.
You can guarantee they have everything else. If you don't want The Man® to have the chance to look at your emails, the best bet is to use some Russian, Iranian or North Korean option; then at least the Americans will have to put some effort into spying on you (while being aware that Russia, Iran or the Norks are already doing it, probably without a court order).
On a side note, as I have stated before, I am not really worried. I am under no illusions that they can see what I am doing if they want; I am a realist, and anyone who thought this could not happen if they wanted it to is very naïve. The surprising thing for me is that they bothered getting any legal backing. And while I don't subscribe to the 'if you have nothing to hide…' mentality, the amount of time, money and manpower it would take to monitor everyone's email, phone calls, movements and online presence is ridiculous; it's more a case of 'I am so unimportant I really doubt they would bother'. That is, until Obama announces his new plan to combat unemployment and the NSA has 100 million new jobs going; then I will get worried…