Post a comment?
Are you mad? I wouldn't dare post a comment here in case the NSA had a backdoor...
Linux supremo Linus Torvalds has jokingly admitted US spooks approached him to put a backdoor in his open-source operating system. During a question-and-answer session at the LinuxCon gathering in New Orleans this week, Torvalds and his fellow kernel programmers were asked by moderator Ric Wheeler whether America's g- …
If Spooky Five Eyed Monsters have a backdoor into here on El Reg, they be desperately slow to realise what they should be doing to control the future with IT and media .... and that would indicate that they do not have the intelligence in-house to make good and better beta use of that which be shared freely in comments made here oft and at times on these leading tales with following threads.
More Herman Munster monster types than big scary master race types, methinks. ...... http://memecrunch.com/image/50915d72afa96f2b9a00006f.jpg
Is it mad or smart to post here if one would expect/hope/suspect/invite spooks to phish here and poach/net/capture some prime game with the simplest of ignorant megabuck lures/jackpot lottery prizes, which have no one asking awkward questions about instant flash wealth for services to be rendered .... or not to be rendered but to be held in temporary abeyance until such times as intelligent things will not collapse dumb systems and algorithms in a global flash crash?
So, he was joking then? Which would seem to make sense, if I were going to put a backdoor into an open source product, I'd compromise a distribution rather than the source.
How many people compile from source and then compare the binaries they've compiled with the ones supplied by the distribution? I'm guessing in the region of none; there are simply too many variables that would end up producing fractionally different binaries.
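A quick sketch of what that comparison would even look like, in Python (the file paths are made up for illustration): hash the local build and the distro's copy and compare. As the comment notes, without reproducible builds these digests will almost always differ for innocent reasons (timestamps, build paths, compiler versions).

```python
import hashlib

def sha256_of(path: str) -> str:
    # Stream the file through SHA-256 in chunks so large binaries
    # don't have to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical paths: a locally built kernel vs the distro's package copy.
# sha256_of("./my-build/vmlinuz") == sha256_of("/boot/vmlinuz")
```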
Many a true word spoken in jest.
He answered EXACTLY as I'd have answered in his position if I HAD been whispered to. Gave us as honest an indication as he could without breaking any secret court secret orders. Doesn't mean he yielded of course, just that it is (or would be) up to us to find them. (Try doing that with Windows/iOS/OSX/etc..)
"If I were the NSA, I would just have the "right" people placed in a company like RHEL, where the compiler could be doctored, and the doctored binary and clean source code could then be distributed.
Any recompile would, of course, inject Trojan horse code - regardless of how closely the source was inspected: Neither the compiler source, nor the project source code would contain any evidence"
But they'd also have to dodge an independent compile using another toolchain's compiler: one outside NSA control.
In the end, a compiler could probably be vetted a few times, down to the machine code, and its binary hashed a few ways (just in case the spooks have a way to create a preimage trojan for one of them; it would be statistically infeasible to tamper with the code AND match the hashes of two different hash families). Once that could be verified, you could compile against that one and establish a chain of trust showing the code wasn't tampered with without it showing up in the source. I don't think we're at the stage where we need such anal retention YET... but it's still an option.
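The "hash it a few ways" idea can be sketched in a few lines of Python: digest the same compiler binary with two unrelated hash families, so an attacker would need a simultaneous preimage against both to swap in a doctored copy unnoticed.

```python
import hashlib

def multi_digest(path: str) -> dict:
    # Hash one file with two independent hash families (SHA-2 and
    # BLAKE2). Matching both digests with a tampered binary would
    # require breaking both constructions at once.
    hs = {"sha256": hashlib.sha256(), "blake2b": hashlib.blake2b()}
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            for h in hs.values():
                h.update(chunk)
    return {name: h.hexdigest() for name, h in hs.items()}
```

Publishing both digests alongside a vetted compiler build is the first link in the chain of trust described above.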
It does mean "no", but more as a question: 'no?'. I had an Indian colleague who, although the lead architect, insisted on requesting agreement at the end of almost every sentence: 'no?'. I tried to stop her doing it, as it rather undermined her own arguments, showing what appeared to be a lack of confidence, no? It all comes down to the politeness of Indian society.
"He nodded his head", "He shook his head" — that is YES and NO. In the 50-plus countries I have been to, this is the only thing that never fails. It would surprise me if Bushmen, or whatever you could find in South America or the Pacific Islands, did not understand the difference. Either you are a troll, stupid, or just uneducated. Perhaps not totally uneducated, as you managed to write Bulgarians if not Finns.
As for this interview, it was honest, with no company advertising. Imagine a third sofa with Ballmer and Elop taking part.
>There's really only about three distributions, which are used as a forking point for most of the others. Red Hat, Debian and Suse, I'd go with that.
Well, make it Red Hat, Suse and xBuntu and you have 90+ percent of the Linux boxes compromised. And those are the packages most likely used "as is" rather than checked and recompiled, because they target the "I want to USE the machine, not fiddle with it" part of Linux.
Add in that all three are produced by companies, and that there are legal means to get a company to "play nice and shut up", and the chances increase that there is "something nice" in there, with the kernel devs none the wiser.
If I am going to the bother of compiling the binaries, why would I not simply use them, as in Gentoo? If I did either, could I be confident that nothing was missed in my code examination? What about the compiler, the linker, and so on? If I compared results to the distributor's, how would I decide whether a difference indicated a fault in the distributed binary, the source, or simply noise introduced by differences in the two Make environments?
The question ultimately resolves to one of trust: how far shall I trust the kernel and other developers, knowing that they are fallible and conceivably corruptible humans not all that different from me? Should I reckon them more or less trustworthy than those of Microsoft, Apple, or Google? Why?
For that matter, why should I consider The Guardian, Spiegel, The New York Times, the Washington Post, or even The Register more trustworthy than the US and UK governments and their accomplices in Canada, Australia and New Zealand? I have little personal knowledge of any of them, and all of them, whether government or press, may have motives for shading or spinning the truth. The documents I have seen are worrisome for sure, but are open to a range of interpretations, not all of which support a claim that the governments are much interested in imposing a totalitarian regime. And are these documents to be considered trustworthy as given, inasmuch as they have an unverified history that depends on the questionable trustworthiness of a single individual?
The NSA doesn't have to ask Linus Torvalds himself anymore. They can just submit a patch: there are so many patches going into Linux all the time that it is hard to check all new code. Apparently, this attempt was blocked. But how many more are not blocked? In Windows, the NSA cannot submit a patch, so it must ask Microsoft to deliberately insert one. But Linux has a very high code turnover, so it is not hard to submit some new code:
"If you were the NSA, how would you backdoor someone's software? You'd put in the changes subtly. Very subtly."
"Whoever did this knew what they were doing," says Larry McVoy, founder of San Francisco-based BitMover, which hosts the Linux kernel development site that was compromised. "They had to find some flags that could be passed to the system without causing an error, and yet are not normally passed together... There isn't any way that somebody could casually come in, not know about Unix, not know the Linux kernel code, and make this change. Not a chance."
You don't have to manually check all the patches, regression tests & unit tests can do that stuff for you more reliably (and in the case of wait4 a priv-escalation test should be pretty easy to engineer). Sufficiently motivated users can implement their own tests too.
Having said that writing your own tests against closed-source software is often a lot harder because typically it is insufficiently documented for you to infer what the valid states/inputs and outputs of the system are.
I was worried by the fact that vendors place backdoors in software as a matter of routine (eg: default admin accounts+passwords) *before* the Ed Snowden decided to strip away any of the comfy illusions I had about the surveillance regimes we operate under...
"...Strange then that [all code] does get checked..."
Sure it gets checked. But the point is that it is not checked thoroughly. It is only skimmed, and a lot of subtleties are not caught. There are question marks in the code that gets accepted; because the code turnover is so high, no one can thoroughly check all of it. A lot of code that no one really understands gets accepted. Maybe some of it contains subtle back doors?
"....Lok Technologies , a San Jose, Calif.-based maker of networking gear, started out using Linux in its equipment but switched to OpenBSD four years ago after company founder Simon Lok, who holds a doctorate in computer science, took a close look at the Linux source code.
“You know what I found? Right in the kernel, in the heart of the operating system, I found a developer’s comment that said, ‘Does this belong here?’ “Lok says. “What kind of confidence does that inspire? Right then I knew it was time to switch....”
“You know what I found? Right in the kernel, in the heart of the operating system, I found a developer’s comment that said, ‘Does this belong here?’ “Lok says. “What kind of confidence does that inspire? Right then I knew it was time to switch....”
Specifically to switch to giving stupid interviews to Forbes to increase the visibility of his obscure company.
The kernel has many comments about whether code could be improved and this is probably one of them. Logically, Lok is saying that coders should write comments as if they were writing marketing copy.
Aside from anything else, if Lok was competent to be writing code he should have been able to answer the question himself.
Frankly, I'd be more worried if the code *didn't* contain comments as such. There's no such thing as perfect code. Sometimes what you're writing seems pretty damn good, but sometimes there's a question mark about the better approach to take to solving a problem or its organisation. "Does this belong here" is a perfectly good comment to place by code. A more experienced coder may see the comment and think "Hmmm, no, I'll move it elsewhere and explain in the commit message my reasoning". Without the comment, probably no-one is going to review it and it'll be left there forever.
Given that "perfect" code is a highly subjective affair and given that time constraints exist, the search for perfection is fairly futile and not productive. "Better" is better than "Not better", so if a clear improvement is there to be made, subject to one or two doubts, it should be implemented, with a comment explaining the doubts so it can be picked up for further improvement down the line.
Linus is wrong about Chipzilla. It contributes nothing further to the randomisation if it has a predictable sequence. It's like wrapping an already random stream in see-through paper, that's all. It can't add further entropy if it is no longer usefully randomising. Dunno why he doesn't get that point. Using it just wastes processor cycles.
The issue was not about wasting cycles - it's about whether it can *reduce* entropy. Linus thought this was absurd because even if the data was not random, it wouldn't reduce entropy. That's true so long as the data is produced without any knowledge of the other random data it will be combined with - but the sufficiently paranoid observe that we can't check that's the case.
There is a very easy way to check: just plot the output of this supposed random generator, or fit it to an expression.
I used to write random number generators and test them, way back when, and the first thing I would do was plot the output: the human brain is very good at seeing patterns. I was amazed how difficult it really is to create a random number (it's impossible, basically, but you can get closer by degrees, and there are now good mathematical models for it, if they aren't NSA'd too, that is ;)
"human brain is very good at seeing patterns."
The human brain is indeed a pattern matching engine. That is how it works.
Unfortunately it even sees patterns when they really don't exist. That is how we end up with superstition.
Pseudo-random numbers, as used by GPS, look entirely random and would not be caught by your plot test.
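The point is easy to demonstrate in Python: a fully deterministic generator with a fixed, known seed sails through the kind of naive eyeball/frequency check described above.

```python
import random

# A completely predictable generator: the Mersenne Twister with a
# fixed seed. Anyone who knows the seed can reproduce every bit,
# yet a simple frequency check sees nothing amiss.
rng = random.Random(1234)
bits = [rng.getrandbits(1) for _ in range(100_000)]

ones = sum(bits)
fraction = ones / len(bits)  # very close to 0.5: "looks random"
```

Passing a plot test (or any battery of statistical tests) shows only that a stream looks random, not that it is unpredictable.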
"Linus thought this was absurd because even if the data was not random, it wouldn't reduce entropy. That's true so long as the data is produced without any knowledge of the other random data it will be combined with - but the sufficiently paranoid observe that we can't check that's the case."
Given most of the other inputs to /dev/random (the true RNG stream) are environmental, they'd have to subvert the environment to a great degree to be able to know the state of even one of the input streams to the point of being able to counter it.
And there are other true random sources of bits besides radioactive decay. You can use a reverse-biased transistor, shot noise, avalanche noise (this is what the Entropy Key uses), and so on. Then there are projects like HAVEGE that employ the hectic, multitasking nature of modern CPUs to draw entropy.
It contributes nothing further to the randomisation if it has a predictable sequence
Every pseudorandom number generator in existence has a predictable sequence (hint: that's why they have pseudo- in the name). However, because exploitation of said sequence depends on knowledge of the initial seed value, simply seeding a PRNG with data from a local nondeterministic source practically negates any advantage. Even the compromised Intel PRNG would not be easily exploited unless the application used a deterministic seed value and output a large sequence of unmodified values from the PRNG.
So provided the PRNG is seeded from a non-deterministic local source which is mathematically independent from the other inputs, using its values would contribute to the randomization.
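A minimal sketch of that seeding arrangement in Python, using the OS entropy pool as the local nondeterministic source:

```python
import os
import random

# Seed a deterministic PRNG from a nondeterministic local source
# (the OS entropy pool), as the comment suggests.
seed = int.from_bytes(os.urandom(32), "big")
rng = random.Random(seed)

# Note: for actual cryptographic use, skip the PRNG entirely and draw
# from os.urandom / the secrets module directly; this only illustrates
# the seeding point being made above.
value = rng.getrandbits(128)
```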
Every pseudorandom number generator in existence has a predictable sequence
True but "predictable" covers a massive range from "bleeding obvious" to "almost totally incomprehensible to anything less than a Culture ship Mind" and all points between.
As far as I know nuclear decay is the only easy genuine random source available but a bit tricky to include in a little chip at a suitably low cost.
"As far as I know nuclear decay is the only easy genuine random source available but a bit tricky to include in a little chip at a suitably low cost."
What about Americium-241 based smoke detectors? We already have widespread, low-cost "nuclear" gear in our homes in this form - why couldn't we use this same technology as an RNG in our computers as well?
As far as I know nuclear decay is the only easy genuine random source available
Completely wrong. Others are thermal noise (in analogue electronics), and turbulence (in airflow). Your audio input jack and hardware can be a very effective random source. For best effect, connect a thermal noise source instead of a microphone: it's trivial to build one from a few discrete electronic components, and power it off a USB port.
But even a microphone listening to background noise will do. Even if the spooks have a hi-fi uncompressed bug in your office, it won't be recording exactly the same audio stream. The least significant bit per sample will be random, which is quite a reasonable source of entropy to blend into an entropy pool. (If you stick your random noise microphone to your PC's fan grille, it'll be more than one random bit per sample).
Finally, for an entropy pool you don't need random in the sense of passing all statistical tests for a random source. It just has to be non-reproducible and not remembered by anything. So the "signal" bits of the background noise in your office also qualify to a greater or lesser extent.
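The microphone-LSB idea sketched above might look like this in Python (mix_into_pool is a made-up name, and folding with a hash is one common pool design, not the only one): keep only the least significant bit of each audio sample and fold the packed bits into the pool.

```python
import hashlib

def mix_into_pool(pool: bytes, samples: list[int]) -> bytes:
    # Keep only the least significant bit of each audio sample,
    # pack the bits into bytes, and fold them into the pool with a
    # cryptographic hash so no single input dominates.
    bits = [s & 1 for s in samples]
    packed = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        packed.append(byte)
    return hashlib.sha256(pool + bytes(packed)).digest()

# Hypothetical 16-bit audio samples of background noise:
pool = b"\x00" * 32
pool = mix_into_pool(pool, [103, 98, 101, 96, 99, 102, 97, 100])
```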
Every pseudorandom number generator in existence has a predictable sequence (hint: that's why they have pseudo- in the name).
No, this is yet another example of equating two distinct concepts: determinism and randomness. It's possible to be completely non-deterministic but still not random. It's the kind of subtlety which is why encryption and related areas are best left to real experts as opposed to the "I read a web page once" types. I'm certainly no expert either, but I have studied it deeply enough to realise it is a lot more complex than people generally assume.
A simple everyday example of the difference between the two would be what was found shortly after the introduction of the Euro coins: because the two faces are designed independently (one centrally and one on a national basis) certain Euro coins are not perfectly balanced and so have a slight preference for one side or the other when tossed. The result of an individual toss is still essentially unpredictable but in the long-term a marked bias shows up.
Tossing such coins thus yields a pseudorandom sequence, even though the individual values remain entirely unpredictable.
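The biased-euro-coin example above is easy to simulate in Python (the 52% bias is a made-up figure for illustration): each toss is individually unpredictable, yet the bias shows up clearly over many tosses, and a von Neumann extractor recovers unbiased bits from the biased stream.

```python
import random

# Simulate a biased "euro coin": heads with probability 0.52.
rng = random.Random(7)
tosses = ["H" if rng.random() < 0.52 else "T" for _ in range(100_000)]
heads_fraction = tosses.count("H") / len(tosses)  # ~0.52: bias visible

# Von Neumann debiasing: take non-overlapping pairs, emit 0 for HT,
# 1 for TH, and discard HH/TT. The output is unbiased even though
# the input coin is not.
debiased = [0 if a == "H" else 1
            for a, b in zip(tosses[::2], tosses[1::2]) if a != b]
```

This is one standard way a "random but biased" physical source gets turned into usable uniform bits.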
A biased sequence can still be random in some senses. The comment to which you reply is correct: all pseudo-random sequences are deterministic, because that's what the term means: a pseudo-random sequence is an algorithmically produced sequence which passes whatever your favourite statistical tests for randomness are.
You're correct that non-determinism and randomness are different: in mathematical modelling of systems, a "non-deterministic" choice is one that is not made by the system of interest, but by its environment: e.g. a vending machine has a non-deterministic choice between receiving a "tea" button press and a "coffee" button press, as which one happens depends on the environment (the user). In theory of computation, a non-deterministic algorithm really means one where all possible choices are explored in parallel; or alternatively, you can take a lucky guess as to which choices you should make.
"Random" refers either to statistical properties of a sequence - and such a sequence can be a determined thing, just not determined by any computable function - or to a primitive notion of probability. In the probabilistic case, biased outputs are included: for example, if you generate a sequence of bits every second by seeing whether an atom of uranium has decayed in that second, that sequence is, as far as we know, truly random in every probabilistic sense of "random". The ratio of zeroes to ones, however, depends on how much uranium you have. In the algorithmic case, a biased sequence is not random because it can be compressed - if the string has ten times as many zeroes as ones, then you can trivially compress it by coding sequences of zeroes, and you'll win - but it can still be a random sequence in the probabilistic sense.
Cryptographers want sequences that are random in both senses.
"A biased sequence can still be random in some senses. The comment to which you reply is correct: all pseudo-random sequences are deterministic, because that's what the term means: a pseudo-random sequence is an algorithmically produced sequence which passes whatever your favourite statistical tests for randomness are."
In isolation a biased sequence can't be considered to be random. To quote Knuth, "A distribution is generally understood to be uniform unless some other distribution is specifically mentioned" (TAOCP vol 2 section 3.1).
As for the meaning of pseudorandom, that simply implies an approximation of randomness. It says nothing about how the sequence is generated.
Surely, in the case you present, the *sequence*, e.g. HTHHTHTTTH.... is random, but the *probability* of the next toss being H or T is slightly biased. The point about a pseudorandom generator algorithm is that it's entirely and absolutely predictable. If you set up two, side by side with the same starting parameters, they'll produce exactly the same sequence which however looks (more or less) random.
That's not the case with the Euro coins. Even in a high-precision coin-tossing machine in a vacuum (patent pending) the sequence is going to vary based on unpredictable (read: random) variables.
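The "two side-by-side generators" point is trivially demonstrated in Python: identical seeds yield identical "random" sequences, which no physical coin, biased or not, will ever do.

```python
import random

# Two PRNGs with identical starting parameters produce exactly the
# same output, however random it looks.
a = random.Random(2013)
b = random.Random(2013)
seq_a = [a.randint(0, 9) for _ in range(10)]
seq_b = [b.randint(0, 9) for _ in range(10)]
assert seq_a == seq_b  # entirely and absolutely predictable
```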
This, by itself, is the case for open source operating systems such as Linux. They *can't* put a back door in, because it would be quickly spotted by everyone who audits the kernel source (and the rest of the source that makes up a Linux operating system -- yes, we call that Linux too, not silly names like GNU).
It also pretty much proves that every major closed source operating system absolutely has government back doors in it. If you use Windows, the government has the key to your computer. If you use Apple, the government has the key to your computer. If you use Linux, the government at least has to crack your encrypted communications first.
Any possible backdoor would be done in such a way as to appear as a bug that grants privilege escalation for plausible deniability. And we all know Linux is notorious for its many, many bugs that the kernel devs should classify as security vulnerabilities, but refuse to do so.
I see you attempted to backdoor this thread and I caught you out.
There are examples of bugs/backdoors/whatever you call them lasting years in open source projects because nobody caught them.
I'm quite the open source fanboy but I wouldn't go so far as to think it's perfect. Mainly my higher confidence in open source programs is that someone else will find the problems, but if everybody's thinking that...
"They *can't* put a back door in, because it would be quickly spotted by everyone who audits the kernel source (and the rest of the source that makes up a Linux operating system -- yes, we call that Linux too, not silly names like GNU)."
So what would happen if someone did spot something out of place in the kernel source?
Wouldn't it be fair to say that if that person starts asking on the kernel mailing list they'll just get ridiculed and optionally insulted for not understanding the module they're commenting on?
"Wouldn't it be fair to say that if that person starts asking on the kernel mailing list they'll just get ridiculed and optionally insulted for not understanding the module they're commenting on?"
If someone did spot something they would get a fair audience if they could demonstrate the issue or give a reasonable explanation.
They would get ridiculed if they continued to press a point that could only be explained by six rolls of tin foil.
There are enough Stallman-esque purists within the Linux community who would be quick enough to expose any backdoors within it. With a closed source OS, even if the individual programmers try to refuse to add them, if the pressure comes from 'head office', they will either have to implement them or go job hunting. Linux isn't perfect (what is?), but it employs a damn sight better model than the alternatives.
Jesus effin' Christ - Debian generated useless pseudorandom numbers for almost a year and a half.
NOBODY spotted the gaping bug for >months<.
No, it is >not< possible to guarantee that software is 100% backdoor-free - open or closed, it does not matter.
Linux, like any modern OS, is full of vulnerabilities (Windows is not better, neither is Mac OS X). Some of these vulnerabilities >might< be there on purpose.
The only thing you can do is to trust nobody and do the best security practice - limited user rights, firewalls (I would not even trust just one vendor), regular patching, minimal open ports on the network, etc. etc.
And the case against, too. Linux management, review, etc. is somewhat random: not systematic, not verifiable, and with no recourse for a customer burnt by a problem (unless they subscribe to a Red Hat or SUSE or similar support/professional contract, and even then only limited). One of the reasons for BSD variants being considered more secure is that they have a genuine review system that does not rely on one strong individual driving it.
It assumes that the world is full of keen Linux/UNIX engineers with time, interest and, most importantly, genuine ability, extensive experience, maturity and understanding. I can assure you that, for every dozen claiming that, you will be lucky to find one. Then you need to hope that person understands the subject being programmed, its interactions with other components, the libraries it uses and is really well versed in the nuances of the programming language being used. Security is very, very specialised. Even experts make mistakes or miss things or just fail to guess all of the possibilities; that is why Microsoft, Apple, Adobe, IBM and a thousand others are issuing security patches regularly and often for mature, heavily used systems. Bear in mind that many, if not most, of the Linux contributors work for such companies. Self accreditation does not count. It is really easy to obscure code. Nothing in any operating system or language protects against that.
Of course, perhaps I am wrong and you can give me details of the review board, standards, test system, documentation standards, certification and so on.
No, it's a pretty world that you imagine. It just is not this world.
My point was that your previous comment said that this was a discussion for adults, then you descended into name calling. And now you've done it again.
I have a perfectly valid reason for posting AC - I don't want people to know who I am. I used to post as my handle until someone said they thought they'd worked out who I was from my posting history and would test my personal security and that of my employer. I'm never posting with my handle again.
I also work for a company who've historically been one of the most important to Linux and they don't like their employees commenting on the Internet. Again another reason to post AC, particularly in a thread about Linux.
"this was a discussion for adults,"
Just reiterating the tired old stuff about "And we all know Linux is notorious for its many, many bugs that the kernel devs should classify as security vulnerabilities, but refuse to do so."
makes you sound VERY like an AC who posts here repeatedly, just using the same words, without giving any evidence as to the number of serious or important vulnerabilities.
"Except that anyone who knows anything about secure development practices will know that the 'many eyes' theory is a load of BS because most devs don't know what they're looking for let alone how to fix it."
Sure, it requires some specialist aptitudes and attitudes, but the fact is that closed source is going to have fewer eyes on it by definition, and it will have fewer skilled eyeballs. Folks who are genuinely interested and skilled in this stuff contribute to Open Source - particularly with respect to networking and a lot of other Internet tech we take for granted.
Open Source networking code is usually worked over by the folks who set the standards, do the first implementations, and work for vendors on the same stuff. It's not unusual for a bit of code to have been worked over by several boffins employed by multiple vendors, in fact it is so common place we take it for granted.
Still mistakes happen etc, but in practice I think I'd prefer code that has been worked over by a lot of people who have their work appraised by competing peers who are also skilled in the art. Big gene pools tend to produce stronger more versatile offspring. Closed source gene pools are so small the code produced often has the look of a mistreated inbred by comparison.
They *can't* put a back door in, because it would be quickly spotted by everyone who audits the kernel source
ah ... except for the binary drivers then, that run also in kernel space.
And who provides binary drivers? Network card suppliers like Broadcom, for example. Now you have a system connected to a network, and the first-line device is a black box supplied by a company subject to NSA authority!
Call me paranoid.
Everyone that audits the Kernel?? lol
Look, there is no way on earth the hundreds of thousands of lines of C code (some of it still really, really rough) have all been security audited. Maybe most of it is now, before it goes in, but make the change subtle enough and I still think you could get a back door in.
There is stuff in the kernel that has been there for donkey's years, and people hardly even know how it works (like the early boot-up process to set ring levels).
I hate to admit it, but I am pretty sure the NSA already have multiple back doors.
I doubt the vulnerability is introduced via a single patch named 'Backdoor_V0.1'
Inserted in a number of genuine bug fixes, over a period of time, are some apparently innocuous lines of code that when combined enable the back door. The people who write this stuff are apparently quite skilled at what they do.
That said, I'd attack the common element between the three main desktop platforms. X86 architecture. Is your Linux installation running on an open source chipset? Your network interface? How about your GPU, that's a fair bit of grunt not under your direct control. To obsess on OS security rather misses the point.
The argument that backdoors would be quickly spotted by source audit is good, but weakening over time. Specifically, when the number of changes is high, or a large auditing group actually turns out to compartmentalize audit amongst specialists, things can get through. It doesn't matter if the audit group is 100 people strong if all the graphics drivers get reviewed by two people.
For example, if I was going to submit a backdoored piece of code, I would pick an area of the kernel that got a lot of churn (commits), and hide it in the noise. I'd also put it in an area where only a few auditors were known to specialize.
Without wishing to start a religious war, the OpenBSD guys recognized this a long time ago and instituted very rigorous code review by a very small number of people. You pay a price in terms of what you get out of the box on that OS, but if what you are doing is important, it might be worth the price.
"everyone who audits the kernel source"
Yes and no.
Or indeed, make sure you get enough of your guys into a position of audit.
Actually, the other presumption is that a backdoor would be recognisable - the best kind of backdoor would simply be encouraging use of a 'broken' algorithm - i.e. encouraging use of DES based encryption, after you know how to crack it.
Which is sort of the paranoia around RdRand - a presumption that the NSA can crack it, despite it passing every known test of randomness going.
What seems daft to me is having that level of paranoia about RdRand while retaining a level of confidence that the encryption and PRNG algorithms you are using are not flawed - because you can see the code.
Especially when the history of cryptography say that there is almost certainly a flaw.
And that the smartest decryptographers are often working for the government, not against it. Not to mention that the best thing any decryptographer can hope for is that his opponent has an overweening confidence in the strength of his encryption system.
True enough, with an additional bonus I should think ...
NSA: May we have the source code for xyz.dll ?
Microsoft: (indignantly) Of course not ! We at Microsoft ... (ten minutes later) ...
NSA: Ok then can we have a copy of Build # 1234567 of xyz.dll ?
Microsoft: Don't know why you want it, it's 3 years old, but here it is.
NSA: I really wanted the source code ... (spook grumbles to cover unconscious laughter)
Microsoft: Well, NEVER ASK FOR SOURCE CODE AGAIN, Buster!
This is how it works, and it ain't no Enigma. Spook leaves with long run of known plain text. The Germans made the same mistake.
High entropy != random
The problem with intel/nsa TPM "random" data is that it's pretty much IMPOSSIBLE to distinguish real random streams from good pseudo-random streams. Unless you already know how the PRNG works.
Intel/nsa CLAIM to have obfuscated a true random stream within a pseudo-random AES cipherstream. How can anyone other than intel/nsa ever corroborate that CLAIM?
How do you open-source a chip schematic? Plus, if the chip makers were true geniuses, they'd have accounted for the possibility of someone decapping or otherwise stripping the chip down to the circuits and trying to trace them (on the assumption that a truly determined adversary, say another state, would try to identify or subvert it) and simply made it so the chip fried and was useless on any attempt.
No. Not really. At least, not for millions. Or billions.
Even an old, clunky, basic-model WWII electromechanical Enigma machine would appear random up to a trillion or so characters. Modern crypto is *way* more random.
Except, of course, if you have inside knowledge... like what we got by capturing an Enigma machine, and were then able to work out that there were a limited number of seed values (a few thousand?) - and hence break the apparently random stream.
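That small-seed-space weakness is exactly what exhaustive search exploits. A toy Python sketch, assuming (like Enigma's daily settings) only a few thousand possible seeds and a predictable "crib" at the start of the message; the seed value and message are invented:

```python
import random

def keystream(seed, n):
    # Toy stream cipher keystream from an ordinary PRNG (not secure).
    rng = random.Random(seed)
    return bytes(rng.getrandbits(8) for _ in range(n))

def encrypt(seed, msg):
    return bytes(m ^ k for m, k in zip(msg, keystream(seed, len(msg))))

secret_seed = 2741                    # hypothetical "daily setting"
msg = b"WEATHER REPORT FOR TODAY"     # messages open predictably
ct = encrypt(secret_seed, msg)

# A known crib plus a seed space of only a few thousand values
# makes exhaustive search instant.
crib = b"WEATHER"
found = next(s for s in range(10000)
             if encrypt(s, crib) == ct[:len(crib)])
print(found)  # recovers the secret seed
```

The stream itself looked perfectly random; the break came from the tiny key space plus the crib, which is the commenter's point about inside knowledge.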
BTW: this isn't exactly a new issue:
It seems that developers are informally sounded out about the possibility of granting spooks secret access to their technology before the discussion goes any further into the technical details and requirements. Once a programmer snubs the feds, the g-men back off, it's believed....
And then, when your company is involved in any government-associated work, either prime or sub-contracted, or is involved with any client who is involved with any such work, the developers' careers seem to undergo a sudden reversal...
The pressure on Biddle came primarily from FBI agents who said they needed a skeleton key, of sorts, to easily break the crypto on suspects' computers in child-abuse investigations, allowing the locked-up data to be examined....
I assume that Mr Biddle will shortly be appearing in front of a court to answer charges of aiding and abetting paedophiles. Or terrorists....
...you don't hesitate to call for some people to "horribly die in a car accident", but you are scared like a baby chicken to honestly and straightly tell the truth about shady demands from the government?
That means: You are their tool. Go back under the rock you share with Steve Ballmer.
This is good, I like this.
How long until someone figures out, or leaks, the nature of these supposed backdoors? If there *IS* a backdoor in a piece of software, and someone is sufficiently determined to find it, it will be found, and exploited, and then the guns are no longer merely in the hands of law enforcement...
The thing about Linux is that merely glancing at the source code may not reveal anything.
A deliberate flaw in one module may be chained to a deliberate flaw in another, and another, and so on. Statically, it looks benign in code, but when it runs and all of these flaws manifest themselves in a running system...
Putting in some kind of elaborate backdoor which isn't seen to exist when the code is at rest isn't such an absurd idea.
"Putting in some kind of elaborate backdoor which isn't seen to exist when the code is at rest isn't such an absurd idea."
So you run a distro in a VM with all sorts of analytical tools to see if any unusual activity is going on. It can't be triggered directly from outside, because you could firewall that, but any unusual outgoing activity could be caught and dissected.
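As a sketch of that idea, one could compare the outbound connections observed from the VM against an allowlist of expected endpoints and flag everything else for dissection. Everything below (addresses, ports, the allowlist itself) is synthetic and purely illustrative; a real setup would feed in live capture data:

```python
# Hypothetical allowlist of (host, port) endpoints the VM is expected to talk to.
EXPECTED = {("203.0.113.10", 443), ("203.0.113.11", 80)}

# Synthetic stand-in for connections observed leaving the VM.
observed = [
    ("203.0.113.10", 443),      # normal web traffic
    ("198.51.100.99", 31337),   # unexpected: flag for dissection
]

suspicious = [conn for conn in observed if conn not in EXPECTED]
for host, port in suspicious:
    print(f"unexpected outbound connection to {host}:{port}")
```

The catch, per the earlier comments, is that a well-built backdoor would phone home rarely, or piggyback on traffic that looks expected.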
...if he'd given us a straight "yes" or "no". You know, of the sort that comes back to bite you later if you've told a lie and then got found out? As it is, he has plausible deniability about lying, because no-one can really decide if he said yes or no.
Or can they? Anyone like to actually commit to whether they thought Torvalds' answer meant yes or no?
"Spooks can compromise these supposedly secure communications by gaining access to the root certificates and encryption keys"
That would only be the case if the CA allowed it. A company could very easily run its own PKI and thus not need a CA at all. Then the spooks wouldn't have access to the root certificates or the encryption keys.
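To illustrate the private-root idea without real X.509 machinery, here is a toy model in Python using Lamport one-time signatures: the company's root signing key never leaves the company, clients trust only the matching public key, and a certificate signed by anyone else's key is rejected. The names and the "cert" format are made up; a real deployment would use OpenSSL or similar, and Lamport keys are one-time-use only:

```python
import hashlib
import os

def H(b):
    return hashlib.sha256(b).digest()

def keygen():
    # Lamport one-time signature keypair: 256 pairs of random secrets;
    # the public key is the hash of each secret.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def digest_bits(msg):
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal one secret per bit of the message digest.
    return [sk[i][bit] for i, bit in enumerate(digest_bits(msg))]

def verify(pk, msg, sig):
    return all(H(sig[i]) == pk[i][bit]
               for i, bit in enumerate(digest_bits(msg)))

# Company-internal root CA: the signing key never leaves the company.
root_sk, root_pk = keygen()
server_cert = b"CN=mail.example.internal"  # hypothetical cert body
cert_sig = sign(root_sk, server_cert)

# Clients ship with root_pk only, and accept certs signed by *this* root.
print(verify(root_pk, server_cert, cert_sig))    # True

# A certificate signed by any other key is rejected.
outside_sk, _ = keygen()
forged_sig = sign(outside_sk, server_cert)
print(verify(root_pk, server_cert, forged_sig))  # False
```

Since the spooks never see `root_sk`, no external CA compromise lets them mint certificates the company's clients will trust.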
Don't worry about it unless you're about to compromise yourself, or unless you've held a high security clearance, at which point you find out the rules -- I wasn't actually discharged from the Navy. In the early 90s HP was quietly told to put back doors in all internet servers, just like Cisco; it hit the news once, that I saw. Backdoor = command-level password and hardware; no details were given.