Re: "Android gets low-key security update"
And why others have never purchased a "smart" phone for themselves at all...
478 posts • joined 23 Jan 2015
Mail-in voting fraud is rarely conducted after the ballots arrive. Lots & lots of fun is had, however, before then.
The problem was not "Florida". The problem was one specific county in Florida. The same county that had Federal indictments for voter fraud in 1996.
Moreover, a statistical analysis of the "hanging chad" & multiple vote problems was strongly consistent with what would be expected if you took a stack of ballots that had already been voted and punched "Gore" through the stack.
Like I said, anything beyond paper ballots (and by that I meant grease pencil marked) is basically a recipe for fraud.
If you hack the data behind the website, you hack the website "for free".
And... I remember an investigation where someone familiar with the technology showed just how easy it was to "fix" the results on those mechanical voting machines.
JUST SAY NO.
If it's not fill-in-the-oval-just-like-you-did-for-twelve/sixteen-years-of-school, it's creating opportunities for fraud.
---- virtualisation and multi-threading have both been regarded as security risks since their inception.
-- Regarding virtualization, I can't remember anyone running around telling people "don't use this!"
You missed the point. "Risk" is not the same as "vulnerability". Lots & lots of work has gone into driving the risks of virtualisation into the dirt. And there were discoveries & recoveries from various side channels involving virtualization in the '90s. I don't know if any of them were discovered post-GA.
Certainly, Intel's attitude toward their customers and end users has been, to put it lightly, problematic. But the tone, and even some of the factual claims, are over the top.
If consumers cared about security, we would be able to buy smart phones from RIM. The fact that the Blackberry went from almost being first in market to exiting the retail market completely means that the customer has spoken. And for decades, the customer said, "We don't care if others can snoop, we want the new shiny, and we want it NOW." Even today, if you went out on the street and did a poll, I would be shocked if 1% of consumers had any idea about these bugs.
So, a year ago, some academics managed to exploit a weakness that every consumer chip for the previous twenty years has had, and that the entire industry was aware of. Certainly, this is a big deal in the sense that a lot of work needs to be done. And today, some more of that work has been exposed. But in terms of evil corporate behavior, Intel's abuse of monopoly, especially wrt AMD, is far, far worse.
If Blackberry had 10% of the consumer market, this sort of tone might (might) be appropriate. But--THERE. HAS. BEEN. NO. DEMAND. Intel is a company. The customer is king. Always.
Mr. "I spent a decade doing microprocessor validation at AMD and IBM starting in the mid-nineties" here.
"Horses for courses" is exactly correct, but the wrong inferences are being drawn. Yes, the embedded world, especially in industry, is very, very different from the general compute environment. As such, the rules are expected to be, and indeed are, very different.
But complexity defies perfection. As an example, the fact that, in the example given, all code is signed is both inappropriate in the world of general computing (because global certificate management is a joke) and insufficient to guarantee 100% security at its own level (because signing can be and has been spoofed).
Suggesting that hardware can be 100% side-channel free borders on silly. As I have often mentioned, making execution times data independent is going to be a major overhaul of the architecture, and is going to dramatically harm the performance/cost numbers.
The solution in the datacenter is going to be mostly that people end up renting entire machines by default. The increased cost is going to be dwarfed by the performance cost of properly securing machines against this class of attack.
Like "metal" or "low frequency" in astronomy, "room temperature" has a very different meaning in superconductor physics than in general usage. When I complained to my physicist friend, about twenty-five years ago, he responded, "liquid nitrogen is cheaper than milk."
This code is often in the critical inner loop for an event that the user is actively waiting on. Speed of execution is a really big deal here. Not much time to screw around manipulating the EMEs.
Moreover, on the X86, user software could more-or-less control the entire state of the chip with ease. This is not at all the case any more. Finally, for the last twenty years, chips have been designed with the awareness that EME is a side channel, and they have taken steps to reduce it.
So this particular game is both much harder to play and significantly harder to win these days. <sigh>
This chip dates from the period when I was doing microprocessor validation at AMD and IBM.
First, such a facility has 0 added value to a competent validation team. All of these chips have a special debug (JTAG) interface that allows direct access to the register file and every level of cache (including otherwise hidden state machines) on the part, plus a few other things. This interface is used to test EVERY chip BEFORE the wafer is cut into dies, and again after the dies are packaged. While the chip is still in development, this interface is used to load test code (like the code I wrote) directly into the processor.
Moreover, the job of the validation team is to test all supported features of the part. The addition of such a facility creates a HUGE space of processor state transitions that have to themselves be tested.
Yes, exposing the RISC core that is behind every x86 processor since Intel's Pentium (and maybe before) can have tremendous performance benefits for specialized applications. When I was there, I wanted AMD to open up such a facility. I was brushed off by people who knew way, way more than I did about such things. In particular, I did not understand the architecture well enough to instinctively grasp the implications for things like ring-0 versus ring-3. In retrospect, it does not surprise me that the C3 simply bypassed all of that. The facilities would be operating at different levels.
So, yeah. It's probably with excellent cause that I was brushed off.
Which gets to a key point: even when talking to a bunch of professional hackers, the people working on these things aren't applying basic security thinking.
No thanks. No thanks until we go a month without an out-of-bounds bug being reported on El Reg. No thanks until we go a month without an article about how easy it is to spoof AI systems similar to the ones used in these cars. No thanks until we get some kind of demonstration (with a straight face) that the security in these systems does NOT completely break down when physical access is achieved. No thanks to IoT with one-ton objects that routinely travel at 60 mph.
At a societal level, I worry most about a system-wide breach that could have a million of these all turn into oncoming traffic at once. At a personal level, I don't want someone to decide that they are tired of what I have to say, so they bomb me with one.
There is a huge difference between having security features and being secure. Meltdown is a straight-up awful case of security feature bypass.
X86 is an ugly beast of an architecture, but the install base is mind-numbingly large. Don't expect any migration away.
But... this is not an architectural weakness. The architecture makes almost no statements about the time required to execute an operation, and for a good reason: once the architecture is fixed, performance is entirely about changing the time it takes to perform operations.
What is required is a completely new microarchitectural design, where data-dependent execution times are considered as security breaches.
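The same principle already exists in software under the name "constant-time" coding. A minimal Python sketch of the idea follows; it is illustrative only, a software analogue of the microarchitectural requirement, not a defense against speculative-execution attacks:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Early exit on the first mismatching byte: the running time leaks
    # how long the matching prefix is -- a classic timing side channel.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where the
    # first mismatch occurs, so its timing is data-independent.
    return hmac.compare_digest(a, b)
```

The hardware version of this trade-off is exactly what the post describes: making every operation take worst-case time costs performance, which is why it has never been the default.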
I expect two more years before we see these for consumers.
This is a known effect of Spectre. It can be seen in the comments of almost every Spectre-related article. Investigation is ongoing, and as of yet, no clear theories as to the cause have gained much support.
I agree that the original advice was unworkable. But the updated advice is false.
There are no good options from the point of view of the government.
WTF, El Reg? We've been over this, and over this. I'm getting tired of having to type it.
1) The documentation for the processors specifically states that they are not approved for use with government information marked CONFIDENTIAL or higher. These chips have NEVER been sold as "unhackable" or even hack-resistant.
2) The documentation specifies (for hundreds of pages), "If the processor state looks like X, and you apply Y as input, the new processor state will look like Z". NOWHERE has this been violated.
This is a security vulnerability, not a design defect: the chips were never designed to provide protection against subtle side channel attacks, a fact which has been known to everyone in the business for literally twenty years before this attack was found.
Yes, you broke the story. Good bit of real reporting work on your part. Now, after seven months, PLEASE get your facts straight!
Actually, yes. These teams would have been composed of senior members brought in on a strict need to know basis. These people don't talk, so unless one was already on the payroll, it would not happen.
As much as we don't trust the TLAs, there are 300 million people living here. Even for tech, there just aren't that many people on the payrolls.
Sorry, Andy, but you appear to be new around here. I spent a decade doing microprocessor validation at IBM & AMD. What I am about to say has been corroborated by others with similar levels of experience:
There is NO software/firmware fix for this class of vulnerability on current hardware without an outrageous performance loss.
What you can do is make it much harder to exploit. You might even be able to eliminate specific exploits. (Like Meltdown.) But you cannot close them all.
The original CERT post was correct. It was also impossible to implement. The update reduced upheaval in the industry. Classic move by government.
I estimate that we have two years before we see consumer hardware that can be secured against this class of attack.
THAT is what inspired that song? Wow. In any event, my company did Vegas for the annual one year. Most stressful week of my life. (Worse than basic training.) Almost all retail these days is predatory, but there my defensive warnings just would not turn off.
Not my town. Not my town at all.
I'm not a big fan of parties, either. But then, I know that I'm high-functioning Asperger's with the usual stupidly-high IQ. The point is, I'm so far out of the norm, I don't count. Parties work well for most people, however, including a significant majority of those in IT. Just not the folks I feel comfortable hanging out with. (Assuming that I'm actually in the mood to be around other people outside of work.)
Sprinkle magic AI dust, and everything is *NeW*, huh?
How is this AI network functioning as anything other than a hash?
Certainly, such a network is more complicated to reverse than a set of openly-tested conditions. But crypto is crypto, input is input, and a hashing function is a hashing function.
What am I missing?
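To make the analogy concrete, here is a hypothetical sketch (weights invented for illustration): once its weights are frozen, a network gating access is just a deterministic function of its input, and "reversing" it is a search problem, the same as for a weak hash-based check:

```python
import itertools

def tiny_net(bits):
    # Hypothetical fixed-weight "network": a deterministic map from an
    # input bit vector to a score. With the weights frozen, it is a
    # pure function of its input, just like a hash.
    w1 = [[1, -2, 3, -1], [2, 1, -1, 2], [-1, 3, 1, -2]]
    hidden = [max(0, sum(w * b for w, b in zip(row, bits))) for row in w1]
    w2 = [1, -1, 2]
    return sum(w * h for w, h in zip(w2, hidden))

def accepts(bits, threshold=3):
    return tiny_net(bits) >= threshold

# "Reversing" the check is just a search over inputs, exactly as with
# a weak hash-based gate:
passing = [b for b in itertools.product([0, 1], repeat=4) if accepts(b)]
```

A real network is vastly bigger, so the search is harder, but as the post says: harder to reverse is a difference of degree, not of kind.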
Sounds like you've never worked an election. I've worked about 30.
1) Election judges are appointed by the county chairmen of the leading party in the district. The alternate is appointed by the other party.
2) A month before the election, the election judges receive their orders to conduct an election in their precincts. Those that cannot are replaced by alternates.
3) Two days before the election, the certified judges pick up their election materials. (Yes, government-issued ids are required.) Many of the judges are known to the county election officials. Ballots are numbered. Ballot boxes are sealed with numbered seals. A list of registered voters in the precinct is provided.
4) The day before the elections, issues with unclaimed materials are resolved.
5) The day of the elections, the judges set up the voting booths. The alternates usually arrive at about the same time. If for some reason, the building is locked, we have set up under trees. (Yes, Texas is not England--rains are not as big of a problem.)
6) Voter proof of ID is a heavily contested issue. One party argues that proof of ID is required to prevent fraud. The other argues that requiring ID suppresses turnout. Having worked these elections, I will testify that fraud is a real concern.
7) Voters sign against their name in the registry and select a ballot. After they vote the ballot, they put it in the box. Spoiled ballots are stored separately (and the voter can choose a replacement ballot.)
8) Parties and candidates can certify and send election observers to any and all precincts. Observer behavior is tightly constrained.
9) If there is a concern with an individual voter, they vote a provisional ballot. In this case, the voter loses their anonymity. Based on my experience, I would guess that the challenge rate is less than 1/1000. Most challenges involve problems with the voter registration process. As mentioned, they are ignored unless the vote is close. In that case, a regular judge handles all of the issues relating to resolving the election.
10) When the elections close, the judges close out the site. They keep a copy of ALL of the elections materials (except the ballots).
So yes, the integrity of the elections judges is a big deal. The integrity of the county elections office is a bigger deal. But the judges and the office balance each other. With paper ballots, it is impossible for an outside agent to corrupt the process wholesale.
Not so with electronic voting of any form.
As a mathematician, get hosed. Formally checked proofs are really, really hard to implement unless you have the training to do proper proofs in the first place, and there just aren't that many of us around.
And given that bad proofs have in fact gotten out from time to time, even mathematicians are clearly not immune to logical failures, so formally checked proofs ARE the only way to ensure a system is bug-free. If you can somehow prove that what has been proven actually corresponds to what has been implemented, that is.
I spent seven years doing random testing at the microprocessor level. I guarantee one thing about fuzzers--they cannot cover everything.
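A toy illustration of why (the unit under test is hypothetical): a bug triggered by exactly one of the 2**32 possible inputs is, for practical purposes, invisible to random testing.

```python
import random

def buggy(x: int) -> int:
    # Hypothetical unit under test: misbehaves on exactly one of the
    # 2**32 possible inputs.
    if x == 0xDEADBEEF:
        return -1  # the bug
    return x % 7

def fuzz_finds_bug(trials: int, seed: int = 42) -> bool:
    # Each random trial has a 1-in-2**32 chance of hitting the bad
    # input, so even millions of trials will almost certainly miss it.
    rng = random.Random(seed)
    return any(buggy(rng.randrange(2**32)) == -1 for _ in range(trials))

# Probability that a million random tests MISS the single bad input:
p_miss = (1 - 2**-32) ** 1_000_000
```

And that is a single 32-bit input; real processor state spaces are astronomically larger, which is the point.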
Google has run out of 10.x.x.x internally by now. When they projected that they were 18 months out, they started the migration. When I was at IBM (early 2000's), we had 9.x.x.x & 10.x.x.x. I don't know if IBM had given 9 back by then or not...
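For scale, the stdlib can do the arithmetic on the 10.0.0.0/8 range mentioned above:

```python
import ipaddress

# RFC 1918 reserves 10.0.0.0/8 for private use: 2**24 addresses, about
# 16.8 million. Large, but an organization that assigns an address to
# every VM, container, and device can plausibly burn through it.
ten_net = ipaddress.ip_network("10.0.0.0/8")
size = ten_net.num_addresses  # 16777216
```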
I would say that the evolution of Black Lives Matter pretty well demonstrates that the Ministry of Truth is quite functional in segments of US politics, and has been for years.
Nevermind our twitterer-in-chief.
But do note that G has more-or-less openly talked about influencing elections in third-world countries.
And they were caught suppressing negative search auto-completes of one candidate in 2016, as reported here.
So my paranoia to use SecureRandom.base64() has just been validated? It was a nasty pain to type in 23 characters on my TV...
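For anyone on the Python side, a rough stdlib equivalent of that Ruby call (variable name is mine):

```python
import secrets

# Near-equivalent of Ruby's SecureRandom.base64: a CSPRNG-backed,
# URL-safe base64 token. token_urlsafe(16) draws 16 random bytes, so
# the resulting string is 22 characters (base64 expands the bytes).
token = secrets.token_urlsafe(16)
```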
The first thing that DevOps generally gets used to do is to automate testing. What ARE you going on about?
The more I've been up close & personal with political operations, the more convinced I've become that the entire purpose of voting machines is to create opportunity for fraud.
Paper, fill-in-the-oval ballots, all the way. Use the same scantron you use for scoring tests.
Yeah. 2003. Right about the time we started to see Bush = Hitler all over the place.
As always, if you sufficiently control the language, you constrain thought. 1984 was NOT supposed to be a training manual.
Which is part of why people start wondering about false-flagging.
The average alt-righter wouldn't have the first clue as to what these events are about. And while the tech industry seems to have taken a significant shift to the left in the last ten years, it is hardly enough to raise the kind of ire that would start this sort of thing. Alex Jones's FB suspension may (may) have changed that, but this event was well before that.
And how many events of any sort has the alt-right attempted to disrupt? Event disruption by the left is so common that it's often not reported. And for the last couple of years, the authorities have been increasingly supportive of it.
But again, it's downright weird to say that the alt-right would more-or-less start event disruptions by going after tech events.
I hate to jump in at this point, but your assertion shows a tremendous lack of imagination. False flag operations don't have to become the topic of conversation at every watering hole to be effective. They can be targeted like any other piece of PR. If the goal of an operation is to sway the view of techies, you target techie events.
First, IANAL. But I've read enough law to be confident answering this one. It's the formal language of US law. For instance, "Person" refers to an entity that can engage in contracts and/or be held accountable for crimes. In this case, "protected" means that they are the subject being protected by the statute in question. You may have heard the term "protected class" from time to time. The precise meaning of that term depends on the statute in question.
Nah, it's a pan-galactic numbers station.
I had a guy assert to me that everyone is going to become a programmer. Looks like the G-men want everyone to become a sysadmin.
This advice is the sort of thing to get my paranoid side going. The only honest assessment of IoT "security" is, "Hahahahahahahahaha.....!", or "Ieeeeeeeeeeeeeeeeeeee...!" If consumer protection were the real goal, the FBI ought to be issuing stern warnings about the lack of security and the dangers of using these devices. Almost all are fundamentally unsecurable, which means that any vaguely competent actor can just pop them at will and get complete access to...
Yeah. My paranoid side really gets triggered at this point.
I think you skipped the bottom of my post. My objection is that you are condescending to one of the most successful companies in the world on the presumed basis that you would have done things better. My point is that their evolution is perfectly reasonable.
Two posts now claiming that database development is NBD. I wonder just how many such things they have worked on personally?
I've not worked on any, but I've been close enough to see some details. Sure, you can teach the basics of last year's best practices in college. But try suggesting building any large business-critical system with green grads and see the response you get.
The business demands for information will continue to grow in an exponential fashion for the foreseeable future. The largest consumers will never be commoditized because no solution scales very well indefinitely due to physical limitations of the implementing systems (such as the speed of light).
You REALLY don't understand just how this all works. When you are a garage company, you rely on other people's work for as much as you possibly can. As you grow, you bring a lot of things in-house, but if you are smart, you differentiate the expertise that is core to your business from that which is not. This is how the large consulting firms exist. Sure, you could have your own team of accountants or lawyers or programmers, but your business is selling widgets. At least one major automobile manufacturer split off its financial services for this reason.
So back to Amazon. When Amazon started, they were a book seller. Okay, online, but a book seller nonetheless. They needed to focus on getting a web front end up that worked and on delivery of their product. Of course the database was in the middle of all of this, and with their rapid success, they soon hit the point where MySQL (or whatever they were using) was choking. So, do you go out and find a bunch of people that can design a better DB engine, or do you look to buy something? They went with Oracle. (Trading one problem for another.)
But then, a funny thing happened. This online bookseller became a technology company. That change meant that it was now reasonable to talk about building your own database solution. But "reasonable to talk about" versus "have it done" are two very different things. You have to decide that it makes sense to proceed, then you need to hire the talent, then you need to build the thing. Oh, wait. You are almost certainly running one of the top 100 largest datasets in the world. You can't bring in average talent. You can't bring in above average talent. You are going to have to recruit (and/or grow) some of the absolutely best talent in the world to execute this plan, because if it fails, you are d-e-a-d dead.
So yeah. It's obvious that one of the most successful companies in the world doesn't have a clue. Glad that you informed us.
So why didn't Google offer these "secure Androids" to their SREs? If Google cannot secure the device, I'm calling it unsecurable.
So, which is it: "never used" or only used in "extreme emergencies"? The policy implications of the two use cases are substantially different.
If it is in fact never used, then what does it matter what it is? Disable it & be done. Heck, this is an exception to the "never hard coded" rule I just mentioned.
If it is in fact held in reserve for "extreme emergencies", then you have a problem: how do you know that one of the 400 by-hand settings was not mis-typed? This is a serious amount of toil you are bragging about. Would you still be proud to set 8000 root passwords by hand? 160000?
We use software because we are stupid & because unnecessary energy expenditure is a bad thing. You can continue to manage 400 root passwords by hand because you consider it to be "secure", but I can all but guarantee that the actual failure rate you experience will be significantly higher than if this were managed by good software.
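As a sketch of what "managed by good software" could look like (hypothetical host names, stdlib only; a real system would feed a vault or secret store, never a dict in memory):

```python
import secrets
import string

def generate_passwords(hosts, length=24):
    # One independently generated, CSPRNG-backed password per host.
    # No human typing, so no typos, and no two hosts share a credential.
    alphabet = string.ascii_letters + string.digits
    return {host: "".join(secrets.choice(alphabet) for _ in range(length))
            for host in hosts}

# 400 hosts, as in the scenario above; names are invented.
creds = generate_passwords(["db%03d" % i for i in range(400)])
```

Scaling to 8000 or 160000 hosts is a change to one number, which is exactly the point being made about toil.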
I'm taking your earnestness on faith. OF COURSE no password is hardcoded in any software created by a competent programmer. (Let alone a software engineer.) It gets passed in when the program is run. And not on the command line because logs.
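A minimal sketch of that pattern (DB_PASSWORD is a hypothetical variable name):

```python
import os

def get_db_password() -> str:
    # Read the credential from the environment at run time rather than
    # from argv: command lines routinely end up in shell history, `ps`
    # output, and logs, while environment variables (handled with care)
    # do not.
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD is not set")
    return password
```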
You have 20+ servers, and you want to change passwords by hand? SW guy here. This is why devops has become a thing.
I spent eight years programming assembly. Here's my response: http://www.u.arizona.edu/~rubinson/copyright_violations/Go_To_Considered_Harmful.html
The fact that a computer necessarily branches while executing your code does not mean that your brain is fit to reason about the state of the machine when it does. Demanding that later maintainers tiptoe around the landmines that are created by gotos is at best unneighborly.
Here is a much better one: https://youtu.be/YR5ApYxkU-U?t=2m22s
"Teach right and wrong." According to whose standard? Mine or yours? Mao's?
Government-run schools ALWAYS teach what the powers that be want taught. If you value freedom of thought, then keep your children as far away as possible.
Allowing military supplies to be sourced from a potentially hostile actor is just stupid. This has been known for...a while.
First Samuel 13:19-22: "Not a blacksmith could be found in the whole land of Israel, because the Philistines had said, 'Otherwise the Hebrews will make swords or spears!'
So all Israel went down to the Philistines to have their plow points, mattocks, axes and sickles sharpened.
The price was two-thirds of a shekel for sharpening plow points and mattocks, and a third of a shekel for sharpening forks and axes and for repointing goads.
So on the day of the battle not a soldier with Saul and Jonathan had a sword or spear in his hand; only Saul and his son Jonathan had them."
If you consider the US (my government) a potentially hostile actor, then urge yours to act accordingly.
The issuance of .catholic and .kosher make it clear that ICANN isn't the least bit concerned about bias and the like. I can very much believe that they ARE concerned about starting a war that is likely to extend to "highly energetic" packages at their personal doorsteps.
ICANN might be a perfect showcase for almost everything that can go wrong in an organization of its sort, but the demonstrated danger of making a ruling on these two gTLDs makes me suspect that there is actually useful intelligence behind their actions. Especially when you consider that a "delayed" application is a source of ongoing revenue, while a permanently denied one is not. :/
The Power (and PowerPC) chips that I knew had the ability to be non-speculative wrt the count register, but they were in fact speculative. I expect that to change.
I've generally been on the other side of the house, so no. However, something like retpoline only affects performance if you have a return. No return, no retpoline. So structuring your code to have fewer returns would be one way to optimize (today) for performance in the presence of Spectre fixes.
This. As I've previously mentioned, I believe that the realization of these attacks means that the market will bifurcate. If you can trust the users to play nice, you don't need these mitigations and existing architectures perform much, much better than what is coming.
But for now, a lot of HPC is probably in environments where there has historically been a lot of trust that might not be warranted. I view this work as a step towards getting this issue in front of the right people.
"We will get there when we get there!"
As I said before, I expect two years or so remain before we see commercial consumer product. Which will be a significant step backward from a price/performance/power standpoint.
Biting the hand that feeds IT © 1998–2018