and?
How many times do people need to rediscover that when there's a serious issue, cooperation works best for everybody?
Next time you leave things to the last minute, remember this well. Despite having known about the Meltdown and Spectre security vulnerabilities for roughly six months, Intel and other chip giants still hadn't warned the US government's cybersecurity nerve-center by the time The Register blew the lid off the design flaws. …
Co-operation is good - usually, and when dealing with similarly-minded co-operative people.
But... not telling the US Government about known flaws is surely a sensible security precaution?
If they told the government, then within hours they'd be exploiting it themselves, for who knows what nefarious purposes! So definitely a way to protect their customers until a fix has been sorted.
Sometimes co-operation is not a sensible approach.
> "But...Not telling the US Government about known flaws is surely a sensible security precaution?"
That's debatable, and it stands in contrast to this FTA:
"An audience member introduced himself as a Lenovo staffer who was briefed of Meltdown and Spectre ahead of the planned disclosure date, and he denied the Chinese government had been made aware of the issue in advance. By his reckoning, only a couple of dozen people at Lenovo knew about the issue, and all were based in the US, apart from one developer in Japan."
So Lenovo did know, and yet we are to assume they too withheld that information from the Chinese government? I don't know. If I had to bet, I'd go the opposite way.
If they told the government, then within hours they'd be exploiting it themselves, for who knows what nefarious purposes!
Speaking from ignorance here. But I'd've thought that, as with any big organisation, there's both good and bad. Not everyone in the US government would have a clue what you were talking about, let alone exploit it.
I'm sure there's someone they could've told who would just have filed it.
Both. Going "Full John Titor" here, but someone *somewhere* had to know in order to find the vuln in the first place.
I can actually trace my own experience back to first getting a dual-core AMD CPU and noticing that some games mysteriously stopped working and crashed the entire system, needing a "Dual Core Optimizer" to "fix" it. Except it didn't, and in many cases I had to run games on a single core to get them working at all.
You got new hardware, and some videogames crashed?
That's not a conspiracy, that's Tuesday.
And if it is a conspiracy it's the one where Intel rigged their compilers to produce inefficient code when asked to run on non-intel CPUs...
https://www.theregister.co.uk/2005/07/12/amd_vs_intel_code/
...and nothing to do with Spectre, SMERSH or the Decepticons.
You can expect Intel to optimise its compilers' output for the idiosyncrasies of its own CPU architectures, but not to make sure the code runs equally well on a competitor's product.
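The behaviour alleged in that 2005 story is easy to sketch. Purely as a toy illustration (this is not Intel's actual dispatcher, and the function and path names are invented), a runtime check on the CPU vendor string can route non-Intel parts onto a slower generic path:

```python
# Toy model of vendor-string CPU dispatch (names invented for illustration).
# A real dispatcher would query CPUID; here the vendor string is a parameter.
def choose_codepath(cpu_vendor: str) -> str:
    """Pick a code path the way a vendor-checking dispatcher might."""
    if cpu_vendor == "GenuineIntel":
        return "optimised-SSE2"    # vendor check passes: fast path
    return "generic-baseline"      # anyone else: conservative fallback

print(choose_codepath("GenuineIntel"))   # optimised-SSE2
print(choose_codepath("AuthenticAMD"))   # generic-baseline
```

The point of contention was exactly this: keying on the vendor string rather than on a feature flag means an AMD chip that supports the same instruction set still gets the slow path.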
Oh? Why not ... whenever achieved is it the product overtaken by Intel in a Leading Monopoly Position with Accesses to Future Codes for Immaculate Source Perfecting Presentations. And/But Once you Know that, have you also automatically an ACTive Share Stake in Leading AIMonopoly Position Programs Populating Virtualised Project Areas/Remote Space Places/Live Operational Virtual Environments.
And if you are having difficulties with Current to Future Information Processing, Use the Advantage of the Experience when IT is Successfully Simply Taken for Granted ..... with Source Intelligence always Approved to Be Proved True and Correct, for Everything Already is All Ready and a'Waiting Current Exposure in Multi Media Mogul Presentations.
I've been wondering recently if the people who complain about "clickbait headlines", do so because they go for the bait.
It's generally pretty obvious just from reading the link that something is clickbait, so I just don't click on it. It seems pretty simple to me, but some people get so het up about it - is that because they're embarrassed about being caught out?
"Had they known, CERT could have advised people that patches were available but instead initially recommended those affected should replace their processors."
Sorry, but is that serious/genuine advice? Even if CERT hadn't known it's not a trivial matter to just replace processors in Christ knows how many millions of machines, especially when some of said machines may require a specific processor for the tasks in hand anyway.
We're supposed to think these people are competent, and when they come out with total bullshit advice like that, well, it's no wonder people aren't always taking them seriously, is it?
I couldn't find some wiper blades for my car in Halfords at the weekend so maybe I'll replace my car later tonight. Piss off.
Sorry, Andy, but you appear to be new around here. I spent a decade doing microprocessor validation at IBM & AMD. What I am about to say has been corroborated by others with similar levels of experience:
There is NO software/firmware fix for this class of vulnerability on current hardware without an outrageous performance loss.
What you can do is make it much harder to exploit. You might even be able to eliminate specific exploits. (Like Meltdown.) But you cannot close them all.
The original CERT post was correct. It was also impossible to implement. The update reduced upheaval in the industry. Classic move by government.
I estimate that we have two years before we see consumer hardware that can be secured against this class of attack.
Well, if the flaw is firmly baked into the hardware - into the speculative execution of the microprocessor itself - then the only way to remove the flaw is to remove the processor and replace it, or replace the machine that contains the processor. This is obviously inconvenient, but it would be the only way to stop the flaw properly. Or run a really, really good anti-virus - but that's not a 100% answer.
It's like if your equipment will all stop working at the end of, oh, the year 2000 - in that case, you simply have to plan to scrap it then, or before then. And sue the supplier, of course.
The alternative was a lot of work.
Actually there were discussions on certain channels about physically scrapping the chips if the bug(s) couldn't be fixed in a similar way to Intel fixing FDIV.
Because it actually affects real-life hardware, and potentially the same hardware is in life-or-death situations such as hospitals (cough Therac-25 /cough), it would make sense to inform any NGOs/GOs in this position first, even putting it down as a "maintenance upgrade" or "preventative maintenance".
Actually, yes. These teams would have been composed of senior members brought in on a strict need to know basis. These people don't talk, so unless one was already on the payroll, it would not happen.
As much as we don't trust the TLAs, there are 300 million people living here. Even for tech, there just aren't that many people on the payrolls.
... initially recommended those affected should replace their processors.
This is the kind of shit I hate, change them to what exactly? In many (most?) cases that means changing a shit load more than the processor because it's not like one can just drop in a D525 Atom or a Sparc processor into their server/laptop/phone.
... "We suspect this is part due to the wide rollout of mitigations, and part due to there being better bugs for hackers to abuse."
I suspect it's because there is no reliable tool for detecting cache side-channel attacks, so the attacks are flying below the radar. The techniques I've seen so far require calibration to avoid false positives, and it looks to me as though attackers could defeat those methods by reducing or disguising their activity to below the magic threshold that the detector is set to...
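That calibration problem can be shown with a toy classifier. All numbers below are invented for illustration - real detectors work from hardware performance counters, not a tidy list of latencies - but the shape of the failure is the same: whatever threshold you pick, an attacker who throttles activity can stay under it.

```python
# Toy sketch of threshold-based side-channel detection (invented numbers).
# Fast probes suggest cache hits; too many fast probes in a window -> alarm.
def detect(probe_latencies_ns, hit_threshold_ns=80, alarm_rate=0.5):
    """Flag a window as suspicious if the fraction of fast (cached)
    probes exceeds alarm_rate."""
    hits = sum(1 for t in probe_latencies_ns if t < hit_threshold_ns)
    return hits / len(probe_latencies_ns) > alarm_rate

# Aggressive attacker: most probes hit in cache -> detected.
print(detect([40, 42, 38, 200, 41, 39]))        # True
# Throttled attacker: stays below the alarm rate -> missed.
print(detect([40, 210, 220, 205, 41, 230]))     # False
```

Lower the threshold to catch the throttled attacker and ordinary cache noise starts tripping the alarm - which is exactly the false-positive calibration dance described above.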
It'll be interesting to see how the first detection in the wild happens and how many false positives they had to discount. :)
WTF, El Reg? We've been over this, and over this. I'm getting tired of having to type it.
1) The documentation for the processors specifically states that they are not approved for use with government information marked CONFIDENTIAL or higher. These chips have NEVER been sold as "unhackable" or even hack-resistant.
2) The documentation specifies (for hundreds of pages), "If the processor state looks like X, and you apply Y as input, the new processor state will look like Z". NOWHERE has this been violated.
Yes, this is a security vulnerability. But the chips were never designed to provide protection against subtle side-channel attacks - a fact which was known to everyone in the business for literally twenty years before this attack was found.
Yes, you broke the story. Good bit of real reporting work on that part. Now, after seven months, PLEASE get your facts straight!
"These chips have NEVER been sold as "unhackable" or even hack-resistant."
OTOH Intel does promote the "security" features of their chips very loudly and have done for quite a few years now. I don't think it's a facade either, Intel really have applied themselves to creating a highly sophisticated (ie: complex) hairball of security features.
IMO they need to rethink their approach to securing their platforms, work hard to cut the fat, simplify, make it *easy* to comprehensively validate that the systems are actually secure. I say this because it's clear that validating a contemporary x86 chip is *hard* if not impossible. In my view Intel (and AMD) have got a lot of bright folks working for them - many of whom I believe would favor prioritizing validation over checkbox features.
I *hope* Spectre & Meltdown do prod vendors to improve their products, but the x86 gene pool is very shallow - there is too little competition and too much inertia to overcome.
There is a huge difference between having security features and being secure. Meltdown being a straight-up awful case of security feature bypass.
X86 is an ugly beast of an architecture, but the install base is mind-numbingly large. Don't expect any migration away.
But... this is not an architectural weakness. The architecture makes almost no statements about the time required to execute an operation, and for a good reason: once the architecture is fixed, performance is entirely about changing the time it takes to perform operations.
What is required is a completely new microarchitectural design, where data-dependent execution times are considered as security breaches.
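The same principle is familiar one level up, in software. As a minimal sketch (a standard textbook technique, not anything from the article): a naive byte comparison exits on the first mismatch, so its running time leaks how many leading bytes matched, while a constant-time version touches every byte regardless of the data:

```python
# Data-dependent vs data-independent timing, illustrated with byte compares.
def naive_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:          # early exit: running time depends on the data
            return False
    return True

def ct_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y       # accumulate differences; no data-dependent branch
    return diff == 0
```

Treating the naive version as a bug is exactly the mindset shift being asked for - except applied to the microarchitecture itself, where the "early exit" is a cache fill or a speculated load. (In real Python code you would use `hmac.compare_digest` rather than rolling your own.)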
I expect two more years before we see these for consumers.
I am looking beyond the specifics of SPECTRE/MELTDOWN, they are just a few vulns out of many that have popped up on the Intel platform over decades.
The trend is that networks are increasingly hostile - even 20 years ago there was the infamous Ping of Death, which would take out an unpatched machine pretty much as soon as you dialled into your ISP. The environment has become more hostile, the threats more varied, the attacks more frequent, and the pace of change is accelerating rapidly (in my eyes); folks are already having to patch their software on a daily if not hourly basis just to keep the lights on.
Patching hardware is somewhat more problematic and inherently more costly at present, attack windows are in the order of years, and the remediation costs can easily top 10^8 USD for mid-sized deployments... I reckon the time between critical hardware vulns being exploited in the wild is going to continue to reduce - to the point where customers face a stark choice between their systems being thoroughly owned or bankrupting themselves patching & replacing hardware.
Adding more features inherently increases the size of the attack surface, and thus the frequency of customer-bankrupting exploits. My argument is that the pressure applied by exploits will work against complex behemoths like x86-64 and favour simpler designs.
It'll be interesting to see how it unfolds, happy for your mileage to vary. :)
I had a prof in grad school who got his tenure on the strength of an incremental improvement to a technology that was probably 50 years old by the time I got to his classroom. And he still had to tell us the story about how he invented it. Reg, you are going to need a new scoop soon, because this is starting to be your old-guy "telling the grandkids about the time you..." story.
Heh, good analogy.
But intriguingly, chips *based* on Intel/AMD designs (cough Eastern Europe /cough) might ironically have better security, because they emulated function rather than carbon-copying the chip.
Some of these are actually better in several ways than the original silicon but worse in others (eg power dissipation) because optimizing one parameter generally affects another.
I would actually say that, as a result of this, good honest competition is excellent because it forces companies to innovate - and just because we found a bug in Company A's product does not mean that Companies B, C or D are not affected indirectly.
A recent example, the bug found in a certain very popular router actually *increased* sales because though it scared people off the brand initially, it encouraged them to upgrade.
I actually did eventually replace it after waiting in vain for a fix, alas it was not to be. RIP DG834GV5