Can you clarify?
Can you clarify what you mean by all out-of-order execution Intel processors?
I haven't heard that terminology before. Are we talking i3/i5/i7 processors, or just older processors?
In computer engineering, out-of-order execution (or more formally dynamic execution) is a paradigm used in most high-performance microprocessors to make use of instruction cycles that would otherwise be wasted by a certain type of costly delay.
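To make the definition concrete, here is a toy Python sketch (my own illustration, not anything from the article) of why out-of-order execution recovers cycles lost to a costly delay such as a cache miss: an instruction only has to wait for its operands, so independent work overlaps the stalled load.

```python
# Toy model of in-order vs out-of-order execution. Illustration only;
# real CPUs (issue width, reorder buffers, renaming) are far more complex.
# Each instruction: (mnemonic, latency in cycles, indices of instructions
# it depends on).
program = [
    ("load r1, [mem]", 100, []),   # cache miss: long latency
    ("add  r2, r1, 1", 1, [0]),    # depends on the load
    ("mul  r3, r4, r5", 3, []),    # independent work
    ("sub  r6, r4, r5", 1, []),    # independent work
]

def in_order_cycles(prog):
    """Blocking pipeline: each instruction waits for the previous
    one to finish, so the miss stalls everything behind it."""
    t = 0
    for _, latency, _ in prog:
        t += latency
    return t

def out_of_order_cycles(prog):
    """Dataflow limit: an instruction starts as soon as its operands
    are ready, so the independent mul/sub run during the miss."""
    finish = []
    for _, latency, deps in prog:
        start = max((finish[d] for d in deps), default=0)
        finish.append(start + latency)
    return max(finish)

print(in_order_cycles(program))      # 105 cycles
print(out_of_order_cycles(program))  # 101 cycles
```

The latency numbers are made up; the point is only that the 100-cycle miss no longer serialises the whole stream. It is exactly this eagerness to run ahead of a slow load that Meltdown and Spectre abuse.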
In the 1990s, out-of-order execution became more common, and was featured in the IBM/Motorola PowerPC 601 (1993), Fujitsu/HAL SPARC64 (1995), Intel Pentium Pro (1995), MIPS R10000 (1996), HP PA-8000 (1996), AMD K5 (1996) and DEC Alpha 21264 (1998). Notable exceptions to this trend include the Sun UltraSPARC, HP/Intel Itanium, Transmeta Crusoe, Intel Atom until the Silvermont architecture, and the IBM POWER6.
The Intel 'Core' architecture (i3s, i5s, i7s, etc.) is basically a derivative of the Pentium Pro which, as per the referenced Wikipedia page, introduced out-of-order execution in 1995.
> Notable exceptions to this trend include the Sun UltraSPARC
Dynamic branch prediction, instruction prefetch+decode and speculative execution were first introduced in the UltraSPARC-IIi.
These have been grouped into two logo'd and branded vulnerabilities: Meltdown (Variants 1 and 2), and Spectre (Variant 3).
Other way around, based on the preceding CVE list, it should be "Spectre (Variants 1 and 2), and Meltdown (Variant 3)."
Can't use the corrections link when I don't have an email client installed...
Also grouping two variants under one name allows Intel PR to work their magic and claim others are affected by the same thing too.
Well, some AMD CPUs are affected in a non-standard kernel configuration, but the fix for that variant doesn't slow down kernel system calls as much.
If the extraction rate is a function of RAM capacity, then there must be a benefit in increasing RAM, just as bit lengths are increased to improve resistance to brute force in security functions.
Cloud vendors and virtualisation providers stack machines high with RAM to get better consolidation ratios, so does it follow that they are better protected?
Given that large amounts of RAM are used to cram in many virtual machines, I'd say they're not "better protected"; in fact, quite the opposite. You'd have a single physical attack surface containing many machines which can be compromised, which in turn represent many more virtual attack vectors. It might take you longer to dump the physical host's entire memory, but you'd get access to many more VMs for your increased effort. Also consider that one VM owned by one customer could potentially dump out memory of another customer's machine that just happens to be running on the same physical host.
Think you missed the point I was trying to make. The volume of data is higher, therefore it will take more time to get anything useful out, hence slowing down the attack. Sifting the useful bits from the non-useful bits takes more time again, and who's to say the couple of bytes you got from VM1 and the couple from VM27 are any good without the rest that has not yet been recovered.
I accept that it doesn't fix the problem, but it would buy a lot of time.
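A back-of-envelope sketch of the "buying time" argument above. The ~500 KB/s figure is roughly the leak rate reported for the Meltdown proof-of-concept under favourable conditions; treat both the rate and the RAM sizes as illustrative assumptions, not measurements.

```python
# How long would a full memory dump take at a fixed extraction rate?
# Assumed rate: ~500 KB/s, in the ballpark reported for the Meltdown PoC.
RATE_KB_S = 500

def hours_to_dump(ram_gb, rate_kb_s=RATE_KB_S):
    """Hours needed to read ram_gb gigabytes at rate_kb_s kilobytes/sec."""
    kb = ram_gb * 1024 * 1024          # GB -> KB
    return kb / rate_kb_s / 3600       # seconds -> hours

print(round(hours_to_dump(16), 1))    # 16 GB desktop: ~9.3 hours
print(round(hours_to_dump(1024), 1))  # 1 TB virtualisation host: ~596.5 hours
```

So yes, a full dump of a fat host takes weeks rather than hours. The counterpoint is that an attacker rarely needs a full dump: reading a few targeted kilobytes (key material, password hashes) at a known address is just as fast regardless of how much RAM the box has.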
And just before Christmas, who sold most of their stock in Intel? Intel's CEO.
It was noted in another thread that executives have to give months of notice before trading their own shares, so this is probably innocent. On the other hand, the article indicates that the bug was reported last summer. I don't know how much notice is actually required, but it is possible that there are legitimate questions to answer.
However, whilst the impact of this bug is obvious to me, it may not be obvious to a CEO. If I went to *my* boss and said there is a flaw in almost every product we've produced in the last 20 years which is financially quantifiable (at least for cloud users, the impact of this bug *can* be measured in dollars) and is by design so we can be sued to pieces ... he might not believe me.
That usually depends on who you are, what position you hold in the company, and of course how pointy-haired the boss is.
Anyway, bosses usually listen when they hear words like "share price fall", "legal issues", "recall and replacements", etc., even when they can't understand the technical details.
Don't get too complacent.
From reading some comments and posts, both here on El Reg and elsewhere, it seems to me as if a blunderbuss approach to fixing these snafus is being contemplated.
Even though AMD have said that their CPUs are only minimally affected, from what I have read all CPUs will be targeted by the patches whether they need them or not, so AMD and ARM will be slowed down as well as Intel stuff.
Now it may well be that I have got hold of the wrong end of the stick, and I hope I have, but if true then a lot of collateral damage will be done and we will all suffer from this mess.
That's weird, I was under the distinct impression of having read about AMD submitting a patch explicitly to _prevent_ the "fix" activating on its processors. Granted, there's a bit too much confusion going around on what does what / affects precisely what / implies precisely what at the moment.
You're both asking the questions I'm interested in!!!
From what was in the article it seemed as if the researchers were going out of their way to make it work on AMD, and even when they could prove it possible it wasn't easy.
*Disclaimer: I am a bit of an AMD fanboi; not so much that I assume AMD isn't affected by this, just hoping it's not.
"Your both asking the questions I'm interested in !!!"
PTI (and its associated performance impact) is disabled on AMD CPUs, in the Linux kernel fixes at least (can't speak for other affected OSes).
"from what was in the article it seemed as if the researcher's were going out of their way to make it work on AMD"
Rather the opposite, at least so far as Google's team is concerned: they state in their post "Our research was relatively Haswell-centric so far. It would be interesting to see details e.g. on how the branch prediction of other modern processors works and how well it can be attacked." They did test their PoC exploits against AMD CPUs, and state how badly they are affected by each one, but they appear to have focused on Haswell's design in actually *developing* the attacks.
Seems like we're sleepwalking into the greatest clusterfuck in tech history. Smart devices everywhere but no actual smarts. Is there something in the water / air lowering IQ? Speaking of air, mine's the PC getting air-gapped.
"A mega-gaffe by the semiconductor industry. As they souped up their CPUs to race them against each other, they left behind one thing: security."
It's the curse of the presentation layer people.
If it looks shiny, ship it. No matter whether it's fit for purpose, no matter whether it's got serious design flaws, which will inevitably come back to bite the purchasers and users in the backside, just ship it. And if anyone dares question the dominance of shiny over well-engineered, the heretics are defined as "not a team player".
Been that way for at least a couple of decades in quite a few "leading tech companies" and industry sectors. Companies and people that cared about decent engineering have largely vanished from the business.
Shiny sells, Marketing and Finance don't care about what might happen a few years down the line as much as the bottom line today. Shareholders don't care about the product as long as they get their dividend. Management don't care about customers other than as a source of income. Customer Support is seen as a necessary evil that gets the bare minimum of funding to put a layer of separation between the people making decisions and the customers who enjoy the "benefits" of those decisions.
This is obviously an exaggerated description and not representative of many companies in the Real World but it does, unfortunately, seem to bear an annoying resemblance to some of them, from IT suppliers to retail businesses, vehicle manufacturers and holiday companies...
Perhaps Facebook and smartphones are lowering IQ?
Also, "natural selection has not stopped": "genetic contributions to intelligence and educational achievement are currently disfavoured by natural selection. In evolutionary terms, it seems, humans are now brainy enough" (https://www.economist.com/news/science-and-technology/21732803-it-does-however-no-longer-seem-favour-braininess-data-half-million)
But it doesn't matter, because Artificial Intelligence will save us!
Biting the hand that feeds IT © 1998–2019