"Also I got rid of most if not all of my vulnerable systems shortly after the SPECTRE news broke"
And with which ones did you replace them?
But given what else you wrote, it looks like you didn't understand at all what people are talking about.
Spectre – the security vulnerabilities in modern CPUs' speculative execution engines that can be exploited to steal sensitive data – just won't quietly die in the IT world. Its unwelcome persistence isn't merely a consequence of the long lead time required to implement mitigations in chip architecture; it's also sustained by …
“…just won't quietly die in the IT world”
The hardware is bogus. Not only do you get to keep your expensive junk, which they sold you under false pretenses (MMU, multi-user system capabilities, VM add-ons, etc.), but you can acquire new expensive hardware that is just as faulty as the kit you already have. Thereby, the faults stick around - surprise!
This happens because we have idle cores sitting around doing nothing. If we made faster cores instead of just throwing more of them at workloads that can't use them, we wouldn't need speculative execution and thus, no Spectre. I wonder if IBM would want to revive the POWER philosophy.
Not just heat, but the laws of physics, distance being one of them. The physical distance of the wiring between one part of the CPU and another, or between the CPU+socket and the associated bus [like memory], limits how fast it can possibly go. At ~3 GHz, that distance is (for all practical purposes) less than 1 inch. Keep in mind you need time to send a signal out and get something back, so you double the distance, then factor in settling and response logic times and whatnot, and there ya go. If you're lucky you might get away with a longer distance. But the wavelength of a 3 GHz signal is about 10cm. At that distance, an entire clock cycle will have passed before a signal gets from the start to the end of the wire. So the best practical signal length is about 1/4 of that, accounting for logic time on each end, plus some settling time for a pulsed signal. That applies to anything running at 3 GHz. And at higher frequencies, of course, the limit is even SHORTER.
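The distance figures above check out on the back of an envelope. A minimal sketch, assuming signals propagate at the speed of light (on real copper traces they move at roughly 0.5-0.7c, so practical distances are shorter still):

```python
# Sanity-check the wiring-distance argument at a 3 GHz clock.
C = 299_792_458        # speed of light, m/s
FREQ = 3e9             # 3 GHz clock

wavelength_cm = C / FREQ * 100
print(f"Wavelength at 3 GHz: {wavelength_cm:.1f} cm")   # ~10 cm

# The ~1/4-wavelength rule of thumb from the comment above:
practical_cm = wavelength_cm / 4
print(f"Practical signal run (~1/4 wavelength): {practical_cm:.1f} cm")  # ~2.5 cm, about 1 inch
```

Which lands right on the "less than 1 inch" figure quoted above.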
The current solution: have a wider bus, more cores, and more levels of cache. Make the cores able to predict branches, hyper-thread, go superscalar, and do other things to limit "logic time". Otherwise, Mr. Physics makes things impossible.
Heat is also a factor if you reduce distances too much in pursuit of higher speeds: with less silicon to carry that heat to a heat sink of any kind, you could end up with hotter localized hot spots, which allow "other bad things" to happen, eventually damaging the CPU and rendering it useless... yeah, Mr. Physics again.
Then if you reduce voltage even more, you run into the limits of silicon-based [or germanium, or anything else for that matter] materials' ability to act like logic gates: switching logic levels becomes less tolerant, settling times may be longer, and currents might have to be THAT much higher [rendering the drop in voltage less effective on overall power consumption].
And "idle cores" are more likely the fault of programmers not writing multi-core algorithms, Windows background processes notwithstanding [they're "scampering" instead of "running", i.e. unproductive motion, as far as I'm concerned, so I'd rather have idle cpu cores instead of "doing that"].
Actually, one of the problems is that cores are much faster than RAM, and memory can't throw ops and data at them quickly enough. Hence the need for caches - if you increase the core speed without increasing the memory speed, you'll still need caches and speculative execution to try to avoid cache misses, which would simply become even more expensive.
Even single-core CPUs used speculative execution to keep the CPU busy and avoid idle cycles while data was transferred to/from RAM.
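The core-vs-RAM gap above is easy to put numbers on. A rough illustration, using assumed (hypothetical but typical) figures of a 3 GHz core and ~100 ns main-memory latency:

```python
# How many core cycles one cache miss costs, at assumed latencies.
CLOCK_HZ = 3e9
CYCLE_NS = 1e9 / CLOCK_HZ          # ~0.33 ns per cycle
DRAM_NS = 100                      # assumed main-memory access latency

stall_cycles = DRAM_NS / CYCLE_NS
print(f"One cache miss costs ~{stall_cycles:.0f} cycles")  # ~300 cycles

# Doubling core speed without touching memory only makes it worse:
stall_cycles_6ghz = DRAM_NS / (1e9 / 6e9)
print(f"At 6 GHz the same miss costs ~{stall_cycles_6ghz:.0f} cycles")  # ~600
```

Hundreds of wasted cycles per miss is exactly the gap that caches and speculation exist to paper over.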
That said, CPU speed has plateaued because it's now technically very difficult to just increase the clock - and there are workloads that do benefit from multiple CPUs and cores, especially on servers, but not only there.
If we made faster cores instead of just throwing more of them at workloads that can't use them, we wouldn't need speculative executions and thus, no spectre
The reason for speculative execution is to make a single core execute code faster. Faster meaning more IPS rather than raw clock speed. It's a trade-off with diminishing returns: throwing an order of magnitude more hardware / CPU die area / power consumption at a single program thread for only a linear speedup. The alternative would be to have many more slower non-speculative cores, or a mixture of the two.
Manufacturers may fix known Meltdown / Spectre / L1TF variants in their next generation CPUs, but speculative execution in general requires shortcuts which could expose them to as-yet-undiscovered issues. They could be forever fixing new speculative execution issues with each generation, which is an argument for including a non-speculative core in every CPU with hardware memory encryption that can be used to run critical secure code.
there comes a way to glean stuff side-channel from the secure end via the non-secure end
Yes. What's the secure channel from the "non-secure" core to the "secure" core? What prevents an attacker from grabbing it from the non-secure core before it's handed off to the "secure" one? Or does sensitive data have to arrive at the "secure" core from "secure" storage over a "secure" channel, so in effect you have an entire second "secure" general-purpose computer (including persistent storage, transient storage, processing, and interfaces to the outside world) alongside your "non-secure" one?
We haven't even had much success getting people to use TPMs and smartcards, so good luck with that.
> update from Intel and AMD about their microarchitecture hardware fixes
If they make the mistake of releasing a new Spectre-proof CPU, all older CPUs will instantly become landfill, for who will want to buy them then? They would have to sell them way below cost.
So I guess the order is to not do anything about it. It's cheaper anyway, and most importantly it prevents loss of investment and existing stock.
They have to do something, but for the reason you stated they won't tell you much in advance. The changes may not be simple, though.
The changes will be far from simple I fear. Making the microarchitecture accurately implement the published temporal behaviour of the machine architecture has got to be difficult.
I like AMD's current trick: supporting memory encryption for processes / VMs in the CPU limits the ability of code to see other processes' / VMs' data. If that were to become universally adopted in OSes and hypervisors, I can't help but think we'd be better off than we are today.
Also I think we should get back to the days when we didn't run random code unknowingly downloaded from the Internet (Javascript...). That's unpopular, I suspect. These days telling computer users and devs to practise safe hex is a bit like trying to persuade a room full of swingers to knock it off (er, you know what I mean).
Affects hosters - yes, a shared-host cloud server would be most vulnerable. The problem in this case is that multiple customers share the same CPU. And so that meets one condition: that the code runs on the same CPU. It may even be the same core of a multi-core system that's being shared by a particular VM. And so on.
As I recall, one of the biggest problems with Spectre is the theoretical ability to pass through the host/VM boundary.
The speed-up of a task is limited by how much of it can actually be sped up (Amdahl's law).
Which is why IRL most tasks crap out at about 10 cores.
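The two comments above are describing Amdahl's law; a minimal sketch, assuming a task that is 90% parallelisable:

```python
# Amdahl's law: overall speed-up is capped by the serial fraction.
def amdahl_speedup(parallel_frac: float, cores: int) -> float:
    """Speed-up when parallel_frac of the work can use all cores."""
    return 1.0 / ((1 - parallel_frac) + parallel_frac / cores)

# With 90% of the work parallelisable:
for n in (2, 10, 100, 1000):
    print(f"{n:4d} cores -> {amdahl_speedup(0.9, n):.2f}x")
# Past ~10 cores the returns collapse toward the 10x ceiling (1 / 0.1).
```

At 10 cores you already get over half the theoretical maximum; a thousand cores still can't reach 10x, which is the "craps out at about 10 cores" effect.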
An interesting option would be to build much simpler cores as very primitive "Cellular Automaton" cores. It's been known since the early '70s that you could map any instruction set onto a large enough grid of automata, each connected to its nearest NEWS (north/east/west/south) neighbours and using the previous states as one of the inputs - IOW a 32-entry look-up table.
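The grid-of-automata idea above can be sketched in a few lines: each binary cell's next state depends on its own previous state plus its four NEWS neighbours, i.e. 5 bits indexing a 32-entry lookup table. The majority-vote rule here is just a placeholder to show the mechanism, not any particular instruction-set mapping:

```python
# One synchronous update of a toroidal grid of 2-state automata,
# driven entirely by a 32-entry lookup table (5 binary inputs).
RULE = [1 if bin(i).count("1") >= 3 else 0 for i in range(32)]  # majority rule

def step(grid):
    """Advance every cell one tick from the previous grid state."""
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            n = grid[(y - 1) % h][x]
            s = grid[(y + 1) % h][x]
            e = grid[y][(x + 1) % w]
            west = grid[y][(x - 1) % w]
            self_prev = grid[y][x]
            index = (n << 4) | (e << 3) | (west << 2) | (s << 1) | self_prev
            nxt[y][x] = RULE[index]
    return nxt

grid = [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
print(step(grid))  # the plus shape is a fixed point of the majority rule
```

Replacing RULE with a different 32-entry table changes the machine's behaviour completely, which is the sense in which a big enough grid of these can encode arbitrary logic.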