Ready for another fright? Spectre flaws in today's computer chips can be exploited to hide, run stealthy malware

Spectre – the security vulnerabilities in modern CPUs' speculative execution engines that can be exploited to steal sensitive data – just won't quietly die in the IT world. Its unwelcome persistence isn't merely a consequence of the long lead time required to implement mitigations in chip architecture; it's also sustained by …

  1. This post has been deleted by its author

  2. Anonymous Coward
    Anonymous Coward

    "Also I got rid of most if not all of my vulnerable systems shortly after the SPECTRE news broke"

    And with which ones did you replace them?

    But given what else you wrote, it looks like you didn't understand at all what people are talking about.

  3. _LC_
    Boffin

    How would it go away?

    “…just won't quietly die in the IT world”

    The hardware is bogus. Not only do you get to keep your expensive junk, which they sold you under false pretences (MMU, multi-user capabilities, VM extensions, etc.), but you can buy new expensive hardware that is just as faulty as the kit you already have. And so the faults stick around - surprise!

  4. ElReg!comments!Pierre

    Too many cores

    This happens because we have idle cores sitting around doing nothing. If we made faster cores instead of just throwing more of them at workloads that can't use them, we wouldn't need speculative execution and thus no Spectre. I wonder if IBM would want to revive the Power philosophy.

    1. Charles 9

      Re: Too many cores

      Do you recall Intel's NetBurst architecture? The heat issues and all the rest that forced Intel to backtrack and adopt the Core line instead?

      1. bombastic bob Silver badge
        Boffin

        Re: Too many cores

        not just heat, but the laws of physics, distance being one of them. The physical distance of the wiring between one part of the CPU and another, or between the CPU+socket and an associated bus [like memory], limits how fast it can possibly go. At ~3GHz that distance is (for all practical purposes) less than 1 inch. Keep in mind you need time to send a signal out and get something back, so you double the distance, then factor in settling and response logic times and whatnot, and there ya go. If you're lucky you might get away with a longer distance. But the wavelength of a 3GHz signal is about 10cm; over that distance an entire clock cycle will have passed before a signal gets from one end of the wire to the other. So the best practical signal length is about 1/4 of that, accounting for logic time on each end plus some settling time for a pulsed signal. That applies to anything running at 3GHz, and higher frequencies make it even SHORTER.
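
        For anyone who wants to check those figures, a quick back-of-the-envelope calculation (a minimal sketch in Python; it assumes signals travelling at the vacuum speed of light, and real on-chip signals are slower still):

            c = 299_792_458.0   # speed of light in vacuum, metres per second
            f = 3e9             # clock frequency: 3 GHz

            cycle_time = 1.0 / f                 # one clock period, in seconds
            wavelength = c / f                   # distance light covers in one period
            practical_budget = wavelength / 4.0  # rough wire-length budget once logic and settling time are allowed for

            print(f"one cycle at 3 GHz: {cycle_time * 1e12:.0f} ps")
            print(f"light travels {wavelength * 100:.1f} cm per cycle")           # ~10 cm
            print(f"quarter-wavelength budget: {practical_budget * 100:.1f} cm")  # ~2.5 cm, roughly an inch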

        The current solution: have a wider bus, more cores, and more levels of cache. Make the cores predict branches, hyper-thread, go superscalar, and do other things to limit "logic time". Otherwise, Mr. Physics makes things impossible.

        Heat is also a factor if you shrink distances too far in pursuit of higher speeds: with less silicon to carry the heat away to a heat sink of any kind, you can end up with hotter localized hot spots, which create entropy and allow "other bad things" to happen, eventually damaging the CPU and rendering it useless... yeah, Mr. Physics again.

        Then if you reduce the voltage even further, you run into the limits of silicon [or germanium, or anything else for that matter] acting as logic gates: switching levels become less tolerant, settling times may get longer, and currents might have to be THAT much higher [making the drop in voltage less effective on overall power consumption].

        And "idle cores" are more likely the fault of programmers not writing multi-core algorithms, Windows background processes notwithstanding [they're "scampering" instead of "running", i.e. unproductive motion, as far as I'm concerned, so I'd rather have idle cpu cores instead of "doing that"].

    2. Anonymous Coward
      Anonymous Coward

      Re: Too many cores

      Actually, one of the problems is that cores are much faster than RAM, and memory can't throw ops and data at them quickly enough. Hence the need for caches - if you increase core speed without increasing memory speed, you'll still need caches and speculative execution to try to avoid cache misses, which would simply become even more expensive.

      Even single-core CPUs used speculative execution to keep the CPU busy and avoid idle cycles while data were transferred to/from RAM.

      That said, CPU speed has plateaued because it's now technically very difficult to just keep increasing the clock - and there are workloads that do benefit from multiple CPUs and cores, especially, but not only, on servers.
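
      That gap between core and memory speed is easy to observe even from user space; a rough sketch (the array size and access pattern are arbitrary, and the absolute numbers will vary a lot from machine to machine):

          import random
          import time

          N = 10_000_000
          data = list(range(N))

          sequential = list(range(N))          # cache-friendly order
          shuffled = sequential[:]
          random.shuffle(shuffled)             # cache-hostile order

          def walk(order):
              # touch every element of data in the given order
              total = 0
              for i in order:
                  total += data[i]
              return total

          for name, order in (("sequential", sequential), ("random", shuffled)):
              t0 = time.perf_counter()
              walk(order)
              print(f"{name}: {time.perf_counter() - t0:.2f}s")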

    3. YARR

      Re: Too many cores

      "If we made faster cores instead of just throwing more of them at workloads that can't use them, we wouldn't need speculative execution and thus no Spectre"

      The reason for speculative execution is to make a single core execute code faster - faster meaning more instructions per second rather than raw clock speed. It's a trade-off with diminishing returns: throwing an order of magnitude more hardware / CPU die area / power consumption at a single program thread for a roughly linear speed-up. The alternative would be to have many more, slower, non-speculative cores, or a mixture of the two.

      Manufacturers may fix the known Meltdown / Spectre / L1TF variants in their next-generation CPUs, but speculative execution in general relies on shortcuts which could expose them to as-yet-undiscovered issues. They could be forever fixing new speculative-execution issues with each generation, which is an argument for including a non-speculative core in every CPU, with hardware memory encryption, that can be used to run critical secure code.

      1. Anonymous Coward
        Anonymous Coward

        Re: Too many cores

        The problems being: (1) what if the critical secure code must ALSO be fast (like a high-throughput encryption engine), and (2) what if there comes a way to glean stuff side-channel from the secure end via the non-secure end?

        1. Michael Wojcik Silver badge

          Re: Too many cores

          "there comes a way to glean stuff side-channel from the secure end via the non-secure end"

          Yes. What's the secure channel from the "non-secure" core to the "secure" core? What prevents an attacker from grabbing it from the non-secure core before it's handed off to the "secure" one? Or does sensitive data have to arrive at the "secure" core from "secure" storage over a "secure" channel, so in effect you have an entire second "secure" general-purpose computer (including persistent storage, transient storage, processing, and interfaces to the outside world) alongside your "non-secure" one?

          We haven't even had much success getting people to use TPMs and smartcards, so good luck with that.

  5. UKHobo

    significant news?

    Any chance of a new update from Intel and AMD about their microarchitecture hardware fixes?

    1. ThatOne Silver badge
      Devil

      Re: significant news?

      > update from Intel and AMD about their microarchitecture hardware fixes

      If they make the mistake of releasing a new Spectre-proof CPU, all older CPUs instantly become landfill, for who would want to buy them then? They would have to be sold way below cost.

      So I guess the order is to not do anything about it. It's cheaper anyway, and most importantly it prevents writing off investment and existing stock.

      1. Anonymous Coward
        Anonymous Coward

        "I guess the order is to not do anything about it"

        They have to do something, but for the reason you stated they won't tell you much in advance. The changes may not be simple, though.

        1. Sir Runcible Spoon

          Re: "I guess the order is to not do anything about it"

          As always it will also be a trade-off.

          Most people won't be bothered enough to trade in for a slower, but more secure, CPU.

        2. bazza Silver badge

          Re: "I guess the order is to not do anything about it"

          "They have to do something, but for the reason you stated they won't tell you much in advance. The changes may not be simple, though."

          The changes will be far from simple I fear. Making the microarchitecture accurately implement the published temporal behaviour of the machine architecture has got to be difficult.

          I like AMD's current trick: supporting memory encryption for processes / VMs in the CPU limits the ability of code to see other processes' / VMs' data. If that were universally adopted in OSes and hypervisors, I can't help but think we'd be better off than we are today.

          Also, I think we should get back to the days when we didn't run random code unknowingly downloaded from the Internet (JavaScript...). That's unpopular, I suspect. These days telling computer users and devs to practise safe hex is a bit like trying to persuade a room full of swingers to knock it off (er, you know what I mean).

  6. iron Silver badge

    Requires the machine to be compromised first and is exceedingly difficult to exploit. Almost entirely theoretical imo.

    1. _LC_
      Thumb Down

      Huh?

      "Requires the machine to be compromised first..."

      No, it doesn't. JavaScript will do fine, WebAssembly even better.

      Besides, this also affects hosting providers. They often run a multitude of installations - separate virtual environments - on one machine... Universities, schools, ...

      1. bombastic bob Silver badge
        Devil

        Re: Huh?

        Affects hosters - yes, a shared-host cloud server would be most vulnerable. The problem in that case is that multiple customers share the same CPU, so that meets one condition: the code runs on the same CPU. It may even be the same core of a multi-core system that's being shared by a particular VM. And so on.

        As I recall, one of the biggest problems with Spectre is the theoretical ability to pass through the host/VM boundary.

  7. MacroRodent

    I wonder if ghosts

    ... really are speculatively executed life?

  8. Anonymous Coward
    Anonymous Coward

    The only secure PC ever

    has no power.

    1. tekHedd

      I believe the complete recipe is

      A secure PC is both unpowered and in a locked closet.

      1. Anonymous Coward
        Anonymous Coward

        Re: I believe the complete recipe is

        Even then I don't know. I wouldn't put it past someone to be able to read an unpowered storage device with some kind of special radio or microwave device, even if the enclosure is metal.

  9. John Smith 19 Gold badge
    Unhappy

    AFAIK Amdahl's law is still in effect.

    The speed-up of a task is limited by the fraction of it that can actually be sped up in parallel.

    Which is why IRL most tasks crap out at about 10 cores.
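
    To put rough numbers on that (a minimal sketch; the 90% parallel fraction is an assumption for illustration, not a figure from this thread):

        def amdahl_speedup(parallel_fraction, cores):
            # Amdahl's law: the serial part stays serial, the parallel part divides across cores
            return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

        for cores in (2, 4, 10, 100, 1000):
            print(cores, round(amdahl_speedup(0.9, cores), 2))
        # with 90% of the work parallelisable the speed-up can never exceed 10x,
        # no matter how many cores you add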

    An interesting option would be to build much simpler cores as very primitive "Cellular Automaton" cores. It's been known since the early 70s that you can map any instruction set onto a large enough grid of automata, each connected to its nearest NEWS neighbours and using its own previous state as one of the inputs - IOW a 32-entry look-up table per cell.
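
    A minimal sketch of the sort of cell update being described - each cell's next state is a pure look-up keyed by its own previous state plus its four NEWS neighbours, i.e. a 32-entry table (the grid size and the rule values here are arbitrary):

        import random

        SIZE = 16
        # 32-entry rule: the key is 5 bits (self, N, E, W, S), the value is the cell's next state
        rule = [random.randint(0, 1) for _ in range(32)]
        grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]

        def step(grid):
            nxt = [[0] * SIZE for _ in range(SIZE)]
            for y in range(SIZE):
                for x in range(SIZE):
                    n = grid[(y - 1) % SIZE][x]
                    e = grid[y][(x + 1) % SIZE]
                    w = grid[y][(x - 1) % SIZE]
                    s = grid[(y + 1) % SIZE][x]
                    key = (grid[y][x] << 4) | (n << 3) | (e << 2) | (w << 1) | s
                    nxt[y][x] = rule[key]
            return nxt

        for _ in range(10):   # run a few generations
            grid = step(grid)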

    1. Time Waster

      Re: AFAIK Amdahl's law is still in effect.

      Please tell me this cellular architecture you’re describing uses Befunge as its instruction set!

      1. Michael Wojcik Silver badge

        Re: AFAIK Amdahl's law is still in effect.

        Don't be ridiculous. You program it in Conway's Life.
