Data-spewing Spectre chip flaws can't be killed by software alone, Google boffins conclude

Google security researchers have analyzed the impact of the data-leaking Spectre vulnerabilities afflicting today's processor cores, and concluded software alone cannot prevent exploitation. The Chocolate Factory brainiacs – Ross McIlroy, Jaroslav Sevcik, Tobias Tebbi, Ben L. Titzer, Toon Verwaest – show that they can …

  1. Anonymous Coward
    Anonymous Coward

    The royal WEEE ???

    "Our models, our mental models, are wrong; we have been trading security for performance and complexity all along and didn’t know it," the researchers observe.

    Oh really? Who's "we"?

    Well there's a surprise. Not.

    Memory access protection existed for a reason. Also, speculative execution mustn't change globally visible state before it is confirmed that the instruction will actually run to completion, otherwise Bad Things may result. Retaining the necessary attributes throughout the speculative execution process was probably a bit much for some chip architects and designers to cope with, especially if they wanted their chips to look good, fast, and cheap.

    Never mind.

    1. Warm Braw

      Re: The royal WEEE ???

      It depends what you mean by globally visible state.

      All these performance enhancements preserve globally visible state in the sense that the state that is intended to be visible (registers, flags, memory contents) is indeed preserved as it should be. The things that are not preserved are things that are not actually defined - such as the execution times of certain operations.

      This is a Rumsfeld problem: since CPU architectures are specifically defined in terms of what is visible and known, anything else that is observable - and that potentially conveys information - is an unknown unknown...

    2. big_D Silver badge

      Re: The royal WEEE ???

      The computer industry, specifically the chip manufacturers / designers (AMD, ARM, Intel etc.).

    3. Version 1.0 Silver badge

      Re: The royal WEEE ???

      "Oh really? Who's "we"?"

      It's all of us, and it was you too in the days before you seriously looked at security and examined the methods of hijacking a processor. We've been putting performance over security for years - the issue is endemic - sure we're talking about processor cores today, but the same structural failing exists in the Internet, in Politics, in Society, our Military, Brexit, Remain, the EU, etc etc etc etc etc...

      We're living in the B Ark when we spend more money annually on pet grooming than on research into carbon-free energy.

    4. Claptrap314 Silver badge

      Re: The royal WEEE ???

      Yes, "we". As in "we who did the work represented by the paper that we are publishing".

    5. Anonymous Coward
      Anonymous Coward

      Cool Down

      "We" as in "computer and software industry".

  2. MonsieurTM

    As noted before, the increase in performance from using a super-scalar processor model with multilayered caches has become a flawed model. In modern times the goalposts changed to permit ready access to many computers. That access has been exploited by untrusted code, and that untrusted code is what has moved the goalposts on the super-scalar model. Intel has operated a de facto monopoly since the late 90s (ref. cases of shareholders vs. Dell). That monopoly highly constrained competition, so alternative, non-super-scalar architectures were not widely developed. So to some extent this issue has been exacerbated by Intel's monopolistic practices.

    1. big_D Silver badge

      And that explains that ARM, Sparc and other processor architectures are also affected, how exactly?

      It is an industry wide problem. It is something that dates back to the 90s, when processors weren't used for virtualization and weren't connected to the Internet. The processor designers had taken a line for designing performant multi-threading processors, then the industry decided virtualization was a thing and that connecting to the Internet was a thing.

      Instead of going back to basics (and temporarily crippling the performance of new processor generations), they built out the current architectures (PowerPC, ARM, Sparc, Intel, AMD etc.) to allow these new features, but without ensuring that such side channel attacks could be blocked.

      Intel does have the most problems, as they have Meltdown as well as nearly all Spectre variants, whereas the other chip designers / producers only have certain Spectre variants to deal with, but none of them come up smelling of roses.

      1. Anonymous Coward
        Anonymous Coward

        No

        ELBRUS and EPIC are not affected.

    2. Caver_Dave Silver badge
      Boffin

      Long known about

      I was on a Fhebas run VxWorks course in 2002 where half a day was hijacked by a very detailed discussion of the issues of MMU, caches and attack vectors. (All the people on the course were top mil/aero engineers from around Europe, not your average spanners.) Whilst not exactly the same attack vector as Spectre, the items discussed - and the solutions - were very well known within that community.

      It's one of the reasons why ARINC 653 has had proper Time and Space partitioning since Jan 1997! The processor of choice for most COTS products was PowerPC, not Intel or Sparc for a reason.

      Now with the integration of the old aircraft federated systems (separate boxes doing single jobs) into modern single box solutions with multiple cores and often multiple virtualised OS it all remains very well managed with solutions like VxWorks 653 or VxWorks 7 HVP. Wind River are now supporting this on ARM (with the demise of PowerPC) and Intel (under CAST32). (With Integrity, DEOS, LynxOS your support may vary.)

      With everything (still working) on or around Mars running VxWorks I won't be blaming Spectre for any malfunctions!

      1. Anonymous Coward
        Anonymous Coward

        Yeah, ARINC PIXIE DUST

        According to your logic, we just need Boeing and Airbus to sprinkle some ARINC pixie dust over all these speculative execution CPUs and we are safe?

        Almost all CPUs have the issue, with the exception of the real VLIWs such as ELBRUS and EPIC. That includes PowerPC.

        1. bazza Silver badge

          Re: Yeah, ARINC PIXIE DUST

          Almost all CPUs have the issue, with the exception of the real VLIWs such as ELBRUS and EPIC. That includes PowerPC.

          That’s not wholly correct. Only some ARMs and PowerPC variants are affected by SPECTRE.

          According to your logic, we just need Boeing and Airbus to sprinkle some ARINC pixie dust over all these speculative execution CPUs and we are safe?

          Well, if you take the pixie dust to be “don’t run software from some untrusted third party” (ie the system has been fully tested and evaluated, an ARINC requirement probably) then the answer is “Yes”. The real problem is the architecture of today’s web and the unwitting execution of random code from wherever. SPECTRE and MELTDOWN are just the latest flaws to hint that it’s a bad idea. They’re also the most spectacular.

          We have been here before, with just common or garden JavaScript engine flaws, etc. There’s no practical difference between flaws of this type that haven’t been disclosed to the developers and SPECTRE; both are unfixed and exploitable. The only difference is that we now know that will always be the case for SPECTRE.

        2. Anonymous Coward
          Anonymous Coward

          Re: Yeah, ARINC PIXIE DUST

          "According to [Caver_Dave?] logic, we just need Boeing and Airbus to sprinkle some ARINC pixie dust over all these speculative execution CPUs and we are safe ?"

          It's not just about ARINC pixie dust, or Airbus/Boeing. The kind of people who were on the fhebas/feabhas [1] course which Caver_Dave mentioned will understand that, even if many "IT" people, and even many volume-market embedded systems people, don't.

          Many of the same factors that are important in properly designed safety-critical realtime systems (DO178, DO254, etc) mean such systems are probably not going to be fast enough (or cheap enough) for the "IT" world's datacentre-to-desktop benchmarketeers, and they'll be argued as being too expensive for the mass-market embedded-systems beancounters. E.g. performance which is heavily dependent on extensive (and often not well understood) caching, speculative execution, out-of-order execution, superscalar superpipelining, and other buffering (and even discarding) of instructions, data, and metadata isn't ideal in an environment where performance must be remarkably predictable in advance, and misbehaviour is not welcome.

          Who could possibly have known that predictable timings, decent failure mode and effects analysis (y'know, maybe even "what if"s in general) and so on would eventually be important in the microprocessor market? Well some people saw it coming, at least in their fields of expertise, and researched and analysed and wrote about what they found.

          Interested readers might want to look at stuff from over a decade ago, by the Aerospace Vehicle Systems Institute. Their series of reports (AFE43?) [2] on factors to be considered in choosing and using a microprocessor in safety-critical realtime systems makes interesting reading a decade and a half later.

          [1] Google doesn't know how to get to feabhas from Caver_Dave's variant (fhebas?) and hence it took me a while to find the correct spelling again. So for the record:

          https://www.feabhas.com/

          [2] This from the executive summary of a 2006 AVSI report on choosing and using microprocessors (and in due course, SoCs) in safety-critical systems:

          "Progressively complex microprocessors originally developed for consumer, automotive, and industrial uses are being used in aviation applications. These microprocessor devices reduce the size, weight, and power requirements of a product and add capability by using advanced design and dense component packaging techniques. However, evolving microprocessor architectures include concepts, such as caching and pipelining, which can affect system predictability and safety. This is especially true as more complex microprocessors are being used, more complex hardware is integrated, and fully partitioned systems are being implemented. "

          DOT/FAA/AR-06/34

          Microprocessor Evaluations for Safety-Critical, Real-Time Applications: Authority for Expenditure No. 43 Phase 1 Report

          https://www.faa.gov/aircraft/air_cert/design_approvals/air_software/media/06-34_MicroprocessorEval.pdf

          (and elsewhere)

  3. _LC_
    Devil

    "While browsers have got their act together..."

    This, coming from Google. Why am I not surprised? For the normal user, the 'we execute everything from everywhere' browsers are actually the biggest problem. Nice try, though. :-P

    1. Anonymous Coward
      Anonymous Coward

      Re: "While browsers have got their act together..."

      JS was invented well before Chrome. Seems it is a security risk on most CPUs, though.

      1. bazza Silver badge

        Re: "While browsers have got their act together..."

        True, but it’s Google who have most heavily pushed client side JavaScript as the execution environment of choice for the modern world. A safer alternative, server side execution with client remote viewing, would cost Google a ton of cash to support. Basically it’s our electricity and hardware that runs things like Maps, Docs, etc, not Google’s.

        1. Anonymous Coward
          Anonymous Coward

          "Apps"

          Just run OpenOffice instead of GooOffice. Upload files to your file server using SSH. Don't run macros in your office package. Very secure, no Spectre stuff feasible.

          1. Michael Wojcik Silver badge

            Re: "Apps"

            Very secure

            A phrase that means "I am not a security researcher and do not understand fundamental security concepts such as threat models".

            no Spectre stuff feasible

            Oh, really? You've verified the code in all of the processes that ever run on your computers, so you know that none of them mount Spectre-class attacks? You do so every time any of them is upgraded, or you install a new one?

            Yes, it's unlikely that attackers are subverting the software supply chain specifically to insert Spectre exploits. But unlikely is not the same as infeasible. More importantly, Spectre-class attacks are not the only attack vector for general-purpose computing.

  4. Anonymous Coward
    Anonymous Coward

    Call me paranoid..

    .. but I really want to know why Google, of all companies, is working so hard on finding security problems.

    Just curious.

    1. Anonymous Coward
      Anonymous Coward

      They're going to take over the universe, and the rebellion is being pre-emptively cancelled. Imaginary ulterior motives aside, they may be evil now, but they're still a bunch of human beings making things that work, and part of that is making exploits harder - the same bucket of reasons they implement 2FA.

      Pretend that they're taking the responsibility that came with the power of all those brains. Does it feel like you're helping them deceive you? Blog about it and leave a link.

    2. iGNgnorr

      Re: Call me paranoid..

      Why would Google NOT be interested in security? Don't confuse privacy with security. Google need security to a) keep you using their data collection tools (Chrome, Gmail, etc.) and b) keep your data to themselves.

    3. Jon 37

      Re: Call me paranoid..

      Two reasons:

      1) If everyone gets worried about online security, and stops using the Internet, or even just reduces their use, that would be bad for Google - less people online means less revenue. So Google tries to improve online security. And part of doing that is finding what we're currently doing wrong so it can be fixed. So Google looks for security vulnerabilities and lets the vendor know so they can fix them, then announces the vulnerability publicly so that other people can learn from it.

      2) If Google was hacked, that would be very bad PR for Google, and might lead to people switching to other providers. So investigating the security of the hardware and software that Google uses is good for Google, because it can fix or replace it before it gets hacked.

      A rare case of "doing the right thing because it's good for the bottom line".

      1. Tomato Krill

        Re: Call me paranoid..

        Fewer

        1. quxinot

          Re: Call me paranoid..

          No, he asked to be called 'paranoid', not 'fewer'.

    4. Anonymous Coward
      Anonymous Coward

      Re: Call me paranoid..

      Their victims ("users") must have the feeling that all the Google-collected data is "secure".

      Plus Google actually are experts (as in "having a CS degree"), while some other corporations in this sphere are utter amateurs (with degrees in all sorts of unrelated things).

      1. Claptrap314 Silver badge

        Re: Call me paranoid..

        Meh. On average, Googlers are significantly above average, but you would never have had Stagefright, let alone Stagefright II if they were all "experts".

    5. Michael Wojcik Silver badge

      Re: Call me paranoid..

      Most large software companies have significant research programs, and most of those include IT security research. There's nothing odd about it.

  5. Tom 7

    Hang on a mo

    I would have thought it a trivial (tedious but trivial) problem to model the cpus to exhaustion and be able to highlight any register or memory leakage that could occur.

    They might not like the results they get, but a quick look at Electric seems to suggest you could do it with free chip design software, so I'd expect a commercial package to be up to it.

    1. _LC_
      Pint

      Re: Hang on a mo

      Huh?

    2. Anonymous Coward
      Anonymous Coward

      that's funny

      "Hang on a mo" is exactly what the CPU "says" to a thread that asks for some bits from the heap.

      IIUC the side-channel we're looking at (or one of them) is "what can we derive from how long that mo lasts" which is why they hoped that reducing timer resolution would help close that channel. And apparently it didn't. If the people who imagined this attack into existence were also on the teams that design the aforementioned "commercial chip design" SW, then maybe that could have helped. But nobody had ever thought of it.
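
      For the curious, here's a minimal sketch of that kind of timing probe – assuming x86, gcc builtins, and made-up names, not lifted from any real exploit – showing how a cached load is distinguishable from a flushed one:

      ```c
      /* Hypothetical sketch: time one load while the line is cached, then
       * again after flushing it. x86-specific (rdtscp, clflush). */
      #include <stdint.h>
      #include <stdio.h>
      #include <x86intrin.h>   /* __rdtscp, _mm_clflush, _mm_mfence */

      static uint64_t time_read(volatile uint8_t *p) {
          unsigned aux;
          _mm_mfence();                   /* order any earlier flush/stores */
          uint64_t t0 = __rdtscp(&aux);   /* serialising timestamp */
          (void)*p;                       /* the load being timed */
          return __rdtscp(&aux) - t0;
      }

      int main(void) {
          static uint8_t data[64];
          volatile uint8_t *p = &data[0];

          (void)*p;                       /* warm the line: now cached */
          printf("cached:  %llu cycles\n", (unsigned long long)time_read(p));

          _mm_clflush((const void *)p);   /* evict the line */
          printf("flushed: %llu cycles\n", (unsigned long long)time_read(p));
          return 0;
      }
      ```

      The two numbers typically differ by an order of magnitude, and that difference is the whole side channel.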

      1. Julz

        Re: that's funny

        People had thought of it and it has been in use for a very long time. Hell, even I wrote code to use a high-resolution timer to determine cache efficiencies, and to work out stall times as well as speculative execution path lengths and code sequences for both failed and executed instruction paths. Admittedly it was for mainframe chips, but the principle has been known about and been in use for a while, in testing at least. What happened is that the guys designing the Intel chips had other priorities. They also might have just forgotten about it. We always used to say that each iteration of the mainframe chip set reintroduced the problems of the last-but-one design, as the people designing the new chips didn't understand the design decisions of the previous set of engineers.

        1. Anonymous Coward
          Anonymous Coward

          Re: that's funny

          I mean "nobody had thought of it becoming a side channel." Of course I wasn't there and I can't know what everybody was thinking. But we all seem pretty surprised and upset about something, now.

      2. Peter Gathercole Silver badge

        Re: that's funny

        I'm a little uncertain about how timing memory access actually leaks data.

        It is a very strong indicator of whether the data value you've just read was pre-fetched, in one of the caches, or retrieved from memory. This information may be valuable in deciding whether the value you've just read could have been pre-fetched from a different context, and thus whether it could be data from another process, but that's about all it does.

        Pretty much all of the variants of Spectre that I've read about are to do with data from another address space being in either renamed registers from a branch not taken, or in some other cached data structure. The timing of the read will tell you this, but not the value itself.

        The flaws exist because the data in the caches may have been placed there while the processor was running at a higher privilege state, and possibly should not be visible after the processor has left that state. With Meltdown, this allowed a process to read data from its own address space which should have been protected, and which, unfortunately, was mapped to part of the kernel address space. Mapping the kernel memory into the address space of a user process was a bad idea, whatever protections you set, and should have been seen as such from the get-go (incidentally, PDP11, VAX and s/370 versions of UNIX never did this, and IBM learned this with AIX on the Power line sometime around AIX 4.1 in the mid-1990s).

        Although some of the Spectre variants suggest it may be possible to read data from another process's address space, or from a system call, most of them appear to be ways of reading protected data from the process's own address space.

        The attack documented in the article, of one thread reading data from another thread's memory, should not surprise anyone. Remember that when threads, or lightweight processes, were introduced, the whole concept was to allow more than one processor to work in a single process's address space. That is what it was designed for! (For reference, with early multiprocessor systems, each process could only consume up to one processor's worth of CPU resource, never more, even if the other processors were idle.) Allowing the system to schedule more than one processor per process, using threads as the schedulable entity, lifted this restriction.

        But it was implicit that each of the threads was running in a single address space, so in theory had access to each of the other threads' memory (which meant that several contention issues with shared data structures had to be worked around).

        Running client-side executable code from a server you don't control in a thread, without further protection, was insane in the first place, and now that the protections we did have are seen to be flawed, client-side code execution must be banned, or at least relegated to a separate process address space, damn the performance consequences.

        The basic rule should be that if it is not code you control, you should run it as at least a separate process, or even in a lower security ring (for suitably equipped processors). The enforcement of process address spaces by the MMU is well understood, and caches and other cached structures should be separated, or invalidated, across different process contexts.

        1. Anonymous Coward
          Anonymous Coward

          uncertain

          Me too. So here's the paper, I am reading it now (finally): https://spectreattack.com/spectre.pdf Seems to be a way of knowing whether some data was in a cache line and so whether the cache line contains the speculatively executed results someone wants. It looks like the timing itself isn't "a side channel" at all, it just contributes to some but not all method(s) of reading from the real side channel... the cache behaviour. That explains why screwing around with timer resolution isn't enough. So, forget that much of what I said yesterday. Also they point out that an HTML5 Web Worker that simply decrements a number in shared memory with another thread provides another good-enough timer for attacking from JavaScript. Neat.
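
          And the counter trick works outside the browser too. A rough sketch (pthreads, hypothetical names, build with -pthread) of a counting thread standing in for the high-resolution timer they tried to take away:

          ```c
          /* Sketch of the "counting thread" clock: one thread spins
           * incrementing a shared counter, another samples it around memory
           * accesses. Resolution depends on how fast the spinning core
           * ticks; real attacks calibrate a threshold first. */
          #include <pthread.h>
          #include <stdatomic.h>
          #include <stdio.h>
          #include <stdlib.h>

          static atomic_ulong ticks;
          static atomic_int running = 1;

          static void *clock_thread(void *arg) {
              (void)arg;
              while (atomic_load_explicit(&running, memory_order_relaxed))
                  atomic_fetch_add_explicit(&ticks, 1, memory_order_relaxed);
              return NULL;
          }

          int main(void) {
              pthread_t t;
              pthread_create(&t, NULL, clock_thread, NULL);

              static volatile char warm[64];
              char *cold = malloc(64 * 1024 * 1024);     /* untouched pages */

              (void)warm[0];                             /* ensure warm is cached */
              unsigned long t0 = atomic_load(&ticks);
              (void)warm[0];                             /* fast: cached access */
              unsigned long t1 = atomic_load(&ticks);
              volatile char c = cold[32 * 1024 * 1024];  /* slow: fault + miss */
              unsigned long t2 = atomic_load(&ticks);
              (void)c;

              printf("warm: %lu ticks, cold: %lu ticks\n", t1 - t0, t2 - t1);
              atomic_store(&running, 0);
              pthread_join(t, NULL);
              free(cold);
              return 0;
          }
          ```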

          1. Michael Wojcik Silver badge

            Re: uncertain

            No, timing is a side channel. It leaks information; that's the definition of a side channel. The attacker finds gadgets which will probe the inaccessible address space for a range of values, and then uses the timing as signal to find out which value was correct.

            Spectre-class attacks are actually quite straightforward, by the standards of this field.
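
            To make "gadget" concrete, this is the shape of the variant-1 bounds-check-bypass gadget from the Spectre paper – condensed, with illustrative names, and deliberately not a working exploit:

            ```c
            /* Shape of a Spectre variant-1 gadget, condensed from the
             * structure in the Spectre paper. Illustrative only: a real
             * exploit adds cache flushing, timing, and noise handling. */
            #include <stddef.h>
            #include <stdint.h>

            uint8_t array1[16];           /* victim data; secrets lie beyond it */
            size_t  array1_size = 16;
            uint8_t array2[256 * 512];    /* probe array visible to the attacker */

            void victim(size_t x) {
                if (x < array1_size) {            /* predicted taken after training */
                    /* Executes speculatively even for out-of-bounds x: the secret
                     * byte array1[x] selects which line of array2 becomes cached. */
                    volatile uint8_t tmp = array2[array1[x] * 512];
                    (void)tmp;
                }
            }

            int main(void) {
                for (size_t i = 0; i < 30; i++)   /* train the branch predictor */
                    victim(i % 16);
                victim(1000);                     /* malicious out-of-bounds index */
                /* Recovery (omitted): time reads of array2[i * 512] for
                 * i = 0..255; the one fast, cached line reveals array1[1000]. */
                return 0;
            }
            ```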

        2. Anonymous Coward
          Anonymous Coward

          Re: that's funny

          My understanding, and it may be wrong, is that only the Meltdown flaw lets code examine memory which the process concerned should not have access to. This is because, in what seems a really brain-dead optimisation, access rights to memory are not checked during speculative execution, and this means a speculative memory indirection via memory without read access affects cache contents. This is an Intel-specific flaw.

          The other problems, as far as I am aware, allow software within a process to read memory within that process's address space, defeating software sandboxes which attempt to prevent this. OK, this is a concern if you rely on such software sandboxes, but from my perspective it has always been clear that doing so is a major risk and bad practice.

          To be clear: any hardware design which allows software, through some subtle indirect mechanism, to read memory within the virtual address space of its own process but not outside that process, I consider to have no bug. I have never seen any promise, implicit or otherwise, that a processor design would prevent this. Processors provide a protection mechanism to isolate processes; if you choose not to use that mechanism then I don't believe you have any valid complaint if it turns out that things you believed isolated are not.

          1. Anonymous Coward
            Anonymous Coward

            You remind me of THIS GUY I KNOW. Now Google knows who I'm talking about because they read all our emails from 2011-2018. ;)

    3. John Smith 19 Gold badge
      WTF?

      "I would have thought it a trivial (tedious but trivial) problem to model the cpus to exhaustion"

      You really don't have any idea of the complexity inside a modern CPU do you?

      And you certainly don't know how the simulation complexity grows with every step you move from the initial state of that complexity.

      Or how many initial states a modern CPU can have (including the transients and deliberately faulted)

      Otherwise you wouldn't have made such a dumb statement.

  6. John Smith 19 Gold badge
    Unhappy

    "but the basic problem is that chip designers traded security for speed. "

    Correct.

    IOW the MMU should do what it was f**king designed to do properly.

    Keep the executing tasks separate.

    I'd say it looks like people used tricks developed to snoop data on smart-card-level embedded processors on data-center-size chips and found they worked pretty well.

    1. ChrisPVille

      Re: "but the basic problem is that chip designers traded security for speed. "

      I don't think Spectre is really the fault of the MMU. It's more that the caches are leaking program state via the side-effects of speculated instructions and other processes can infer that state with careful timing analysis.

      The article hints at the hardware solution to this problem, which could be something like tagging cache entries with a process ID to ensure they won't be accessible from another process. It might also be possible to hold pending cache entries from speculated loads separately and not commit them until the instruction graduates. Either way, it's not a simple software fix. The processors in question all violate their own programmer's model – that is, that speculation will have no visible side-effects.
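
      To illustrate why the software side doesn't scale: the deployed per-site mitigations amount to fencing every vulnerable branch by hand. A hedged sketch, using an x86 intrinsic and illustrative names:

      ```c
      /* Sketch of the per-branch software mitigation: a serialising fence
       * keeps speculation from running past the bounds check. Every such
       * load in every program needs the same treatment, which is why this
       * is a patch, not a fix. Illustrative names; x86-specific. */
      #include <stddef.h>
      #include <stdint.h>
      #include <x86intrin.h>   /* _mm_lfence */

      uint8_t table[16];
      size_t  table_size = 16;

      uint8_t checked_read(size_t x) {
          if (x >= table_size)
              return 0;
          _mm_lfence();        /* speculation barrier: no load issues until
                                  the branch above has actually resolved */
          return table[x];
      }
      ```

      Index masking (clamping the index with an arithmetic mask instead of a fence) is the cheaper variant of the same idea, and it shares the same weakness: someone has to find every such site.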

      You're right that Spectre does look a lot like the timing and information leak attacks used against embedded devices, and while Spectre targets most speculating CPUs, I hope people who use caches in general are now carefully considering what could go wrong from a security perspective.

      1. Anonymous Coward
        Anonymous Coward

        Back in the Real World

        ...everybody was salivating over benchmark results, benchmark results and more benchmark results. That was how CPUs were (and often ARE) being compared.

        One of several cases of Clockmania, a disease of the "developed" world.

    2. Anonymous Coward
      Anonymous Coward

      Re: "but the basic problem is that chip designers traded security for speed. "

      "IOW the MMU should do what it was f**king designed to do properly."

      As far as I am aware, Meltdown and Spectre aren't an MMU issue. The MMU is correctly separating process address spaces and marking access permissions in hardware.

      The problem, at least on x86, is that the TLB in the CPU doesn't enforce security as rigorously as it should (i.e. it is copying state rather than enforcing the existing state at a hardware level), and there is a delay between a memory protection fault being raised and the pipeline stopping execution, so instructions continue to execute speculatively past the fault.

      I had thought that better TLB/cache security, ensuring MMU page table security is enforced at a hardware level, would be sufficient to address Spectre (and I believe some CPU architectures may already have this in place, primarily as a method of improving virtualisation performance). The short answer is that it's not sufficient, as the speculative execution attacks will remain until a CPU fault stops and flushes the speculative execution pipeline.

      In terms of whether Spectre is a result of "cheating" to gain performance at the expense of security, I would disagree. CPU design has always tried to carefully balance performance with features and security, and tradeoffs have always been made. The timing attacks required for Spectre were considered theoretical for almost 20 years (I'm aware of them being considered for the DEC Alpha chips) before Meltdown (which was "cheating" security for performance) provided demonstrable attacks, and further analysis showed how to make the processors perform in a way that significantly increased the chances of attacks succeeding. The fact that Spectre potentially affects every high-performance CPU architecture indicates the level of risk designers across the industry believed existed.

  7. Down in the weeds
    Boffin

    Harvard Architecture, Anybody?

    Completely separate(d) Instruction memory and Data memory. Some (very careful) admittance to reading I memory for fixed constant values. Multiple independent D memory 'banks', each with explicit different interface connections for both address and data connections per-bank. Multiple internal independent D caches, one per bank. instruction set support for hardware-based copy-on-write (write multiple D banks from a single I thread). Just some thoughts ...

    1. Anonymous Coward
      Anonymous Coward

      Re: Harvard Architecture, Anybody?

      Just use Transputers - one for each process.

    2. Anonymous Coward
      Anonymous Coward

      Re: Harvard Architecture, Anybody?

      "Completely separate(d) Instruction memory and Data memory." This limits scaleability as suggested ("sorry, you need to install more instruction or data memory to run this application) and arguably already exists via the MMU giving hardware level protection outside of the CPU.

      "Some (very careful) admittance to reading I memory for fixed constant values." You mean as done in Power/x86/SPARC architectures? The separate instruction and data caches in modern processors would appear to achieve this.

      "Multiple independent D memory 'banks', each with explicit different interface connections for both address and data connections per-bank." So you would limit a general purpose CPU's performance to the number of these D memory banks that could be physically supported? Based on memory buses taking around 10% of CPU area and cache taking 20-30% on high performance processors, this design would likely max out at two D memory banks or a lot of much slower (narrower) memory channels with less cache, both options resulting in lower performance (i.e. think early ARM7 level performance with single channel memory and small caches - I'm 90% sure dual memory channels only cam with ARM8).

      "instruction set support for hardware-based copy-on-write (write multiple D banks from a single I thread). Just some thoughts ..." I'm not sure what you want to achieve from this, so it's hard to relate to current architectures. Is it cache synchronization on writes? If so, again there is significant work already done to ensure cache state remains synchronized in a multi-core environment.

      None of these appear to address the Spectre flaws around speculative execution and most of them significantly limit performance/scalability.

  8. Anonymous Coward
    Anonymous Coward

    Paranoia about the NSA.....

    Quote: "We now believe that speculative vulnerabilities on today’s hardware defeat all language-enforced confidentiality with no known comprehensive software mitigations..."

    Lots of folk believe that the NSA has weakened public encryption standards. Maybe they have a hand in chip design as well. Just saying!

    1. _LC_

      Re: Paranoia about the NSA.....

      Intel ME runs an entire system that can access everything WITHOUT YOUR control. The NSA (and others) have everything they ordered from Intel & co.

      I'm afraid that Spectre comes down to simple cheating. They made their processors faster. All they had to do was remove a little safety here and there. ;-)

      1. Michael Wojcik Silver badge

        Re: Paranoia about the NSA.....

        All they had to do was remove a little safety here and there

        This is an essentially meaningless gloss.

        The side channels exploited by Spectre-class attacks were not created by "remov[ing] a little safety". They were created by introducing complexity in the form of new mechanisms to improve performance, such as caches and TLBs. Speculative execution is simply a mechanism for recovering information leaked by those side channels, and it's not the only one; it's interesting primarily because it can be used by pure-software attacks, unlike e.g. EM emissions.

        Performance-enhancing complexity will nearly always involve discarding information at some point, and you cannot delete information without creating a side channel. (This fact, in another domain, is what gave us the famous "hairy black hole" problem.)

        People are desperate to see greed and foolishness behind Spectre, but it's largely a matter of physics and market forces.

    2. Anonymous Coward
      Anonymous Coward

      Re: Paranoia about the NSA.....

      No need for that. Clockmania explains all of the Spectre issues.

  9. fnusnu

    So which chips are now secure against spectre / meltdown?

    1. _LC_

      In-order architectures (without speculative execution), such as most embedded devices.

    2. Anonymous Coward
      Anonymous Coward

      Potato chips?

    3. Anonymous Coward
      Anonymous Coward

      Apparently RISC-V is not susceptible to Spectre.... because the first implementation was a non-speculative design.

      1. Michael Wojcik Silver badge

        Non-spec-ex RISC-V implementations are (by definition) not susceptible to Spectre-class attacks. If someone builds a spec-ex RISC-V CPU, it will be susceptible. There's nothing magic about RISC-V.

        Also, while non-spec-ex architectures are, again by definition, not susceptible to Spectre-class attacks, that doesn't mean they aren't susceptible to side-channel attacks in general. This is particularly a problem for portable devices, which have been shown many times to be vulnerable to e.g. EM-emission side-channel attacks. Those can easily be mounted using relatively inexpensive hardware concealed in typical venues - coffeeshops, conference rooms, etc.

    4. Anonymous Coward
      Anonymous Coward

      ELBRUS, EPIC and other real VLIW systems.

      1. Michael Wojcik Silver badge

        VLIW CPUs also have the advantage of steadily becoming more resistant to most other attacks, as they vanish from the market. Eventually they'll be entirely absent and thus entirely secure.

        Capability architectures offer all sorts of security benefits too. The disadvantage of not being able to actually get a capability CPU outweighs them. VLIW isn't quite there yet, but "switch to VLIW" is not particularly useful advice.

        And when it comes to non-spec-ex architectures, I can't see any good reason to pick VLIW. No one's shown a VLIW architecture that performs particularly well in practice, while there are e.g. non-spec-ex ARM CPUs with good price/power/performance metrics.

  10. Claptrap314 Silver badge

    Is this news?

    I, and a few others with similar levels of knowledge, have been painstakingly attempting to explain this, to the point of exhaustion, from day one. Our communal moniker aside, El Reg's community contains an amazing brain trust.

    To recap:

    - Rule #1 of business is that the customer is king.

    - General consumers don't understand security, and do not care unless they personally are inconvenienced.

    - - General consumers actively punish companies that provide security at the cost of their convenience.

    - - General consumers actively punish companies that provide more expensive solutions with no apparent benefit.

    The outcome of the above is that anyone selling into the general consumer market is either going to be like Intel (selling vulnerable product) or Blackberry (driven out of the market).

    From a technical standpoint, data leaking through the cache response times is core to the existence of a cache on the part. THIS DOES NOT DEPEND ON SPECULATIVE EXECUTION. Speculative execution permits rapid reading out of the data, but even without it, if I have access to a wall clock, I can tell if my data has been ejected from the cache or not. This is a data leak.
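
    A minimal sketch of that leak, assuming x86 and gcc builtins (hypothetical code – note there is no speculation anywhere in it):

    ```c
    /* The wall-clock leak with no speculative execution involved: time a
     * reload of MY OWN data; if someone else's memory activity evicted the
     * line, the reload is slow, telling me they touched conflicting
     * addresses. Hypothetical sketch, x86-specific. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <x86intrin.h>   /* __rdtscp */

    static uint64_t time_read(volatile uint8_t *p) {
        unsigned aux;
        uint64_t t0 = __rdtscp(&aux);
        (void)*p;
        return __rdtscp(&aux) - t0;
    }

    static uint8_t other[32 * 1024 * 1024];   /* stand-in for victim activity */

    int main(void) {
        static volatile uint8_t mine[64];

        (void)mine[0];                          /* prime: my line is cached */
        printf("before: %llu cycles\n", (unsigned long long)time_read(&mine[0]));

        memset(other, 1, sizeof other);         /* "victim" thrashes the caches */

        printf("after:  %llu cycles\n",         /* slower => my line was ejected */
               (unsigned long long)time_read(&mine[0]));
        return 0;
    }
    ```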

    This leakage, however, is not subject to attacker control. Various strategies by defensive applications or the OS can prevent an attacker from deriving usable information this way.

    Speculative execution, in and of itself, does not affect the situation. Speculative execution that bypasses memory protections, however, very much does.

    So, what was the situation in the nineties? Speculative execution with memory protection bypass provided consumers with a substantial speed improvement. Yes, we all knew that there was a theoretic risk of exploit. We tried (not me personally, the industry) AND FAILED to realize that exploit. So the designs were shipped. And for more than twenty years, there was no publicly known exploit.

    While I have strongly condemned Intel's response to the discovery, there is simply no honest way to condemn them for the decision that they made in the nineties to ship this design.

    I will also point out that I have been aggressively throwing shade on these software "fixes" since they have been coming out. Memory protection bypassing in the hardware is not something software can fix. I said this a year ago. Its truth is obvious to anyone that has played around at that level.

    Again, the potential fixes are as follows:

    1) Turn off all caching.

    2) Turn off all speculative fetching.

    3) Replicate ALL caching at ALL levels so that cache ejections due to speculative fetching are recovered. (I have become more pessimistic about this over time for various reasons--but it roughly doubles cache sizes & adds a lot of logic. It also is not clear that this would defend against an indexed load being speculatively fetched from an address space controlled by the attacker--and I do not believe that gadgets of this sort are avoidable.)

    4) Enforce memory protection during speculative execution.

    5) Ban untrusted code.

    Anyone who has done significant work designing or validating microprocessors understands just how bad options 1-4 are from a performance/watt standpoint. Which is why I've been talking about 5 for the last few months.

    Dedicated machines running only trusted applications can safely ignore Spectre-class attacks. This will give them a HUGE performance/watt bonus over Spectre-secure machines. The market is going to bifurcate over this--and we should rejoice, because once it does, there is a chance, however small, that x86 will finally get the boot from the consumer space.

    1. Anonymous Coward
      Anonymous Coward

      Re: Is this news?

      "data leaking through the cache response times is core to the existence of a cache on the part. "

      I may have misread something, but when I was involved in low level hardware and software stuff, some architectures and implementations and tools (not including x86 and related) used to have the concept of "non-cacheable" memory (edit: noncacheable address regions, maybe), and some used to have the concept that hi-res timers were *system* state (not process state) and were therefore not necessarily usefully accessible to non-privileged code.

      Put another way:

      If something secret is also noncached and noncacheable, all the way from hardware to application code (including support in compilers and similar tools) ie the access time is constant, does that change the picture?

      And/or

      If non-privileged code cannot see a system-wide hi-res timer but only see its own process's timers,

      does that change the picture?

      Both of those are real features really available for multiple years on non-x86 processors, even before Intel discovered SGX (and more recently, the world discovered SGX was fecked).

      There may be others with similar benefits.

      Where does AMD64 stand on this kind of thing?

      1. Claptrap314 Silver badge

        Re: Is this news?

        All MMIO space is necessarily non-cacheable. I'm pretty sure all MMUs support it. But the performance cost of turning off the cache (item 1 in my post) is simply too much to consider.

    2. Anonymous Coward
      Anonymous Coward

      "we all knew that there was a theoretic risk of exploit. We tried (not me personally, the industry) AND FAILED to realize that exploit."

      OK, kinda neatly negates what I was saying, but I have to ask... can you point me at some old related reading material that didn't only get dramatically reinterpreted in the last year?

      1. Claptrap314 Silver badge

        No. But I vaguely recall an article around 1997. Side channel attacks have been around & studied long before that, so when Intel announced the feature, it got quite a bit of attention from the academics.

    3. veti Silver badge

      Re: Is this news?

      I'm sure you're right about "banning untrusted code" being a far more effective and efficient solution than any other mitigation strategy.

      But I don't think you've thought through how it will work.

      As I see it, there are two ways it could pan out:

      1. Users demand the right to nominate code as "trusted" on their own recognisance, and manufacturers, grumbling, allow them to. The moment this rolls out to the mass market, we're right back to square one.

      OR

      2. Users demand said right, and manufacturers, still grumbling, deny them. Then new market entrants appear who will either allow them, or remove the restriction entirely, and their chips perform vastly better in all manner of performance benchmarks. Hello again, square one.

      It's important to recognise that the "users" in this scenario (which I am 100% certain would play out, one way or the other) are not necessarily being unreasonable. Many users just don't care that much about security, no matter that you may think they should, or would if they were as smart as you. They just want to get things done. That's why cloud shit is so popular, despite the dangers.

      The only time gatekeeping is an effective solution is if you're confident the gate you're minding is the only way into the garden. If there are other ways in, or if other ways in can be created, then it won't work.

      1. Claptrap314 Silver badge

        Re: Is this news?

        Ahh, but this is the US. With the US legal system.

        Corporate legal beagles are going to go ape about continuing to release consumer products with these vulnerabilities. Corporate legal beagles at the cloud providers triply so.

        As for the marketing problem, Spectre is a great name, don't you think?

        I am curious about the breakdown of the market by segment. What percentage is data center? Business? Personal? Gamer?

        I think that the general personal market is the only one that will blow this off--and I suspect that it's too small/margins too thin, to support a separate architecture.

  11. Charles 9

    IOW because it's a lot easier to BS around a wrong answer than a missed deadline, in a strict choice between doing it fast and doing it right, fast wins every time.

    1. Anonymous Coward
      Anonymous Coward

      No

      You just cannot use most computers to run untrusted code. JavaScript and the "Cloud" are insecure.

      IT industry BS should be questioned.

      Own brain should be used.

      1. Charles 9

        Re: No

        But what if you CAN'T TRUST your own brain? Plus, who establishes the necessary level of trust, especially if you're coming from a field where you have NO expertise?

        Put it this way. You have to place trust SOMEWHERE or you'll get nowhere. Some people can't trust their own governments, yet they MUST in order to function.

        1. Anonymous Coward
          Anonymous Coward

          IF?

          What do you mean, "if"? I already can't trust my own brain, and I do it anyway, because stuff sometimes gets done well enough. My brain is an asshole-- he did something shitty using my body before I, before my SELF woke up from that coma and took over and resumed gathering long-term memories.

          I know what you mean-- long ago I said that if you didn't create the system you're using, you implicitly trust the ones who did. So we do. We use this hardware and software and global network and social media giant, and we trust the ones who made them useful, or we can refuse, stop trusting. And I have! it's the reason I'm slowly ditching GMail. We can keep going, maybe even put up some walls, maybe even go Amish if it came to that. It sucks. But there are things that decent people keep making and I intend to keep using, too. So I signed up at sourcehut.org who don't rely on any JavaScript, how do you like that? We ain't goin' Amish just yet ;) Lead by example, or follow someone who's going where you would if you were.

          1. Charles 9

            Re: IF?

            How can you lead by example when all the roads that matter to you lead to the likes of Facebook? (Just try to consistently talk to someone in, say, Southeast Asia without using Messenger – everything else, including the mail and cell phones, is totally unreliable.) And yes, family matters to people of Southeast Asian descent, so I can't just say no (they won't take no for an answer).

            1. Anonymous Coward
              Anonymous Coward

              "when all the roads that matter to you lead to the likes of Facebook"

              then you stop and honestly examine what matters to you and why... if it's a real problem (and I definitely would call leading a life that firmly requires & permanently integrates Foobcake to be a Real Problem), then think about how to solve it. Someone else did exactly that recently, and I don't have high hopes for their proposed solution to the whole surveillance capitalism clusterf***-- reviving Gopherspace-- because of course society would just turn around and "improve" that until it "met their needs", inevitably poisoning that instead. It solves nothing, prevents nothing, only invites people to try to work in knee-deep mud, and only temporarily.

              Okay, Facebook is reliable, well there are lots of networks that are reliable and not trusted, and that's already been solved-- you have VPNs and secure tunnels and OTR and so on. So, new problem: you can't reasonably expect everyone in your personal network to start using something like OTR even if it could be integrated in the chat client (which you have to decide to trust anyway, and you already do, tsk tsk). You'll say, FB just works for people who can't even begin to think about their own privacy, or they just CBA, they're fine with whatever. Well, that will help while you re-evaluate how important it is to communicate with them. Apparently they don't care about YOUR privacy, either.

              "Beware of 'the real world'. A speaker's appeal to it is always an invitation not to challenge his tacit assumptions." --EWD

              1. Anonymous Coward
                Anonymous Coward

                For the ones that might care about privacy, you get to try to teach them-- yeah it's going to be hard and still worth doing. I just don't, because nobody seems to care, and nobody yet cares that I'm not on their friends list and if they start wanting me on their friends list then I will start being VERY UNLIKEABLE. Presumably FB already slurped SMSs from me out of their phones, anyway. So if I care this much, I should have thought of that before, you know, KNOWING ANYONE EVER. Ironically, I know they aren't slurping me out of my family's phones (since 2016) because nobody in my family even knows my number, because of reasons.

              2. Charles 9

                "Apparently they don't care about YOUR privacy, either."

                The applicable phrase is, "They don't care about privacy." The way they live reminds you of the medieval village with open windows, nosy neighbors, and basically no real expectation of privacy. Hiding something immediately turns all the snoops onto you, and they're VERY good at sniffing secrets out.

                So, basically, I'm stuck with it. I can do it the easy way or the hard way. There's no third option, no in between, and like thermodynamics, I can't leave the game, either. They won't take no for an answer.

                1. Anonymous Coward
                  Anonymous Coward

                  Of my mother's 7 kids, the 3 oldest (incl. me) have basically no contact with her or each other or the bulk of any family, mostly because of the absurdity of calling it a family. 3 years ago when I still had any contact with them, only the 2 youngest still had any contact at all with their father (the last of 3 husbands).

                  I know that makes it too easy for me to say things like I did and I can't so easily expect someone else to consider mine a valid approach. I still say you have to push back or else it is guaranteed that-- barring some Internet-wide upheaval-- nothing ever changes, and now you're among the billion accomplices.

                  It's funny that you reply today because just last night I watched a film that came out right after the WWW even became a thing, which goes a long way toward explaining why it focussed on TV/radio/newspapers. The major points stand.

                  1. Charles 9

                    "I know that makes it too easy for me to say things like I did and I can't so easily expect someone else to consider mine a valid approach."

                    Yes, it makes it easy to say something like that. You only have seven people to deal with. I have an entire clan which last I checked ran in the neighborhood of around 50 able-bodied people, some of whom ARE in the tech sector and know this stuff inside and out AND how ubiquitous it is there (to the point it's in DUMB phones over there).

                    Let's just say, a certain king named Canute springs to mind.

                    1. Anonymous Coward
                      Anonymous Coward

                      I don't know what it's like but I will pretend that I can imagine. It may be like trying or even wanting to teach people that infinitives are not verbs and "How to do this thing?" is not a sentence. ::sprays brains all over ceiling::

            2. Anonymous Coward
              Anonymous Coward

              Via Hacker News: Chat over IMAP ...interesting, at least. Of course email is even easier to exploit than SMS but there is still OpenPGP (be nice if Newegg and eBay would start using it). Crap, I don't either. Well, time to start.

    2. Michael Wojcik Silver badge

      in a strict choice between doing it fast and doing it right, fast wins every time

      What "choice between doing it fast and doing it right" do you believe is responsible for Spectre-class attacks?

      The CPUs vulnerable to Spectre-class attacks are "doing it right". They are meeting the guarantees they made about user-visible state. (You might quibble about whether Meltdown violated some guarantee, but that's just one Spectre variant, and not the most common one.)

      We've known about side-channel attacks for decades - in fact, since before we had electronic computers. We've known about the information-theoretic consequences of irreversible computing since, again, before we had general-purpose electronic computers. When speculative execution was introduced (originally by CDC and IBM, before Intel even existed), people knew it would leak information - that just wasn't relevant to the requirements.

      The original Spectre paper was something of a watershed because it showed how easy it was to recover useful information from some of those side channels using only software; it was one of those facepalm moments we have periodically in IT security, like the Morris worm or Levy's "Smashing the Stack" or the original Bleichenbacher attack, where everyone says "oh yes, it's obvious in retrospect that these attacks are feasible".

      There even seems to be a sense among some researchers that there are research areas like this which people subconsciously avoid, because we have a lurking sense of dread about how much trouble they might be. So, for example, before "Smashing the Stack" you'd hear security researchers saying "well, yes, the Morris worm overwrote a buffer in fingerd, but that trick is tough to pull off" and dismissing it as a unicorn; but then Levy published his Phrack piece and suddenly everyone was doing it. The can was open and the worms were everywhere.

      I don't think it's all that common that someone comes into work and says "this technique will be widely used, and it's clearly broken, but what the hell, fuck the users!".

  12. EnviableOne
    Holmes

    Its all Intel's fault

    there obsession with keeping up with Moores Law ahead of everything else, forced them to spec ex which gave such a speed boost, others wer forced to follow.

    1. Anonymous Coward
      Anonymous Coward

      Re: Its all Intel's fault

      That is not fully clear. VLIWs have a very good performance potential w/o hardware speculative execution.

    2. Michael Wojcik Silver badge

      Re: Its all Intel's fault

      This has the degree of factual accuracy that the punctuation, diction, and orthography would suggest.

  13. _LC_
    Alert

    As a programmer

    As a programmer I feel the urge to oppose this “there is no alternative” nonsense. When it comes to number crunching on the big irons, speculative execution doesn’t get you much.

    When it comes to the normal user and your everyday experience, better code could easily speed up performance by a factor of ten or more.

    In the recent years we have seen software becoming slower and slower. It is almost as if chip makers ordered their minions to apply handbrakes everywhere.

    The new Gnome has introduced JavaScript on the desktop. It’s everywhere now. The new Gnome was so slow that they received tons of complaints. This, running on 3-4 GHz CPUs with 4-16 cores, is sheer insanity! KDE has its Python crap everywhere.

    Wherever you look, programmers are trying to slow down your everyday experience. Just look at Android. Their phones now have eight cores and four GB of RAM or more. Some people say that you shouldn’t buy a phone with less than two GB, because “multitasking” isn’t going to work well, otherwise.

    Those phones are yesterday’s workstations. They are running a system that is so inefficient that it hogs gigabytes of memory and needs gigahertz to display the system interface without excessive delays. It’s a joke.

    We don’t need super-fast processors for typical usage. Better programming can speed up your experience more than a liquid hydrogen cooled super processor could. In conclusion, we don’t need bogus processors. We could do very well without them.

    1. Charles 9

      Re: As a programmer

      "As a programmer I feel the urge to oppose this “there is no alterative” non-sense."

      Thing is, one of the first things I learned in Computer Science was Alan Turing's Halting Problem disproof (as well as other non-circular variants) and the more general conundrum of Decidability. Basically, it's been proven that there are some things that a computer as we know it cannot solve, and I've already run into various real-life dilemmas, such as the First Contact problem (trying to establish trust between two users who've never met before and have nothing in common), the Outside the Envelope attack (trying to secure content that must be openly accessible to be usable), and the Efficient Security dilemma (trying to prevent electrical information leakage in an environment with very little power). Each seems to run into inherent and intractable issues to which, frankly, I can't see any kind of usable and reliable solution (preventing a mole, preventing trust hijacking, and being unable to use constant power usage as a mask).

      PS. Are you sure it's a matter of efficiency and not a matter of them just having a lot more to do than you realize? After all, people ask computers to do more and more all the time, and it's hard for the builders to say no (because then someone else comes along and steals their customers). Not even governments can do much about it, since they can either move or change the government.

  14. Anonymous Coward
    Anonymous Coward

    Pandora's box is open, and now nothing can be done

    Intel were first to put performance over security; this gave them a decades-long lead in CPU performance, but looking back one can only conclude that they were aware of the implications and ignored them whilst the money kept rolling in.

    Now that most of the industry understands, or at least is aware of, the security flaws in running code before checking access rights, you would imagine that things would be getting better, but the fact is some of the industry has a vested interest in continuing to use the faulty logic and trying to pretend the problem does not exist.

    Example: the brand-new, just-released RPi4 has a vulnerable A72, which clearly went into production after ARM stated that fixes for their Spectre-affected designs were not yet in silicon.

    What exactly does this say? Simply that whilst performance is allowed to override security, this issue continues to be a ticking time bomb. Those that pretend the bomb is never going to go off are living in dreamland, but what is most upsetting is just how much FUD is being posted by seeming authorities who still manage to be ignorant or, more likely, just do not care.

    There is indeed a problem, and no matter how much you stand there with your fingers in your ears mumbling "I will not listen", it is not going to go away.

    Deal with it: running code without first determining whether it is safe was always a bad idea. Intel's "protection" of people not knowing about it is now gone, so just stop doing it and look elsewhere to obtain the same performance boost.
