Here come the lawyers! Intel slapped with three Meltdown bug lawsuits

Just days after The Register revealed a serious security hole in its CPU designs, Intel is the target of three different class-action lawsuits in America. Complaints filed in US district courts in San Francisco, CA [PDF], Eugene, OR [PDF], and Indianapolis, IN [PDF] accuse the chip kingpin of, among other things, deceptive …

          1. bombastic bob Silver badge
            Devil

            Re: Data breeches

            (voice of Samuel L. Jackson) "Honey? WHERE is my CYBER SUIT?"

      1. Loud Speaker

        Re: Data breeches

        Are they the ones with pockets big enough for full height 5 1/4" hard drives?

        1. Paul Crawford Silver badge

          Re: Data breeches

          No, those are hard drives. He is just pleased to see you.

  1. tim292stro

    Interesting about RISC-V. I'm on the mailing list for that, and I'm pretty sure from my cursory eavesdropping that RISC-V would at least be susceptible to Spectre - though they are actively brainstorming how to eliminate that possibility at the metal layer. The thread is in the ISA-DEV list, under the subject "Discussion: Safeguards on speculative execution?", for those who want to play along at home.

    Even the RISC-V ISA guys are still contemplating the full extent of the vulnerabilities, with many proposed "solutions" for simpler work-arounds only leading to new attack vectors - so it makes me wonder how Intel's PR people can stand out there and say Intel CPUs are effectively immune to Meltdown and Spectre after this simple patch (which many companies and most end users will never install, which so far doesn't include any microcode changes, and which no one has offered a good reason to believe would even work). The answer is probably simple: "because they are paid to".

    Lesson: know who is paid to lie to your face for profit, then look elsewhere for answers. ;-) Major kudos to El Reg for the un-spun distillation of Intel's press releases.

    1. LaeMing Silver badge

      I'm wondering if, in this day and age of multiprocessing chips, you shouldn't just have an entire core dedicated to running the OS, with its own exclusive on-chip memory for OS code and data, while the user-space companion cores completely lack even the transistors for privileged execution. Save on all the privilege-level-managing logic and all?

    2. Michael Wojcik Silver badge

      RISC-V would at least be susceptible to Spectre

      Yup. What the RISC-V folks are saying is that "no current RISC-V designs are vulnerable", not that it's impossible to design a RISC-V CPU which is.

      It's hard to prevent Spectre in hardware. Either you give up speculative execution, or you identify all the feasible side channels. Good luck with the latter, and I'm not yet convinced that there aren't Spectre-like attacks which don't need spec-ex.

      Where you see a side channel, start by assuming it can be used to leak information to an adversary, until proven otherwise. (Where "proven otherwise" usually means demonstrating that the maximum bandwidth of the channel, after accounting for errors, is too low to be useful.)
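
      That bandwidth argument can be made concrete. A hedged back-of-the-envelope sketch in Python (the model and numbers are mine, not from the thread): treat the leak as a binary symmetric channel and bound the usable rate by its capacity.

```python
import math

def bsc_capacity_bits_per_s(raw_rate_bps: float, error_rate: float) -> float:
    """Upper bound on the usable leak rate of a noisy 1-bit-per-symbol
    side channel, modelled as a binary symmetric channel:
    capacity = raw_rate * (1 - H(p)), H being the binary entropy."""
    p = error_rate
    if p <= 0.0 or p >= 1.0:
        h = 0.0
    else:
        h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return raw_rate_bps * (1.0 - h)

# A channel signalling 1000 raw bits/s with a 40% bit-error rate:
print(bsc_capacity_bits_per_s(1000, 0.4))  # ~29 bits/s still get through
```

      Even a 40% bit-error rate leaves roughly 29 usable bits/s out of 1000 raw bits/s - "too noisy to exploit" claims need numbers like these, not hand-waving.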

  2. teknopaul Bronze badge

    timing attacks

    Can't you just reduce timer accuracy for untrusted code and get all your performance back? Not good for cloud use, but is that not OK for the rest of us?

    Or would that penalise the cloud, so everyone making megabucks from it is trying to avoid mentioning the fact?

    1. tim292stro

      Re: timing attacks

      Reducing timer accuracy means you can only fire events at coarser intervals, which would necessitate a slow-down (missing an event at a near time slot means you now wait much longer for the next). And let's be frank: while the "impact" may not be felt on the local client machine's CPU, most of us home bodies are touching a VM or a database in a datacenter over a network, so IMHO we will actually feel it at the screen, even if only subtly. I believe the datacenter people are going to spend most of their time talking up how there is no security impact, and stating that the performance slow-down is "moderate but your mileage may vary" (about as non-binding as they can get).

      1. Flocke Kroes Silver badge

        Re: timing attacks

        Who needs to fire off events at precise _times_? The usual events are "required data is in memory" or "disk has confirmed that the data will be read back as required even if the power fails right now". Delete the high resolution timer, and the vast majority of software would not even notice.

        Back when I was a PFY, the scheduler interrupt was 50Hz - if you hogged the (only!) CPU for 40ms the OS would give something else a turn. Even back then, if the current process stalled, the scheduler would pick a different unstalled process immediately. Later, Intel CPUs got caches huge enough to hold multiple copies of the enormous state required by the x86 architecture, so the tick could be moved to 1000Hz without continuously thrashing the cache. (Linux got tickless for battery life.)

        Databases need to put requests into an order, and I always assumed they used a sequence number for that rather than the time. Make has difficulty with FAT's 2-second (!) resolution last-modified time stamps. I am sure UUIDs and NTP actually need nanosecond accuracy, but apart from a few oddities the only contexts I have actually seen using nanosecond accuracy are performance monitoring for optimisation and malware cache-timing attacks.
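
        For what it's worth, the sequence-number idea is a few lines of Python (illustrative only, not how any particular database does it):

```python
import itertools
import time

# Ordering events without any clock at all: a monotonic sequence number.
seq = itertools.count()

def next_stamp() -> int:
    """Total order for requests - no timer required."""
    return next(seq)

a, b = next_stamp(), next_stamp()

# By contrast, a FAT-style timestamp rounded to 2-second resolution
# calls two back-to-back events simultaneous:
coarse = lambda t: int(t) // 2 * 2
t0 = time.time()
print(a < b)  # the sequence number always orders them
```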

        Most software does not touch the high resolution timers at all, so I too am interested in why restricting access to them is not a solution.

        1. Voland's right hand Silver badge

          Re: timing attacks

          Who needs to fire off events at precise _times_?

          A lot of people funnily enough. You would be surprised how much timer activity is involved in inter-process communications and messaging.

          Most software does not touch the high resolution timers at all,

          As someone who has written a significant portion of the code for the high-res timers in one of the architectures in the kernel I can tell you that you are talking out of your arse.

          The moment you go into the land of multi-threading and AIO, libc will start using them even if your app is not. On average a nearly empty embedded Linux system, with nothing besides a busybox instance and a shell, will have 4-8 high-res timers active at all times. The moment you load an average desktop environment and open a web browser you are looking at hundreds. For example, my desktop, which is doing bugger all at the moment, is showing 228 active ones.

          1. Flocke Kroes Silver badge

            Re: Voland's right hand

            Thank you.

          2. Claptrap314 Bronze badge

            Re: timing attacks

            High-resolution clocks in user space have been a known source of side-channel attacks for a long time (a decade???). Moreover, nanoseconds are synthetic unless your distances are measured in single-digit inches. If your code needs this stuff, it is either very specialized or wrong. If your desktop is running that many, you probably are running some garbage code. AFAIK, user space has been limited to 1ms because of this.

            Amazingly, perhaps, it turns out that even a 1ms timer is probably sensitive enough to dig this stuff out--you just do the thing many times & watch the averages.

            Getting this right will be HARD.
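
            That averaging point is easy to demonstrate. A hedged Python simulation (numbers invented for illustration): quantise event durations against a 1 ms timer at random phase, and a 50 µs difference invisible to any single reading falls straight out of the means.

```python
import random

RES_US = 1000  # coarse timer resolution: 1 ms, expressed in microseconds

def coarse_reading(duration_us: float, rng: random.Random) -> int:
    """Ticks a 1 ms timer reports for an event of the given true length,
    with the event starting at a random phase within a tick."""
    phase = rng.uniform(0, RES_US)
    return int((phase + duration_us) // RES_US)

rng = random.Random(42)
n = 200_000
# Two operations whose true durations differ by only 50 us:
avg_fast = sum(coarse_reading(300.0, rng) for _ in range(n)) / n * RES_US
avg_slow = sum(coarse_reading(350.0, rng) for _ in range(n)) / n * RES_US
print(round(avg_fast), round(avg_slow))  # means land near 300 and 350
```

            No single reading distinguishes the two, yet the averaged values recover the true durations to within a microsecond or so - which is why merely coarsening the clock doesn't kill the attack.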

          3. teknopaul Bronze badge

            Re: timing attacks

            I'm impressed as usual by Reg commentards' knowledge. One follow-up from comments below: how many of these hi-res timers are in _untrusted_ code (outside the cloud and VM context), e.g. JavaScript in your browser, or perhaps some other source that gives timing info without directly calling timer APIs? Presumably if privileged, trusted code uses hi-res timers this is not a problem outside the cloud? Or is it? So the question becomes: are hi-res timers needed in _untrusted_ code on the desktop?

            Let's assume for the argument that downloaded userland exes _are_ trusted, but JS and sandboxed code is not.

            And...

            fight.

    2. Tinslave_the_Barelegged Silver badge

      Re: timing attacks

      > Can't you just reduce timer accuracy for untrusted code and get all your performance back?

      There is a good discussion on this at LWN this week - https://lwn.net/Articles/742702/

      As comments there are by People Who Know, I can only understand every fifth sentence or so, but it boils down to the fact that nothing is simple any more.

      1. handleoclast Silver badge
        Thumb Up

        Quote of the year

        Nothing is simple any more.

        Sums it up nicely. Sums everything up nicely.

        1. Stoneshop Silver badge

          Re: Quote of the year

          Nothing is simple any more.

          Sums it up nicely. Sums everything up nicely.

          And outside of the domains where things aren't simple because of the subject matter, people are hard at work (and often failing, luckily, but still) making simple things not simple any more - e.g. Juicero, the Otto lock and many other such ventures.

      2. stephanh Silver badge

        Re: timing attacks

        > Can't you just reduce timer accuracy for untrusted code and get all your performance back?

        Note that this is currently being implemented as a software mitigation for Javascript in browsers.

        https://blog.mozilla.org/security/2018/01/03/mitigations-landing-new-class-timing-attack/
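
        The mitigation amounts to rounding timestamps down to a coarse grid - something like this Python sketch (the 20 µs figure is the one Mozilla's post mentions; the function name is mine):

```python
def coarsen(t_us: float, resolution_us: float = 20.0) -> float:
    """Round a timestamp down to the mitigation's granularity,
    the way browsers coarsened performance.now()."""
    return (t_us // resolution_us) * resolution_us

# Two events 7 us apart become indistinguishable after coarsening:
print(coarsen(1000.0), coarsen(1007.0))  # both 1000.0
```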

        Interestingly, they also needed to disable SharedArrayBuffer (shared memory between two threads), because a second thread simply incrementing a counter in shared memory can be used to synthesize a high-resolution timer.

        For native code this would effectively require forbidding (shared-memory) multi-threading.
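
        The counter trick itself fits in a few lines. A rough Python analogue (ordinary threads instead of SharedArrayBuffer, purely illustrative - the real attack does this in Javascript across a worker):

```python
import threading
import time

class CounterClock:
    """A 'timer' built from nothing but a spinning thread:
    the shared counter's value stands in for elapsed time."""
    def __init__(self) -> None:
        self.ticks = 0
        self._running = True
        self._thread = threading.Thread(target=self._spin, daemon=True)
        self._thread.start()

    def _spin(self) -> None:
        while self._running:
            self.ticks += 1

    def stop(self) -> None:
        self._running = False
        self._thread.join()

clock = CounterClock()
before = clock.ticks
time.sleep(0.05)        # longer waits accumulate more ticks
after = clock.ticks
clock.stop()
print(after > before)   # the counter is a usable clock
```

        This is why clamping the official timer API alone buys little: anything that can run a second thread against shared state can rebuild a clock.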

      3. CrazyOldCatMan Silver badge

        Re: timing attacks

        but it boils down to the fact that nothing is simple any more

        The only people who still think that things are simple are children and politicians. The former because they don't yet know any better, and the latter because it hides the fact that they don't know anything about what they are talking about.

    3. Anonymous Coward
      Anonymous Coward

      Re: timing attacks

      It's been reported that some browsers will be taking that approach but it'll mostly provide a false sense of security. There are apparently innumerable ways of devising high-resolution timing mechanisms as shown in this excellent paper that's been going around: https://gruss.cc/files/fantastictimers.pdf

      1. Nick Ryan Silver badge

        Re: timing attacks

        High-resolution timers in JS? Given that JS is interpreted and does not have direct memory access, just how is it going to be used to trigger Spectre, let alone Meltdown flaws? The asm code for these is relatively trivial, but unless one can trick an interpreter, or even a JIT compiler, into generating specific asm code, how is it to be executed?

        On the other hand there may be an issue with exploits allowing the execution of arbitrary (asm) code on a system - however these won't need to rely on JS for their timers... But executing arbitrary code on a system is a problem anyway.

        /confused

        1. Michael Wojcik Silver badge

          Re: timing attacks

          Given that JS is interpreted and does not have direct memory access, just how is it going to be used to trigger Spectre, let alone Meltdown flaws?

          If only there were a freely available paper that explained this...

          First, note that modern browsers all JIT Javascript into machine code. It's not interpreted by any of the major browsers, at least in their default mode.

          The Javascript PoC is for Spectre, not Meltdown. And it's quite straightforward. All you need is the pieces required for a cache-timing attack (easily achieved in Javascript with a high-res timer and a byte array), and a suitable gadget, which you can write in the script itself. The gadget just does a conditional read from memory; by (mis-)training the branch predictor, you can get the CPU to speculatively execute the load with an out-of-bounds address.

          The Spectre paper authors disassembled the JITed Javascript so they could tweak the source to produce the instruction sequence they wanted, but that's just a shortcut; a script could easily include different variations.
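
          For readers who want the shape of that gadget on the page, here it is transliterated into Python. This is purely structural - Python cannot trigger CPU speculation, and all the names are illustrative:

```python
ARRAY = bytes([1, 2, 3, 4])
probe = bytearray(256 * 512)      # one cache line per possible byte value
OUT_OF_BOUNDS = 100               # index the attacker wants read speculatively

def gadget(i: int) -> None:
    # The bounds check the branch predictor is trained to mispredict:
    if i < len(ARRAY):
        # On real hardware this load runs speculatively even for a bad i,
        # leaving probe[value * 512] cached - the side channel.
        _ = probe[ARRAY[i] * 512]

# Mistraining: many in-bounds calls teach the predictor "taken"...
for _ in range(100):
    gadget(0)
# ...then one out-of-bounds call. Architecturally a no-op here, but on a
# vulnerable CPU the speculative load has already touched the cache.
gadget(OUT_OF_BOUNDS)
```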

  3. Anonymous Coward
    Anonymous Coward

    Optimistic me...

    ...hopes that maybe this will make intel learn their lesson. Maybe we can even get the IME spy-computer thrown out together with this iteration of bollocks.

    1. CrazyOldCatMan Silver badge

      Re: Optimistic me...

      hopes that maybe this will make intel learn their lesson

      Nah. They'll just learn to hide it better in future.

      </cynical mode>

  4. Cl9

    Should Intel (and other chip makers) be held responsible for hardware flaws?

    It's an interesting one, but I don't personally think that Intel should be held liable for this, as it's not an intentional bug. Modern CPUs are just so incredibly complex, containing billions of transistors, that I don't think it's feasible to create a 'flawless' CPU; there always are (and always will be) bugs and flaws, discovered or not.

    I'm also not sure you could pin the potential performance loss on Intel either, as it's technically the operating system vendor who's implementing the slow-down.

    Don't get me wrong, I've got an Intel CPU myself, and I can't say that I'm too happy about this either. But I can't really blame Intel for it either. And yes, Intel's PR release was absolute BS.

    1. Simon 15

      Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

      Answer: Yes

      Perhaps if Intel doesn't have sufficient expertise in designing and fabricating processors they should outsource the job to another company such as AMD or Arm, leaving them to focus on their core business instead, which is of course.... doh!

      1. Cl9

        Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

        But both AMD and ARM are also vulnerable to related (both to do with speculative execution) flaws, such as Spectre. I'm not sure what your point is.

    2. Boris the Cockroach Silver badge
      Meh

      Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

      Depends when they knew about it

      If it can be shown that Intel manglement knew about the bug and yet kept on baking/selling chips regardless, then I'd suspect they won't have a leg to stand on.

      But the lights are on late at Intel HQ, and the nearest home-office store has just sold out of paper shredders..... allegedly

      1. Roo
        Windows

        Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

        "If it can be shown that Intel manglement knew about the bug and yet kept on baking/selling chips regardless, then I'd suspect they won't have a leg to stand on"

        There are plenty of published show-stopper errata that show Intel doing exactly that over several decades. Customers typically decide that the expense of the lawsuit combined with the publicity that shows their products/services are impacted by it would do more damage than the errata...

    3. coconuthead

      Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

      Intel's x86-64 CPUs are more complex than CPUs need to be in order to support backward compatibility to the architecture. Their whole marketing strategy is "the complexity doesn't matter, we can do that and make it work". Well, it turns out they couldn't and didn't, but in the meantime other potential competitors either have reduced market share or were never developed.

      1. Michael Wojcik Silver badge

        Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

        Intel's x86-64 CPUs are more complex than CPUs need to be in order to support backward compatibility to the architecture.

        That has nothing to do with Meltdown or Spectre.

        Meltdown exists because of an architecture choice: allow speculative loads across privilege boundaries (i.e., don't make a privilege check before allowing a speculative load). That's why AMD x86 CPUs don't suffer from it - it's just one of the choices you can make when implementing the x86-64 ISA.

        Spectre exists because the CPUs provide speculative execution and caches. So do pretty much all general-purpose CPUs. x86-64 would never have survived in the market, at least not for server systems, if it hadn't adopted those. Lack of speculation is one of the reasons Itanium had so much trouble bringing its performance up, and with the very deep pipelines of x86 (necessary for adequate performance with backward compatibility), a non-speculative x86 would not have been competitive.

        x86 has had spec-ex since 1995, with the Pentium Pro. In fact it had it back in 1994 with the NexGen Nx586, but that was never widely used and NexGen was purchased by AMD in '95. (As far as I know, neither the earlier NexGen design nor competing chips from AMD and Cyrix did spec-ex, though the techniques date back at least to the late 1980s.)

    4. SkippyBing Silver badge

      Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

      'Modern CPUs are just so incredibly complex, containing billions of transistors, that I don't think it's feasibly possible to create a 'flawless' CPU, there's always (and always will be) bugs and flaws, discovered or not.'

      Replace 'CPU' with 'aircraft'* and ask whether you wouldn't blame Airbus for theirs not being flawless.

      *I was going to say 'and transistors with parts' but I think both apply these days.

      1. Anonymous Coward
        Anonymous Coward

        Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

        It's widely reported that Intel was informed about this flaw in June 2017.

        I kinda missed, at the time, the Intel announcement that they had stopped selling all CPUs effective immediately and recommended you get an AMD.

        IANAL, but it seems they have been knowingly selling defective products since June.

        AC because whatever you can say about Intel's chip designers, their lawyers are top notch.

      2. Chris Miller

        @SkippyBing

        Airbus software is not flawless, nor is Boeing or any other large, complex, safety-critical software. Humans can't write millions of lines of perfect code, and I suspect that doing so will always be infeasible.

        But (of course) safety-critical systems are (or, at least, are capable of being) developed to higher standards than 'normal' software. It would be possible for Intel or any chip manufacturer to adopt similar development processes, but the effects would be to significantly slow development, while simultaneously increasing costs. It may be that there are loads of customers out there looking to pay a lot more for a chip that's two generations behind - but I somehow doubt it.

        1. Doctor Syntax Silver badge

          Re: @SkippyBing

          "But (of course) safety-critical systems are (or, at least, are capable of being) developed to higher standards than 'normal' software."

          And what's the point if the H/W it runs on isn't?

          1. LOL123

            Re: What about auto-updates?

            I don't get the comparison to aircraft; they are specifically sold with safety assurances and hence, as commented, use a different development process.

            The CPU is a part, and it is the procuring entity/system manufacturer that is responsible for assessing suitability and fitness for purpose.

            If Intel claimed suitability this is a different matter. No one has pointed to any evidence of this.

            You can ask why software like Linux and Windows stores critical data in such a fashion to gain performance. Intel will say this is not a secure implementation and the OS vendors misrepresented performance by compromising security.

            The corollary here is that insecure CPUs are illegal to sell. Who said so? Which law forbids this??

            Bad PR for Intel yes, but this is not remotely the same as being illegal.

        2. Roo
          Windows

          Re: @SkippyBing

          "but the effects would be to significantly slow development"

          I suspect Intel's "Tick/Tock" development model with releases being pegged to a particular date years before they are even developed contributes to the problem. Intel has been pushing stuff out of the door before it's been fully baked, to meet a marketing deadline, for a while now.

          1. Chris Miller

            Re: @SkippyBing

            I suspect Intel's "Tick/Tock" development model with releases being pegged to a particular date in time years before they are even developed contributes to the problem.

            You may well be right. But that's just another aspect of the need to get your latest fastest model out into the market asap, otherwise customers will start switching to your competitors. We see the same problem with software being released before it's quite ready. Customers don't really want security (though they will scream about it, but only after the event): they can't see it, they can't measure it; it slows things down - and they're certainly not prepared to pay extra for it.

      3. close

        Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

        The biggest issue here is that there is no proper fix. You can't replace a few transistors, you can't download a replacement CPU block; all you can do is disable some of the built-in functions and make them work in ways other than intended.

        In a plane you swap the offending part or update the offending software, and the end result is as expected - not something gimped that sort of does the job. With billions of CPUs out there, it will take years to hear the end of this.

    5. Doctor Syntax Silver badge

      Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

      "It's an interesting one, but I don't personally think that Intel should be held liable for this, as it's not an intentional bug."

      So if you catch a nasty dose of food poisoning the restaurant with the poor hygiene shouldn't be held responsible because it wasn't an intentional bug?

      1. Cl9

        Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

        Food hygiene processes are relatively simple and there's a set list of guidelines and requirements that need to be met.

        This is not the case with CPU design so you can't really compare the two. If a restaurant is breaching existing food safety standards, then of course they should be held liable. There are no such requirements or standards for CPU design.

    6. Ken Hagan Gold badge

      Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

      "It's an interesting one, but I don't personally think that Intel should be held liable for this, as it's not an intentional bug."

      I agree it is interesting, and I might even agree that Intel shouldn't be held liable, but if I did then I would have a different reason for doing so. The issue is not intent, but negligence. I don't think anyone close to the action is suggesting that Intel knew about this prior to mid-2017. It would be nice to think that our spooks knew about it before then, and distressing to imagine that the other side's spooks knew about it, but in neither case would we expect Intel to be informed. So the question is: is the flaw sufficiently obvious that we can call it negligence? Well... given that it took just about everyone 20 years to work it out, I don't think we can call it obvious.

      Oh, and I also agree that Intel's PR release was BS. I'd be happy to see them prosecuted for *that*. I'm also pretty unhappy about the timescale surrounding their CEO's share dealings.

      1. Mark 85 Silver badge

        Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

        I'm also pretty unhappy about the timescale surrounding their CEO's share dealings.

        I read in one news article that the SEC will be looking into this.

  5. GrapeBunch Bronze badge

    In one article I read that the problem affected all Intel processors manufactured since 1995. Or is it Intel 64-bit processors since 1995? Or some other subset?

    1. Steve Davies 3 Silver badge

      Re: Which Intel CPUs

      AFAIK,

      Atoms are immune because they don't use branch prediction or out-of-order execution.

      Everything else is vulnerable.

      1. Pete 47

        Re: Which Intel CPUs

        'Fraid not: pre-2013 Atoms are, but later versions like the Z36/Z3700 series (like my Dell tablet) support OoOE.

        Hopefully given the usage profile of such devices the performance hit from the updates shouldn't be overly noticeable.

      2. MarcC
        Facepalm

        Re: Which Intel CPUs

        Not quite. I've researched all the back issues of The Register and came up with this:

        "Deep inside Intel's new ARM killer: Silvermont"

        "the new Atom microarchitecture has changed from the in-order execution used in the Bonnell/Saltwell core to an out-of-order execution (OoO), as is used in its more powerful siblings, Core and Xeon, and in most modern microprocessors."

        http://www.theregister.co.uk/2013/05/08/intel_silvermont_microarchitecture/?page=2

        1. Claptrap314 Bronze badge

          Re: Which Intel CPUs

          Out of order is not the same as speculative. You can do OoO for everything but branches and computed loads, and no instructions will be speculative. Better dig deeper into the specs.

  6. Sceptic Tank
    Terminator

    Stephen Hawking

    This morning it occurred to me: Stephen Hawking has that Intel Inside logo on the screen attached to his wheelchair. So I'm assuming his speech synthesizer runs on some variant of Intel silicon. If he starts talking 30% slower he's going to sound like he is brain damaged.

  7. Anonymous Coward
    Anonymous Coward

    Is Intel guilty of negligence?

    According to reported CPU details, Intel could be deemed guilty of gross negligence for failing to maintain proper security level command execution in an effort to gain a minute performance increase. Neither AMD nor ARM CPU architectures suffer from this lapse of good judgment. As a consequence Intel is the only brand of CPU to actually experience a ~30% performance hit because Intel CPUs can only mitigate the security issue via software. Lawyers and consumers are bound to believe that Intel should be held accountable for their willful negligence in knowingly selling an insecurely designed CPU.

    1. Gordon 10 Silver badge

      Re: Is Intel guilty of negligence?

      Except we know that's not true. Both Arm and AMD are vulnerable too, albeit to a lesser extent. Which means there are at least some genuinely novel and unforeseen aspects to these vulnerabilities.

      Unless Intel did nothing for six months, I'm not sure they deserve the ambulance-chasing. I rather suspect the back channels between the chip designers have been running hot for the last six months.

      These flaws are so severe that a reasonable case for secrecy can be made as long as those who needed to know (OS designers mostly) were kept informed.
