Meltdown, Spectre bug patch slowdown gets real – and what you can do about it

Having shot itself in the foot by prioritizing processor speed over security, the chip industry's fix involves doing the same to customers. The patches being put in place to address the Meltdown and Spectre bugs that affect most modern CPUs were supposed to be airy little things of no consequence. Instead, for some unlucky people …

  1. Nate Amsden Silver badge

    hyperconverged

    Haven't noticed anyone talk about this yet, but given the hit is much harder on systems that do a lot of syscalls, I am curious about the impact on hyperconverged systems. Standalone storage systems that primarily leverage CPUs for storage could do without the fixes, since they are generally tightly controlled and run only trusted software. Hyperconverged, of course, doesn't quite have that luxury in a typical deployment scenario.

    Of course if your hyperconverged system isn't pushing much I/O then you probably won't see a big impact.

    The Lustre results are interesting.
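(The syscall-heaviness point can be made concrete with a quick microbenchmark. This is a rough sketch, not a rigorous test: os.getpid() stands in for a cheap syscall, the iteration count is arbitrary, and note that very old glibc versions cached getpid, which would mask the effect.)

```python
import os
import time

def bench(fn, n=200_000):
    """Time n iterations of fn and return the elapsed seconds."""
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return time.perf_counter() - start

# Syscall-heavy loop: every call crosses the user/kernel boundary, so a
# KPTI-style page-table switch is paid on each iteration once patched.
syscall_time = bench(os.getpid)

# Pure-compute loop: stays in user space, so it is largely unaffected.
compute_time = bench(lambda: 12345 * 6789)

print(f"syscalls: {syscall_time:.4f}s  compute: {compute_time:.4f}s")
```

Running it before and after patching (or booting with pti=off vs pti=on on the kernel command line) should show the syscall loop slowing while the compute loop barely moves.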

    1. kryptylomese

      Re: hyperconverged

      My test Proxmox cluster (Intel Xeon X5675 based) and the guest systems - comprising both Windows 10 and Linux under KVM, as well as a bunch of Linux containers under LXC - have not seen any noticeable impact.

      Workloads include databases, file servers, PXE boot servers, web servers and PCIe pass-through of both graphics and sound cards, tested with intensive applications including games.

  2. Anonymous Coward
    Anonymous Coward

    So how much will this throw Intel's release schedule out by?

    If it takes Intel 12-18 months of engineering and testing before they can release a replacement architecture, does that mean no CPU-based speed increases before 2020?

    If it does, what will be the likely revenue hit up and down the channel?

    1. TReko
      Stop

      Don't buy a new Intel based system for a while?

      There may be a huge hit to PC sales.

      Our company has temporarily suspended all but essential new system purchases until fixed silicon is available or we switch to AMD or we figure out what the performance hit is.

      1. illiad

        Re: Don't buy a new Intel based system for a while?

        BUT!! AFAIK it affects AMD and Apple too... :O :O

        1. DDearborn

          Re: Don't buy a new Intel based system for a while?

          Hmmm

          Since AMD is not affected by one of the bugs, and less so by the other, AMD processors are far less affected. At this point, the only reason AMD systems are recording any significant slowdowns is that Microsoft is forcing AMD users to incorporate ALL the bug fixes even though they are not needed, and this is causing the additional slowdowns. A person would have to be truly daft not to believe that Intel and Microsoft - both of whom had recently hit a wall with consumers on their latest offerings and were beginning to lose significant market share to AMD/Linux - are colluding to improve their lagging sales (Intel because their newest CPUs are wildly expensive, hot-running energy hogs, and Microsoft because Windows 10 is nothing more than spyware in drag and a huge resource hog).

      2. pogul

        Re: Don't buy a new Intel based system for a while?

        > Our company has temporarily suspended all but essential new system purchases until fixed silicon is available or we switch to AMD or we figure out what the performance hit is.

        Except, AMD and ARM designs are affected too -- which everyone seems to be ignoring. The initial reports that broke in the media said AMD and others were unaffected: this is not the case, as has been widely publicised.

        I can't help wondering (as I haven't dug deeply into the speculative code execution thing) - is this flaw inherent in this architecture or did certain companies reverse engineer another company's implementation?

        1. Alan J. Wylie Silver badge

          Re: Don't buy a new Intel based system for a while?

          AMD is (mostly) not affected by Meltdown (userspace reading kernel memory). It is affected by Spectre (userspace reading userspace memory, either in the same process - e.g. 3rd-party JavaScript reading cookies in a web browser - or one process reading another process's memory).

          There is, however, a case on AMD when the eBPF JIT is turned on, which allows userspace to read kernel memory.

          AMD doesn't have PCID.

          IBM have announced that firmware and kernel patches for their POWER architecture will be released soon.

          1. Paul Shirley

            Re: Don't buy a new Intel based system for a while?

            "AMD when eBPF JIT is turned on"

            While running interpreted, out-of-bounds accesses will be checked and speculative execution will stay in the interpreter, not at an attacker-controlled address. Turning on the JIT allows an attacker to craft code that will be compiled to machine code and run, potentially without checks. It's a way of weaponising an otherwise unusable kernel exploit.

            If enabling the JIT does make AMD vulnerable, then AMD is vulnerable only in that test, and you read too much into this. I believe the only test they succeeded with was user-to-user snooping, which is expected to work. The more frightening claim - that AMD blocks the user-to-kernel Meltdown attack - hasn't been disproved yet.
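(For readers wanting to check their own box: the eBPF JIT switch discussed above is exposed as a sysctl on Linux. A minimal sketch - the /proc path is the standard location for that sysctl; the function name is invented here.)

```python
from pathlib import Path

def bpf_jit_status():
    """Read net.core.bpf_jit_enable; returns None if unavailable.

    0 = JIT disabled (interpreter only), 1 = JIT enabled,
    2 = JIT enabled with debug output.
    """
    path = Path("/proc/sys/net/core/bpf_jit_enable")
    try:
        return int(path.read_text().strip())
    except (OSError, ValueError):
        return None

status = bpf_jit_status()
print(f"bpf_jit_enable = {status}")
```

Setting it to 0 (sysctl -w net.core.bpf_jit_enable=0) closes off the JIT-assisted variant, at the cost of slower BPF filtering.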

          2. Nick Ryan Silver badge

            Re: Don't buy a new Intel based system for a while?

            Please explain how JavaScript, which is an interpreted language, can be used to trigger the exact cache violations required to access memory owned by other processes. The exploit code is relatively trivial assembler code; however, without an exploit allowing the arbitrary execution of pre-compiled code, how is JavaScript code going to replicate the same cache violations?

            /confused

            1. Warm Braw Silver badge

              Re: Don't buy a new Intel based system for a while?

              The Spectre paper goes into great detail, but there's a summary here.

              JavaScript isn't necessarily interpreted - the example exploit takes advantage of the JIT compiler in Chrome whose output is predictable machine-language instructions.

              In the same way, the eBPF JIT compiler can be used to inject known code into the kernel, if eBPF is enabled.

              The temporary workaround is to reduce the resolution of timers available to JavaScript (to make the cache differences harder to spot). A longer-term resolution will involve, amongst other things, changing the JIT compilers to emit code that includes speculative execution barriers (where relevant and available).
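(The timer-coarsening mitigation can be illustrated with a deterministic toy model. The access times below are invented for the sketch; the point is only that a difference smaller than the timer quantum becomes invisible.)

```python
def measured_ns(true_ns, resolution_ns):
    """Duration as reported by a timer that only ticks every resolution_ns.

    Start and end timestamps are each rounded down to the previous tick,
    which is how a coarse-grained clock behaves.
    """
    start_tick = 0 // resolution_ns
    end_tick = true_ns // resolution_ns
    return (end_tick - start_tick) * resolution_ns

CACHE_HIT_NS, CACHE_MISS_NS = 40, 300  # illustrative access latencies

# High-resolution timer: a hit and a miss are easy to tell apart.
fine_hit = measured_ns(CACHE_HIT_NS, 1)       # 40
fine_miss = measured_ns(CACHE_MISS_NS, 1)     # 300

# Coarsened timer (the browser mitigation): both collapse to zero ticks.
coarse_hit = measured_ns(CACHE_HIT_NS, 1000)   # 0
coarse_miss = measured_ns(CACHE_MISS_NS, 1000) # 0

print(fine_hit, fine_miss, coarse_hit, coarse_miss)
```

In practice attackers rebuilt fine-grained timers in the browser (e.g. a counter incremented in a worker thread), which is why SharedArrayBuffer was disabled alongside the resolution reduction.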

              1. Name3

                Re: Don't buy a new Intel based system for a while?

                Maybe we should interpret JavaScript again, instead of doing JIT. Seems safer. And get rid of WebAssembly; no one is using it anyway, it's too new and, as we see, very insecure.

                1. PlinkerTind

                  SPARC immune to Meltdown

                  but susceptible to Spectre.

          3. RandSec

            Some AMD Clarity

            AMD is not affected by Meltdown, but Intel is affected and requires a sometimes crippling patch.

            AMD is affected by Spectre, in the same way as Intel, and both only have partial mitigation so far.

            AMD EPYC servers have hardware encryption for VM's, which means a Spectre attack can succeed and still get only encrypted data.

        2. Downside

          Re: Don't buy a new Intel based system for a while?

          No need to reverse engineer - speculative execution is not a secret, it's just hardware engineers don't have an appreciation of software and/or couldn't be bothered to clear down the registers on execution failure.

          For once you can't "fix it in software" - which, if you've ever worked for a co that designs hardware, is a big fat chicken coming home to roost.

        3. Giles Jones Gold badge

          Re: Don't buy a new Intel based system for a while?

          ARM and AMD are affected less. Plus, you forget one thing about ARM: they don't make their own processors, so the licensees affected can quickly fix the chips they use without waiting for a manufacturer to issue a new part.

        4. John Brown (no body) Silver badge

          Re: Don't buy a new Intel based system for a while?

          "Except, AMD and ARM designs are affected too -- which everyone seems to be ignoring. The initial reports that broke in the media said AMD and others were unaffected: this is not the case, as has been widely publicised."

          That's because everyone is conflating three issues. The first one is Intel-specific; the other two affect other CPUs to varying degrees and are likely more difficult to exploit.

        5. Anonymous Coward
          Anonymous Coward

          Re: Don't buy a new Intel based system for a while?

          What I heard was that Intel, and one type of ARM core, were hit by the Meltdown problem, Spectre hits everything, and there is a third AMD bug that requires physical access to the hardware to exploit.

          What I also heard was that Meltdown can be patched against at OS level, while Spectre needs everything patching.

          The stuff being published by the media is contradictory, and I am getting to the point where I don't trust anyone to know what they are talking about. It also looks as though labels such as Athlon and Phenom were used by AMD for products of the same line, depending on how many cores worked.

          Why should I trust anyone?

    2. Anonymous Coward
      Anonymous Coward

      Re: So how much will this throw Intel's release schedule out by?

      Well, they released Coffee Lake despite knowing about it.

    3. Archtech Silver badge

      Re: So how much will this throw Intel's release schedule out by?

      Have there really been any CPU based speed increases since about 2000?

      1. d3vy Silver badge

        Re: So how much will this throw Intel's release schedule out by?

        "Have there really been any CPU based speed increases since about 2000?"

        You're asking if there have been improvements since 2000?

        The fastest CPU I could find released in 2000 was a 1.5GHz AMD - I didn't have the model name, but given the year I'd guess an early Athlon... 32-bit.

        I was working doing system builds in 2002 and remember some of the first 64bit CPUs coming in.

        ---

        So to answer your question... YES.

        You sound like you've been lucky enough never to have to borrow the "spare training laptop" at work.. you know, the one at the back of the cupboard... :)

    4. Giles Jones Gold badge

      Re: So how much will this throw Intel's release schedule out by?

      This will motivate many to investigate AMD or ARM. You certainly shouldn't reward such major incompetence with repeat business.

      Pretty sure Intel will have devalued their offerings by ruining their reputation; they sure won't be charging a premium price for a while.

    5. d3vy Silver badge

      Re: So how much will this throw Intel's release schedule out by?

      "If it takes Intel 12-18 months of engineering and testing before they can release replacement architecture does that mean no CPU based speed increases before 2020?"

      Don't be daft, there's a software fix now; this won't affect the schedule at all. If we assume that they are working now on the CPUs that we will be getting in early 2020, I don't think we will see fixed silicon until 2021/22.

      I'd put money on the new "FIXED" CPUs project not even being a dot on some project managers Gantt chart yet.

      1. Anonymous Coward
        Anonymous Coward

        Re: So how much will this throw Intel's release schedule out by?

        ---> Don't be daft, there's a software fix now; this won't affect the schedule at all. If we assume that they are working now on the CPUs that we will be getting in early 2020, I don't think we will see fixed silicon until 2021/22.

        I'd put money on the new "FIXED" CPUs project not even being a dot on some project managers Gantt chart yet.

        ----------------------

        There isn't a fix now; there's a performance-degrading patch (in some instances seriously degrading).

        Until Intel release replacement silicon with a proper fix, I wouldn't be surprised to see a 30-40 per cent drop in sales.

        Corporate IT Managers will not order silicon with a known flaw (regardless of the patch) unless they absolutely have to, because people get fired over this kind of serious shit.

        1. Roo
          Windows

          Re: So how much will this throw Intel's release schedule out by?

          "Corporate IT Managers will not order silicon with a known flaw (regardless of the patch) unless they absolutely have to, because people get fired over this kind of serious shit."

          Few of the folks making the purchasing decisions read the errata, let alone wait long enough for the showstopper errata to be discovered. Errata such as ECC failures leading to undefined behaviour didn't stop or noticeably delay folks buying the last few generations of Xeon, for example...

    6. Hans 1 Silver badge

      Re: So how much will this throw Intel's release schedule out by?

      "does that mean no CPU based speed increases before 2020?"

      Well, we have not really had a speed increase since 2014 on Intel anyway; in some cases lower TDP, yes, slightly higher clock speeds and a 6% performance increase... negligible...

      I have a 5820K and will not get the 6800K, nor the 7800X, when the 5820K kicks the bucket... Ryzen all the way.

  3. MysteryGuy

    Windows 7 may not use PCID capability.

    On the few Windows 7 64-bit systems I updated with the Meltdown patch, none seem to show the PCID optimization enabled - at least according to the MS PowerShell Get-SpeculationControlSettings script.

    This is on systems which do have PCID and INVPCID capabilities. So, this may not be an option under Windows 7.

    1. Anonymous Coward
      Anonymous Coward

      Re: Windows 7 may not use PCID capability.

      That's right, it's only supported on Windows 8.1+. The only Linux distro I know of that supports it is OpenSUSE Tumbleweed, though I'm sure there are probably others.

      1. Paul Greavy
        Unhappy

        Re: Windows 7 may not use PCID capability.

        "That's right, it's only supported on Windows 8.1+. The only Linux distro I know of that supports it is OpenSUSE Tumbleweed..."

        Hmmm... I installed the patched kernel, rebooted, and the workstation crashed while loading the desktop. Repeated three times with the same results before rolling back to the previous kernel. So I'm not sure that Tumbleweed supports this, yet.

        1. Chemist

          Re: Windows 7 may not use PCID capability.

          "So I'm not sure that Tumbleweed supports this, yet."

          Well, I get:

          dmesg | grep isolation

          [ 0.000000] Kernel/User page tables isolation: enabled

          and CONFIG_PAGE_TABLE_ISOLATION=y in /boot/config-4.14.11-1-default.

          Booting fine, but I am in a VM for the moment.

        2. Anonymous Coward
          Anonymous Coward

          Re: Windows 7 may not use PCID capability.

          Are you sure you don't have OpenSUSE Leap? Tumbleweed is running the latest Linux kernel, 4.14, while Leap is still on 4.4, which doesn't include PCID support. Any distro running Linux 4.14 has support for PCID.

  4. Anonymous Coward
    Anonymous Coward

    broadcastify.com

    I wonder if that explains the outage at Broadcastify.com. They had some "system maintenance" the other day, and seem to be having sporadic outages ever since.

    So if I were to run a reasonably secure system, could I bypass the update? Reasonably secure as in I (ostensibly) control all the software on it, as opposed to an AWS instance where the provider needs to protect the customers from each other?

  5. AdamWill

    so, er...

    So, uh. Intel said:

    "Intel said as much in its statement, claiming "any performance impacts are workload-dependent, and, for the average computer user, should not be significant and will be mitigated over time.""

    and then you said:

    "While most casual desktop users and gamers won't notice any prolonged slowdown, or any performance hit at all, people running IO or system-call intensive software, such as databases on backend servers, may notice the difference."

    So...where exactly are you claiming that Intel was wrong, or misleading? Or are you suggesting that "average computer users" are running "databases on backend servers"?

    1. AdamWill

      Re: so, er...

      Three thumbs down, yet no counterpoints. Interesting.

      I'm not saying Intel's exactly being *up front*, here. But the story is more or less accusing them of *outright lying*, without backing it up. Intel says the impacts are "workload-dependent" and won't be significant to "the average computer user". Which, sure, is a very spin-ny way of saying they *will* be significant to workloads which *aren't* those of "the average computer user". But it's not actually a lie, and nothing the article presents as evidence actually contradicts it. In fact, the article agrees with it, explicitly, in the quote I pulled.

      1. Dan 55 Silver badge

        Re: so, er...

        I humbly submit that the average computer user will be impacted (ugh) now by cloud and server outages and soon by the consequent rise in prices in cloud services.

        If/when there are more kernel updates with optimisations, prices won't drop.

        1. AdamWill

          Re: so, er...

          I'm not disagreeing with you. But what I was really commenting on here was the *journalism*, not Intel's quote.

          The Intel quote isn't new - it's been quoted and referenced tons of times, including in at least three Reg articles. So anyone who's paying attention has already seen it. El Reg has also already explained, more than once, how it's more than a tad disingenuous. So when El Reg prints a *new* article, includes the quote *again*, and surrounds it with text like:

          "The patches being put in place to address the Meltdown and Spectre bugs that affect most modern CPUs were supposed to be airy little things of no consequence. Instead, for some unlucky people, they're anchors."

          that reads to me like El Reg is suggesting something new - either that they can now somehow show that far more cases are going to be affected than we previously thought (i.e. there are, for some reason, more syscall-dependent workloads out there than had previously been realized), or that Intel had claimed that the fixes wouldn't significantly affect performance for *anyone*.

          Yet in the end the article does neither - it just adds some random field reports of what we essentially already knew, i.e. that there *is* a significant performance impact on syscall-dependent workloads.

          That's not valueless, but it doesn't really pay off such a dramatic introduction, or justify the "airy little things of no consequence" line. That's what I was questioning.

    2. Anonymous Coward
      Anonymous Coward

      Re: so, er...

      I think the issue was using a comma where a semi-colon may have been more appropriate. I certainly read it as: most people are not affected, but people running intensive work may be. I know often we are not treated as such, but I do think those working in DC environments are still "people".

      Also, I believe Intel were trying to say it was negligible for home users and up to 30% (max) for others, whereas home users are showing some impact and DC users are showing even worse problems than expected.

    3. John Brown (no body) Silver badge

      Re: so, er...

      "will be mitigated over time"

      I wonder if it's this bit? How can hardware faults be "mitigated over time"? Surely only software can mitigate the hardware fault by working around it. Sure, patches with more time spent on them might get more efficient, but that is still never going to eliminate the slowdowns caused by the hardware fault workaround. Maybe Intel's "over time" reference is about people replacing hardware on some time scale?

      In other words, they can't get rid of the slow downs until the users buy new kit.

  6. AdamWill

    also weird

    Also, this is clearly silly:

    "Via Twitter, Francis Wolinski, a data scientist with Paris-based Blueprint Strategy, noted that Python slowed significantly (about 37 per cent) after applying the Meltdown patch for Windows 7."

    Python is a *programming language*. (OK, it can also mean a particular interpreter for that language, but it makes no practical difference; the performance characteristics of the interpreter depend on the code it's interpreting). You can write something in it that does a lot of syscalls, or something that does very few syscalls. It makes no sense to suggest that "Python", as a single thing, can be slowed down by a single amount by these changes.

    Also weird, the graph claimed to be "An example AWS CPU utilization spike after installing CPU flaw" - the change described as a 'spike' (which, uh, isn't a spike) appears to be on 2017-12-22, which is two weeks before all this stuff came out. I haven't seen any suggestion that Amazon was patching AWS in December. Also, the change doesn't look at all like something you'd expect to be caused by the Meltdown patch: it seems like the instance suddenly started "idling" at about 59% CPU usage for some reason, instead of idling close to 0%. I've no idea what'd cause that, but it doesn't seem to match the characteristics ascribed to the Meltdown fix at all.

    1. MacroRodent Silver badge

      Re: also weird

      It makes no sense to suggest that "Python", as a single thing, can be slowed down by a single amount by these changes.

      Depends on its implementation. Python is a high-level interpreted language that may be doing a lot of things not explicitly written into the program code. A wild guess: maybe the interpreter loop has code that occasionally queries the system time, which needs a syscall. Or polls some file descriptor state. I don't know if either of these is the case, but they are plausible. I guess I now need to stop talking out of my ass and go look at the actual (open source) code to see if I can find anything like that.

      1. MacroRodent Silver badge

        Re: also weird

        Answering myself: had a look with a test program and strace on Linux. I did NOT find any extra system calls in the interpreter loop. So there is NO intrinsic reason why Python should slow down more than similar code written in other programming languages.
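(His check is easy to reproduce. A sketch of the kind of test program involved - save it and run it under strace; the file name is arbitrary.)

```python
# Save as loop_test.py, then run:  strace -c python3 loop_test.py
# The -c summary will show syscalls from interpreter startup (opening and
# reading module files), but essentially none issued by the loop itself.

def hot_loop(n):
    """Pure computation: CPython issues no syscalls per iteration here."""
    total = 0
    for i in range(n):
        total += i * i
    return total

result = hot_loop(100_000)
print(result)
```

Which supports the conclusion above: any extra Meltdown-patch cost for a Python workload comes from what the program itself does (I/O, networking, imports), not from the interpreter loop.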

        1. Richard 12 Silver badge

          Re: also weird

          My wild guess is that python is interpreted and so always loads a lot of files (source dependencies) during startup and possibly during runtime.

          A compiled application tends not to load as many files or dynamic libraries (other than the system dynamic libraries, which presumably python also needs anyway)

          1. Anonymous Coward
            Anonymous Coward

            Re: also weird

            >My wild guess is that python is interpreted<

            Python is always compiled. The rest of your comment was correct: Python applications often consist of a number of library files.

            1. MacroRodent Silver badge
              Boffin

              Re: also weird

              Python is always compiled.

              Depends on what you mean by compiled. There are actually multiple Python implementations, but the most commonly used (the one from www.python.org) compiles into an intermediate "bytecode", and then interprets that.

    2. martinusher Silver badge

      Re: also weird

      I was under the impression that these bugs aren't concerned with syscalls but turn on the processor speculatively executing instructions in advance of a branch. Instruction execution in a high-performance processor is a complex activity: the processor doesn't execute one instruction at a time but rather is processing several instructions at any instant. Since a branch would otherwise require the processor to dump the work in progress, the designers add duplicate register sets so that they can keep the instruction pipeline going regardless of whether a branch is taken or not.

      The design flaw centres on the access checks, which are made only when the instruction is executed - completed, as it were - rather than when it starts collecting the data the instruction needs. If the data being accessed by a particular instruction is in a protected area, it won't trigger an exception unless that execution path is taken, so this leaves a neat wormhole where you can get the processor to access areas of memory that it shouldn't without anyone noticing. The rest of the exploit is coming up with ways to deduce what's in that memory location. This is all a bit esoteric for most people - they've got enough on their plate getting their code to work without going into the minutiae of exactly how the processor does its work.

      I personally think that we've been asking for trouble for years by using highly optimizing compilers and complex processors to cover for writing inefficient code. This is why I'm not surprised by the variability of the impact of the patches - I'd expect purpose-built, well-coded applications to suffer little to no impact, while I'd expect a lot of interpreted code, especially of the "hose it at a barn wall and see what sticks" type, to suffer quite a lot.
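(The speculative-access wormhole described above is essentially Spectre variant 1. The following is a purely illustrative simulation - no real speculation or cache timing happens; the "cache" is an ordinary set, and all the names are invented for the sketch.)

```python
# Toy model of a bounds-check-bypass (Spectre v1) leak. The victim's
# bounds check is honoured architecturally, but we model the speculative
# window by letting the out-of-bounds load leave a "cache" footprint.

SECRET = b"hunter2"          # stands in for kernel / other-process memory
public_array = bytes(16)     # the only data the victim may legally return

cache = set()                # which simulated cache lines are warm

def victim(index):
    # Simulated speculative load: it happens before the bounds check
    # resolves, touching a cache line indexed by the loaded byte.
    value = (public_array + SECRET)[index]
    cache.add(value)
    # Architectural behaviour: the check still holds, nothing is returned.
    if index < len(public_array):
        return public_array[index]
    return None              # speculation squashed, result discarded

def attacker_probe():
    # The attacker times accesses to every candidate line; warm ones
    # reveal which value the squashed speculative load touched.
    return sorted(cache)

victim(len(public_array))    # out-of-bounds index -> first secret byte
leaked = attacker_probe()
print(leaked)                # [104], i.e. ord('h')
```

The real attack repeats this per byte and recovers the values by timing cache accesses, but the control flow is the same shape.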

  7. Anonymous South African Coward Silver badge

    What will it take for Amazon et al to create their own, secure CPU?

    Or chuck out their current CISC infrastructure and switch over to a RISC-based one? (unlikely).

    1. Richard 12 Silver badge

      AWS customers want x86-64, so moving to ARM is unlikely.

      Unless AWS customers start demanding ARM of course.

    2. Lysenko

      Or chuck out their current CISC infrastructure and switch over to a RISC-based one? (unlikely).

      Given that we know ARM is impacted by these security flaws as well, the CISC/RISC calculation remains unchanged - unless you're suggesting they port the whole of AWS to MIPS.

      1. dbannon

        "Or chuck out their current CISC infrastructure and switch over to a RISC-based one? (unlikely).

        Given that we know ARM is impacted by these security flaws as well, ..."

        Ah, but apparently Itanium is not affected.

        (I remember the HP rep telling me it was the future of (HPC) computing, he'd know ....)

    3. LDS Silver badge

      "What will it take for Amazon et al to create their own, secure CPU?"

      A lot of time and money?

      Given that all the main CPU suppliers failed one way or another - with Intel failing worse than any of them - it's not exactly a simple and cheap task to build a high-performance, high-security CPU.

      1. I Am Spartacus
        Go

        Re: "What will it take for Amazon et al to create their own, secure CPU?"

        Probably a wholesale redesign of the instruction set. Ditch the reliance on X86-based instructions, ditch compatibility with 32-bit software, and design a brand-new instruction set based on a 64-bit architecture. If you did this, you would design a processor that makes board design simpler and cheaper, and simply get rid of all the page spaces that cause so much of the problem.

        Digital did it in the late 70s with the 32-bit VAX architecture, and tried to upscale this with Alpha. That died the death because it was expensive and eventually wasn't able to compete. Others tried, and fell, as the WinTel juggernaut rolled over all in its path.

        It would take a major investment by IBM, Intel, Motorola or AMD to build a new platform. That is a LOT of money and it would take decades to get market penetration to the point where it started to pay back.

        But I agree, it is probably what is needed.

        1. Danny 5

          Re: "What will it take for Amazon et al to create their own, secure CPU?"

          That would mean everyone would have to get replacements for any and all legacy apps, which is nigh on impossible for many companies.

          While ditching X86-32 might be a good solution, it's not a viable option, I'm afraid.

          1. Wulfhaven

            Re: "What will it take for Amazon et al to create their own, secure CPU?"

            Or, since legacy stuff tends to be on the old side of things, enclose them in a virtual environment running in a safe sandbox on the fancypantsy new hardware.

          2. Roo
            Windows

            Re: "What will it take for Amazon et al to create their own, secure CPU?"

            "That would mean everyone would have to get replacements for any and all legacy apps, which is nigh on impossible for many companies.

            While ditching X86-32 might be a good solution, it's not a viable option, I'm afraid."

            Folks were running x86-32 apps on UNIX with SoftPC in the 80s.

            Folks were running x86-32 apps on DEC Alphas with FX!32 in the 90s (I found that on a very low-end Alpha PC166, most apps were *quicker* than they were on a PPro-200, and the stuff that wasn't was only 5-10% off).

            There is no technical barrier to emulating x86 at decent speeds in 2018, the only blockers are ignorance, politics and lawyers (licensing).

        2. Giles Jones Gold badge

          Re: "What will it take for Amazon et al to create their own, secure CPU?"

          There's a reason X86 is still around: it generally works well.

          Itanium (or Itanic, as it was jokingly called) just didn't perform well despite being an elegant design, with Linus saying they threw away all the good parts of their designs.

        3. Warm Braw Silver badge

          Re: "What will it take for Amazon et al to create their own, secure CPU?"

          a wholesale redesign of the instruction set

          There's a lot of cruft in the instruction set, but this bug has got nothing to do with it. Variants of BOTH Meltdown and Spectre are found in computer architectures that are unrelated except for their common use of speculative execution.

          The common origin of these bugs is that CPU instructions execute a lot faster than memory accesses and ever more complex ways are being found to reduce the inevitable stalls in the instruction pipeline to a minimum. You'd need a very different kind of CPU - and very likely a very different kind of software and different kind of application domain - to make that go away.

          1. asdf Silver badge

            Re: "What will it take for Amazon et al to create their own, secure CPU?"

            Yep - in fact a whole lot of old IA-32 processors won't even be vulnerable to either Meltdown or Spectre. Of course, they won't be able to run modern GUIs that well either (yes, yes, JWM may still work, but go away, that guy). Sadly, you may have to be that guy to have a secure computer, i.e. pre-Intel-ME and pre-UEFI.

      2. Roo
        Windows

        Re: "What will it take for Amazon et al to create their own, secure CPU?"

        "it's not exactly a simple and cheap task to build a high-performance high-security CPU."

        Agreed, but folks following RISC design principles find it a lot cheaper and easier than building a fast x86... The design team sizes and benchmark results from the days when RISC vs x86 was a thing speak volumes for that.

    4. Anonymous Coward
      Anonymous Coward

      "What will it take for Amazon et al to create their own, secure CPU?"

      And how much will that cost?

      1. illiad

        Lots of time, lots of money, lots of building space... :/

        Ever wonder why there are only TWO mainstream CPU makers?????

    5. anonymous boring coward Silver badge

      Correct me if I'm wrong, but isn't modern X86 (x64) basically RISC architecture nowadays? Deep pipeline, very high frequencies, speculative execution, multiple execution units, and all the other benefits of RISC?

      (It's been a few decades, so I may have forgotten a few things.)

      I suppose the "reduced" in "RISC" may be somewhat missing, making it harder to verify security...

      1. Danny 5

        CISC/RISC

        Probably for the X86-64, but not the 32 bit stuff. I'd assume anything that's 64 bit will be RISC nowadays, but we're still dealing with that pesky 32 bit legacy crap on X86.

        A native X86-64 CPU would likely be RISC (I guess?)

      2. John Brown (no body) Silver badge

        "Correct me if I'm wrong, but isn't modern X86 (x64) basically RISC architecture nowadays?"

        That's my understanding too. X86 is basically an emulation running on the RISC core. I thought that's what the microcode was all about, ie the CPU is a computer running a microcode programme to pretend to be a CISC X86. I wonder what it would be capable of if the X86 bit was removed or some other way of exposing the native silicon to programmers?

        1. Roo
          Windows

          "That's my understanding too. X86 is basically an emulation running on the RISC core."

          I think that misrepresents what goes on. I'm not an authority on the topic, but here's my take on it:

          The (CISC) instruction decode stage(s) breaks the commonly used instruction sequences down to "micro ops".

          Breaking down a multi-cycle 'CISC' instruction into lots of little u-ops, then executing it in parallel with lots of other multi-cycle 'CISC' instructions, poses some problems in conveying the illusion to the kernel & user that the instructions are executed in an atomic way... That entire set of quite gnarly gotchas is simply not an issue for a true RISC style design - by intent and design.

          Some operations won't fit into that nice model - and for those we have microcode... Even 'RISC' chips can have microcode to handle the stuff that just doesn't fit. The Alpha had something slightly different called PALcode to handle those cases - where essentially the CPU was using a library of routines with access to implementation specific instructions... The ISA remained clean and it gave the DEC engineers a shot at implementing the machine specific crap in a RISC friendly way while keeping the details hidden from the users...

          For a giggle I recommend tracking down all the volumes describing the current Intel x86-64 ISA and then comparing to the equiv. DEC Alpha ISA reference manual (it's much shorter)... All available for free and locatable via Google... The page count gives you a measure of how much more 'challenging' it would be to validate an x86-64 derivative... If you actually have a crack at digesting both you'll probably give up long before you get through the x86-64 manuals, so I recommend starting with the Alpha first. ;)

          YMMV

      3. Anonymous Coward
        Anonymous Coward

        One of the central ideas of RISC was that by using a good compiler, you could break down a complex program into an optimum reduced instruction set program. You could do speculative execution as part of the compilation stage.

        Intel demonstrates that you can implement that "good compiler breaks down CISC into optimum RISC" in silicon, without a speed hit, at a competitive cost.

        1. Anonymous Coward
          Anonymous Coward

          aren't you missing something?

          "Intel demonstrates that you can NOT easily and securely implement that "good compiler breaks down CISC into optimum RISC" in silicon, without a speed hit, at a competitive cost."

          FTFY.

      4. Ian Joyner

        "Deep pipeline, very high frequencies, speculative execution, multiple execution units, and all the other benefits of RISC?"

        These speed up techniques are orthogonal to RISC and architecture – they can be used in any architecture. RISC has somewhat been oversold in this respect.

        RISC was originally to get as much on a single chip as possible (by making functionality simple) and thus making things fast, but this is no longer a constraint.

        1. Chemist

          "RISC was originally to get as much on a single chip as possible (by making functionality simple) and thus making things fast, but this is no longer a constraint."

          I thought RISC was an approach supported by measurements of actual operations in real program execution, which supported the idea that compilers usually used a limited set of 'simpler' instructions, and therefore the masses of increasingly complex ops being added took up too much silicon for their limited usage. (Even the hard-wired 6809 had ~6000 op-codes, from memory.) I also remember a BYTE article describing one of the earliest RISC CPUs being designed by students.

          1. Ian Joyner

            "I thought RISC was an approach supported by measurements of actual operations..."

            Certainly it is a good idea to find a minimal set of instructions. However, there could be instructions that compilers don't use but are necessary. Certainly, things like bounds checks should be orthogonal to the instruction set. If you need a bounds-check instruction, subversion is easy - just don't use it.

            I don't think the 6800 (6809??) had anything like 6000 op codes. Don't have time to research it now.

            I think a lot of misunderstanding is around the RISC issue. Certainly as much decision making is pushed up to compilers (or programmers) as possible. But trusting programmers and compilers is naive at best.

            1. Chemist

              "I don't think the 6800 (6809??) had anything like 6000 op codes."

              6809

              I was referring to the very extensive set of addressing modes, which, combined with the basic instructions, generated ~6000 op-codes. The data sheet mentions 1464 instructions.

          2. RandSec

            6809

            The 6809 was a completely different era. The corporate goal was to "extend the life of the 8-bit family." It used depletion-load NMOS, not CMOS, and the processor was about as slow as RAM, minimizing the advantages of caching. The hardware interpretation cycle of fetch, decode, execute was a real thing.

            The architecture was intended to fix problems known in using the 6800, not carry them along. That meant instruction (and hardware) incompatibilities were assured, which implied only assembly-language level compatibility. Unfortunately, that turned out to have an unexpected and substantial marketplace cost. Addressing modes were greatly expanded and complexly encoded which meant that machine-level debug was also more complex. Existing OEM designs needed to be redone. 20-20 hindsight lets us see the market value of full backward compatibility, even when future design will move beyond the old stuff.

            The 6809 certainly did not decode 6000 instructions individually, and was not based on a central state machine. The first byte of opcode (256 choices) was decoded by logical group to start timing lines which then handled subsequent bytes if any. That implementation (extended from the 6800) turned out to be relatively simple, reliable and easy to debug, but took way too much space, and so was the end of the line.

            They did end up selling full boxcar loads of 6809's to the gaming industry of the time.

            1. Chemist

              Re: 6809

              My comment was meant to illustrate that even at the time of the 6809, processor instruction sets were getting rather complex. You could write complex data-structure traversing code in just a few bytes in assembler (the important FORTH word NEXT was just 4 bytes long). I don't think any compiler would have used most of the instruction set. 6000 was the approx. number of unique op-codes taking into account the very extensive set of addressing modes.

  8. Anonymous Coward
    Anonymous Coward

    Software mitigations will have backdoors

    Nasty nation states will love this.

    1. asdf Silver badge

      Re: Software mitigations will have backdoors

      Makes you feel all warm and fuzzy when our society is completely reliant on technology insecure at the root and a lot of our enemies are just fine with sleeping in caves, or at least with the Norks using 1950s/60s tech. No chance for any asymmetric warfare there.

  9. Arisia

    Calling BS on the CPU graph

    That's clearly some spinning processes eating several cores with the original load overlaid on top.

    Nothing to do with the patch, except that the server didn't restart cleanly.

    1. Richard 12 Silver badge

      Re: Calling BS on the CPU graph

      No there isn't.

      Look at the scale - the bottom line is 20%, not zero.

      The base load has jumped from 20% to 60%. So this is the top end of the estimated impact.

      1. Destroy All Monsters Silver badge
        Paris Hilton

        Re: Calling BS on the CPU graph

        Ok, so just by listening to the gas turbine outside of a datacentre you can determine whether the patch has been applied or not?

      2. 2+2=5 Silver badge

        Re: Calling BS on the CPU graph

        @Richard12

        > Look at the scale - the bottom line is 20%, not zero.

        The bottom line is zero (at least on the version on the website when I just looked) but just not labelled as such.

        The original chart clearly has some time at zero CPU therefore - if the workload hasn't changed - there should be some zero CPU post patch. There isn't any, which leads to the conclusion that there is something else wrong.

        1. AdamWill

          Re: Calling BS on the CPU graph

          Also, as I mentioned in an earlier comment, the date doesn't make sense: the jump occurs on 2017-12-22. I haven't seen any suggestions Amazon was patching AWS for Meltdown on 2017-12-22.

        2. John H Woods Silver badge

          Re: Calling BS on the CPU graph

          "The original chart clearly has some time at zero CPU therefore - if the workload hasn't changed - there should be some zero CPU post patch. There isn't any, which leads to the conclusion that there is something else wrong."

          Sorry, but having been a performance engineer for some time I'd have to say that although you might be right that there is something else wrong here, it is simply not the case that a workload which has some zero % CPU should still show some idle time when a performance-impacting patch has been applied: simplest case, the patch could be causing enough impact for the processor queue to never be empty.
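
    A back-of-envelope sketch of that simplest case (the figures below are made-up samples, not measurements from the chart):

```shell
# If a workload needs 80% of one core pre-patch (so 20% idle), a 30%
# per-unit-of-work slowdown pushes demand past 100% of a core - the run
# queue never drains, and idle time disappears from the graph entirely.
busy=80       # pre-patch CPU demand, % of one core (sample figure)
overhead=30   # assumed patch overhead, % (sample figure)
new=$(( busy * (100 + overhead) / 100 ))
echo "post-patch demand: ${new}% of one core"
```

    At 104% demand there is no idle time left to show, even though the workload itself hasn't changed.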

  10. Si 1

    Are we sure gaming won’t be affected?

    Lots of games these days are “open world” which means the system is constantly streaming new chunks of the landscape from disk.

    I would imagine those sorts of games would be affected by this, as a general example they try to predict where the player will go next and often stream in the next area they think the player will visit. If the player then turns around the game has to hurriedly dump what it has loaded and stream in the data for the other direction.

    1. Anonymous Coward
      Anonymous Coward

      Re: Are we sure gaming won’t be affected?

      It's still not a huge impact; it may be "open world" but really it's only predicting what's just slightly off screen

  11. Sandtitz Silver badge

    PCID implementation in Linux?

    "If there's a bright side to all this, it's that the PCID feature in Intel's x86-64 chips since 2010"

    "PCID first saw Linux support in the 4.14 kernel released in November 2017"

    If the Reg article is correct, why on earth did this tech take seven years to be included in the kernel? Was PCID an unnecessary CPU feature until Meltdown was discovered?

    1. Destroy All Monsters Silver badge

      Re: PCID implementation in Linux?

      We need to ask on the kernel developer list...

      1. rob_leady

        We need to ask on the kernel developer list...

        The kernel mailing list archives seem to be notably offline for the last couple of days...

        https://lkml.org/lkml is currently giving a Cloudflare error.

    2. Drew 11

      Re: PCID implementation in Linux?

      Debian Stable is on 4.9.0, I believe, so no PCID support for you!

      Next!

      1. GreenReaper

        Re: PCID implementation in Linux?

        It's really 4.9.x. They'll incorporate any security changes as necessary. In addition, you can use kernels from backports - I just upgraded all our Debian Stable/Stretch machines to 4.14.13.
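
        As a rough self-check (assuming, per the article, that PCID support landed in kernel 4.14), you can compare your running kernel version against 4.14. A minimal sketch, with a hard-coded sample version standing in for the live value:

```shell
# Sample version; on a live system substitute: ver=$(uname -r | cut -d- -f1)
ver=4.9.65
# sort -V compares version strings numerically; if 4.14 sorts first, ver >= 4.14
if printf '%s\n' "$ver" 4.14 | sort -V | head -n1 | grep -qx 4.14; then
  msg="kernel $ver: PCID-aware kernel (>= 4.14)"
else
  msg="kernel $ver: predates in-kernel PCID support (< 4.14)"
fi
echo "$msg"
```

        With the sample 4.9.65 this reports the "predates" case, matching the Debian Stable situation above.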

  12. Michael H.F. Wilkinson Silver badge

    What about device drivers?

    I have heard of device drivers for certain cameras working under ASCOM being borked by the fixes to WIN 10, quite apart from any performance hit incurred by what can be very I/O intensive work during capture and processing of astronomical images. Are there other instances of device drivers failing?

  13. Tinslave_the_Barelegged Silver badge

    Filesystem choice?

    > terrible performance on the test system with zfs+compression+lustre,

    One aspect I have been wondering about, regarding apparent performance hits, is whether the choice of filesystem has any bearing on performance loss. Back around 2003, someone produced a set of benchmarks on various Linux filesystems, and one of the criteria was processor impact. If memory serves, jfs, followed by xfs, was especially light on processor cycles, while one would imagine zfs or btrfs would be fairly heavy on the processor. So I wonder if there is variation in the performance impact of the patches depending on filesystem choice.

    I suppose it is very early days on this issue, though, and those with most at stake will be testing these variables.

    1. This post has been deleted by its author

    2. Bronek Kozicki Silver badge

      Re: Filesystem choice?

      Good question. My take is that some filesystems (notably ZFS) make heavy use of the memory, which is fine if user space and kernel share the address space (little impact on cache) but pretty bad if cache needs to be cleared on every disk IO (only if the only cache in question is page translation tables).

  14. ForthIsNotDead Silver badge

    We'll (hopefully) see various popular libraries and systems be re-visited and their code-bases improved WRT performance.

  15. Charles 9 Silver badge

    Perhaps there needs to be a serious look into reducing the speed penalties associated with context switching: either by making the switches faster or by reducing the need for them by carefully moving more things into Userland.

    1. BinkyTheMagicPaperclip Silver badge

      This is what most operating systems have been doing for years: code that can be moved to userland, where an error won't take down the whole system, is moved there; some speed-critical paths might be contained in the kernel.

      There's a way to make switches faster/have less need - fix the CPUs...

    2. Anonymous Coward
      Anonymous Coward

      Re: speed penalties associated with context switching

      "a serious look into reducing the speed penalties associated with context switching ... by reducing the need for them by carefully moving more things into [the same shared space]."

      History records that when faced with such a problem a couple of decades ago - context switches in a secure protected environment were slowing things down too much vs context switches in a less protected environment - Emperor Gates improved the benchmark performance of Windows NT vs Win98 on the same hardware by moving more stuff into kernel mode on NT.

      By moving stuff into kernel code that had no need or right to be in kernel code, he looked good in media coverage benchmarking W98 against WNT, and he also laid the path for NT to be less robust (and less productive) than it needed to be.

      The risk was pointed out at the time in a few places, but it's taken a while for the loss in robustness and productivity to be noticed by the same people whose performance-related clamourings made it happen in the first place.

      1. Charles 9 Silver badge

        Re: speed penalties associated with context switching

        Been reading up on it. The need to reduce context switching is helping to drive a push to move the network interface into userland, much as graphics have been making the transition as well. It makes me wonder if there are certain interfaces that still need to remain in the kernel yet are so frequently accessed as to suffer in terms of context switching.
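
        A crude way to feel the cost of those kernel crossings (a sketch, assuming a Linux box with GNU date and /bin/true): run a trivial external program in a loop, which forks and execs a new process each iteration, and compare against the shell builtin ':', which never leaves the process.

```shell
runs=200
# External command: fork + exec + exit per iteration (many kernel transitions)
t0=$(date +%s%N); i=0
while [ $i -lt $runs ]; do /bin/true; i=$((i+1)); done
t1=$(date +%s%N); ext_ms=$(( (t1 - t0) / 1000000 ))
# Shell builtin: no new process, no exec - stays in userland
t0=$(date +%s%N); i=0
while [ $i -lt $runs ]; do :; i=$((i+1)); done
t1=$(date +%s%N); blt_ms=$(( (t1 - t0) / 1000000 ))
echo "external: ${ext_ms}ms  builtin: ${blt_ms}ms"
```

        The absolute numbers are machine-dependent; the point is the gap between the two, which is exactly the kind of overhead the Meltdown patches widen.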

  16. Anonymous Coward
    Anonymous Coward

    The best way to ensure this doesn't happen again is for all laptop makers and Apple to switch to AMD. Intel would then get their shit together.

    1. Charles 9 Silver badge

      Except AMD is still vulnerable to Spectre, and the fixes for that also induce a penalty (not to mention full solutions aren't ready yet, if ever).

      1. RandSec

        There is Vulnerable and then there is Vulnerable

        AMD EPYC servers have hardware encryption for VM's. AMD seems to be about as vulnerable as Intel with Spectre, but even if Spectre succeeds, what it gets is encrypted data on EPYC, versus plaintext data on Intel.

        1. GreenReaper

          Re: There is Vulnerable and then there is Vulnerable

          Doubt it helps if you compromise the kernel via cache manipulation. Ultimately if it uses the data, it'll have to decrypt it at some point.

  17. ID10TError

    Can anybody be hacked without file execution privileges? If not, then the issue is really moot. Sure, now that Spectre/Meltdown have been patched I CAN ALLOW ANYBODY TO EXECUTE WHATEVER THEY LIKE ON MY MACHINE. Really, if you have to touch it (or have admin privileges (starting an app remotely)) then what's the issue?

    1. sofaspud

      Say you've got yourself a datacenter running multiple VMs for who-knows-how-many clients. How much do you want to bet on all of them being completely perfect in adhering to security protocols?

    2. AdamWill

      yes

      ...ask yourself why browsers are being patched for this. Browsers. How are browsers affected? Keep thinking. Do browsers, perhaps, allow execution of untrusted remote code? Go on, you're nearly there...

  18. Dr Dan Holdsworth Silver badge

    What about SPARC

    Yes, I know SPARC is obsolescent, but plenty of big systems still run on Solaris/SPARC. Is this vulnerable too?

  19. Anonymous Coward
    Anonymous Coward

    As ever, all fine with Apple...

    https://www.macrumors.com/2018/01/03/intel-design-flaw-fixed-macos-10-13-2/

    1. anonymous boring coward Silver badge

      Re: As ever, all fine with Apple...

      No song and dance, but some actual information in the OSX change logs.

      What a difference to the annoying MS "secret society" bollocks.

  20. Anonymous Coward
    Anonymous Coward

    its all gone tits up

    It appears the RedHat Enterprise Linux 6.x patches break VMs: they fail to reboot after an update and have to be rescued, as the latest kernel causes boot failure.

    Had it with IBM softlayer and others.

    Softlayer are advising not to install RedHat Enterprise Linux 6.x updates

  21. Androgynous Cupboard Silver badge

    PCID support

    # grep -q pcid /proc/cpuinfo && echo ok

    ok

    Looking good here, so I'll be sitting tight for the next kernel. This is on a Xeon E2560 but, from Intel's site, it should apply to anyone with a Xeon E3 or later (launched 2011). Would it be a fair guess that anyone doing serious crunching is going to be doing it with a Xeon? I think I'm right that the Core series is limited to a single CPU per motherboard?

    1. GreenReaper
      Linux

      Re: PCID support

      But do you have INVPCID? That's required for reasonable performance, not just PCID.
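
      A quick way to check for both flags at once (a sketch: the flags string below is a hard-coded sample - on a real machine substitute the output of grep -m1 '^flags' /proc/cpuinfo):

```shell
# Sample flags string standing in for a real /proc/cpuinfo flags line
flags="fpu vme de pse tsc msr pcid sse2 invpcid"
for f in pcid invpcid; do
  case " $flags " in
    *" $f "*) echo "$f: present" ;;
    *)        echo "$f: missing" ;;
  esac
done
```

      Both need to show up as present for the faster PCID-based mitigation path.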

  22. Anonymous Coward
    Anonymous Coward

    Conspiracy theories?

    I wonder if this is a side effect or is the patch actually crippling Monero mining rigs on purpose?

    If you read the forums, a lot of folks using BTC miners found that updating their firmware had a noticeable effect on the hash rate; unfortunately it's a requirement as it contains bug fixes and other patches.

    Without the updates they won't hash at all due to subtle changes in the BTC algorithm.

    As a data point I tried installing the 2018 rollup on my ancient netbook (4GB DDR3, 1.66G dual core) and it did seem slightly slower but haven't yet tried anything processor intensive.

  23. JuJuBalt

    Does this make ZIPing/RARing especially slow?

  24. vnomura

    CPU load graph in article dated December....

    Hi All,

    I thought the patch was released in Jan 2018, so why is the CPU load graph in the article showing the increased CPU load starting Dec 22, 2017? This was before the CPU bug was publicly known.

  25. Anonymous Coward
    Anonymous Coward

    High performance workaround

    Remediating Meltdown – involves enforcing complete separation between user processes' virtual memory spaces and the kernel's virtual memory areas

    If switching contexts is such a performance hit - would a less secure workaround be to leave the bug in place, but to randomise / relocate / encode / encrypt objects in the kernel's virtual address space at a rate fast enough to outpace the rate at which the memory can be read via this exploit? Perhaps this would work better if the kernel's "virtual" memory was forced to remain in main memory?

    1. Anonymous Coward
      Anonymous Coward

      Re: High performance workaround

      As I understand it, relocating the virtual address space is costly (thus why it's only done once), meaning you can't do it quickly enough without killing your performance.
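
      For scale, a back-of-envelope on the re-randomisation idea (all figures are made-up samples, not measured Meltdown rates): whatever the exploit's read rate, the re-mapping would have to complete faster than a full pass over the data you want to protect.

```shell
leak_rate_kbs=500            # assumed exploit read rate, KB/s (sample figure)
protected_kb=$((16 * 1024))  # 16 MB of sensitive kernel data (sample figure)
t=$(( protected_kb / leak_rate_kbs ))
echo "full dump takes ~${t}s - re-randomisation must beat this, every time"
```

      And each re-randomisation pass is itself a pile of page-table work, which is the same kind of cost the Meltdown patches already impose.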

  26. MrT

    Are the effects cumulative...?

    Just curious about the current impact being specific to individual issues here (the 2 for Spectre and 1 for Meltdown).

    At the moment, people are talking about it as if it's always going to be one fix for all. Is it possible to eventually just patch one aspect and not be hit as hard?

    In a brief moment of insanity, I wondered if that's what the Intel "mitigated over time" might mean. Instead of just thinking it means "customers will just buy replacement kit", which is probably a lot nearer the actual meaning.

  27. Ian Joyner

    Catalyst for Industry Rethink

    This is big enough to start an industry rethink. For too long we have given way to the performance needs of scientific processing while ignoring the issues that are faced by the rest of computing - that is security and software correctness, robustness, and reliability. Perhaps computer science courses are responsible for this, since you can objectively measure performance, but the many other factors you can't. So we have a generation of programmers and hardware engineers thinking about the wrong issues.

    Scientific programming is actually quite simple, but compute intensive and complex in ideas. These ideas though are succinctly expressed in a few equations. But you have a simple program that runs for ages to get a result (it could be argued this is a simplification). But generally scientific programs are well specified to satisfy a particular goal.

    Other computing by comparison is simple, yet the development results in complex and large programs because the goals are much more difficult to define.

    I think they got it right in the 1950s to separate COBOL and FORTRAN. I don't like this separation but it seems to have become a fundamental fact because of the very different goals. But in processor design we have tried to merge the two. Security and correctness checks will slow a processor down, so they have been ignored. The RISC vs CISC debate is also where we can see the division, although you can use RISC and CISC for both scientific and the rest (I hesitate to call it business computing these days).

    But security must be baked in at as low a level as possible. You really cannot get around that and should not do so. But that is what has been done – security and correctness have been sacrificed for performance. Performance isn't bad – yes give me more of it, but don't sacrifice other crucial issues.

    If you don't put security at lower levels, it must be done at higher levels which will cost far more in terms of processor cycles for something which is not as accurate in terms of being implemented in weak heuristics (guesses) rather than strong rules. If these guesses miss, we can get false positives requiring wasted human interaction, or miss a real attack which can end up costing a lot in terms of money and human time. Virus-checking software checks for software that might do a buffer overflow, rather than directly blocking buffer overflows and out-of-bounds access.

    We urgently need processors that do bounds checking and other security checks. Security should not be left to MMUs (MMUs themselves seem like an afterthought to provide virtual memory). Yes, to do this might require a decade-long effort.

    In operating systems, we need to get back to microkernels such as Andrew Tanenbaum's Minix, which has just a few thousand lines that run in kernel mode, rather than large monolithic kernels that run everything in kernel mode. Maybe revisit MULTICS and the Burroughs B5000, both in architecture and operating system. Systems should be designed as a whole, rather than just a CPU - this should be done by software and security experts, not electronic engineers. We now know that concepts such as virtual memory and security are essential to computing, not to be treated as afterthoughts.

    Of course changes to architectures will break a lot of C programs out there – but for the better. We also need to address the issue of systems programming languages as well. C (and C++) have been too weak in terms of security and correctness for far too long, and the industry has got away with this dire situation – up until now that is. Systems programming languages should only be used for operating systems and not extended to applications programming.

    1. Anonymous Coward
      Anonymous Coward

      Re: Catalyst for Industry Rethink

      There are many security features in x86 CPUs that went unused because they slow down applications.

      There are four rings, but only two are used. Thirty years ago Intel suggested running the core kernel at ring 0, kernel I/O at ring 1, OS libraries at ring 2, and user applications at ring 3. It would have meant a lot of ring transitions, and thereby slower performance. Nobody did it, also because all other CPUs had only supervisor/user modes.

      Using separate segments for code and data would make code not modifiable (and even unreadable except by the CPU while executing it - goodbye, ROP), while data segments could be read-only or read/write, but not executable. Of course, you would not be able to access memory outside the segment boundaries. AMD removed segments in x86-64 because nobody used them - everybody just created huge segments encompassing all the virtual address space, and then used paging to map memory into them.

      A BOUND instruction to check array access was introduced in the 80286, but it raised an interrupt, so it was awkward to use.

      OS and application designers took many shortcuts to improve performance. The flat memory model is simple, fast, but insecure.

      CPU designers should improve the speed of privilege transitions, instead of adding lots of instructions to speed up cat-video display.

      1. Ian Joyner

        Re: Catalyst for Industry Rethink

        "There are many security features in x86 CPU that went unused because they slow down applications..."

        Not really sure what you are saying here. I'll make a few comments anyway.

        Features like bounds checking should be orthogonal to the instruction set. That is, no bounds-checking instructions should be added - all instructions should be checked (where necessary). Trusting programmers or compilers to include bounds checking would be naive. That is why I say such things should be built into the lowest hardware levels. They will also be much more efficient there.

        An OS and physical view might look at memory as a flat array. But an application will take a structured view where things are entities, objects, etc. It is mapping this structured view onto the flat view that is important and making sure you can't flow from one structure (object) into another. Objects should be encapsulated.

        Yes, CPU designers should be interested in a minimal instruction set. My suggestion of CPU (and system) redesign is certainly not to go mad with instructions for every situation - that would be a backward step.

      2. AndersBreiner

        Re: Catalyst for Industry Rethink

        "

        Using separate segments for code and data would make code not modifiable (and even unreadable but by the CPU while executing it - goodbye, ROP), while data segments could be read only or read/write, while not executable. Of course, you would not be able to access memory outside the segment boundaries. AMD removed segments in x86-64 because nobody used them - everybody just created huge segments encompassing all the virtual address space, and then used pagination to map memory into them.

        "

        You can do that with the NX bit. Look at the flowchart for all the checks that need to be done on a far jmp or segment load in the 386 manual. It's impossible to make that algorithm run fast. NX is just a bit in the page table entry.

        And for blocking Return Oriented Programming exploits there's a better option than segments -

        https://software.intel.com/sites/default/files/managed/4d/2a/control-flow-enforcement-technology-preview.pdf

        "

        The ENDBRANCH (see Section 7 for details) is a new instruction that is used to mark valid indirect call/jmp targets in the program. This instruction opcode is selected to be one that is a NOP on legacy machines such that programs compiled with ENDBRANCH new instruction continue to function on old machines without the CET enforcement. On processors that support CET the ENDBRANCH is still a NOP and is primarily used as a marker instruction by the in-order part of the processor pipeline to detect control flow violations. The CPU implements a state machine that tracks indirect jmp and call instructions. When one of these instructions is seen, the state machine moves from IDLE to WAIT_FOR_ENDBRANCH state. In WAIT_FOR_ENDBRANCH state the next instruction in the program stream must be an ENDBRANCH. If an ENDBRANCH is not seen the processor causes a control protection fault else the state machine moves back to IDLE state.

        "

        Trying to roll back to using 286 and 386 era stuff like segment limits is a bad idea.

    2. Anonymous Coward
      Anonymous Coward

      Re: Catalyst for Industry Rethink

      "Of course changes to architectures will break a lot of C programs out there – but for the better."

      No, because people still want results yesterday. They would rather have it fast than have it right, because a right answer after the deadline is worse than useless. That's why processor designs have mostly kept to speed except for those specialized architectures (that include things like tagged memory) demanded by those few markets where security means more in terms of money than speed.

  28. Miss_X2m1

    Was it a flaw?

    Why does everyone imagine this was a flaw in the chips? I think it was a deliberate backdoor built into the chips for governments, but the information about the backdoor is now out in the wild so they have to close it. I can't imagine this is some type of flaw... nope.

  29. AndersBreiner

    Oh dear. Both my 2012 MacBook Pro and my ancient Windows laptop have pre-Haswell CPUs and hence no PCID.

    Then again the MacBook is running Yosemite, which doesn't get an update anyway. I've been putting off an upgrade to High Sierra because I'm worried it will run like crap.

    The Mac does have 16GB of RAM and a 1TB SSD, so it would probably have been OK with HS. Now I'm not so sure. And if I want to buy a new Mac I'd need to spend $1899 to get the same amount of (non-user-upgradeable) RAM and half the amount of (user-upgradeable but proprietary) SSD space.

    On the other hand, Google are going to patch Chrome to reduce the resolution of the timer, which makes side-channel timing attacks like Meltdown hard or impossible. And Microsoft patched both old and new OSs.

    tl;dr - Intel and Apple 0, Microsoft and Google 1.
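The Chrome mitigation mentioned above amounts to quantizing the clock a script can read, so that sub-microsecond timing differences vanish. A minimal sketch of the idea (the 100µs granularity is an assumption for illustration; Chrome's actual step size and added jitter differ):

```python
# Sketch of timer-resolution clamping as a side-channel mitigation:
# round the high-resolution clock down to a coarse step so a script
# can no longer time individual cache hits versus misses.

def coarsen(t_us, granularity_us=100):
    """Round a timestamp (in microseconds) down to a multiple of the
    granularity, hiding any difference smaller than one step."""
    return (t_us // granularity_us) * granularity_us

# A cache hit (~0.1us) and a miss (~0.3us) become indistinguishable...
assert coarsen(1000.1) == coarsen(1000.3)
# ...while differences larger than the granularity still show up.
assert coarsen(1000) != coarsen(1200)
```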

  30. Lion

    Buying a refurbished PC, anyone? There are going to be thousands of business-grade PCs replaced over the next two years due to the enterprise migration to W10. Whereas in the past the refurb market has always been a nice place to scoop up a good system for a decent price, the Meltdown and Spectre vulnerabilities are going to render these systems (especially the Intel ones more than 5 years old) 'toxic waste'.

  31. Anonymous Coward
    Anonymous Coward

    Scada systems

    Whilst the impact of Meltdown and Spectre on IT infrastructure systems may be significant in terms of reduced redundancy/performance, it is not going to kill anyone. However, the same hit to the response times of PLC-based control systems potentially could.

    So far, I would say, PLC manufacturers have been slow in publishing risk assessments. One would have thought that after Stuxnet, Siemens for example would have learned from their mistakes and hurried to release something to dismiss the idea that Meltdown might prove an even more apt name than it already is.

    If they have, I cannot find it.

  32. francis.mondia.et
    FAIL

    Red Hat Patches Does Not Contain Latest Microcode Update

    Has anyone tried patching their RHEL 7 system with the latest updates? They include the patch for these flaws, but sadly they don't fix variant 2 of Spectre. Verified this on a fresh RHEL 7 install, updated to the latest patches via Red Hat Subscription Manager, yet running their detection script still tells you it's vulnerable.

    On a more technical note, the culprit seems to be that the update does not yet contain the latest microcode for the processor we are using: the microcode version is still 0x700000d, dated 2016-10-12. The microcode for this processor is already available from Intel, though, as we have manually applied it on other systems.

    Reported this to RH and hope they release updated patches soon.
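For anyone wanting to perform the same check, the loaded microcode revision the kernel reports is visible in /proc/cpuinfo on Linux. A small Python helper (a sketch; the field name is as it appears on RHEL 7, and the sample value is the revision quoted above):

```python
# Extract the microcode revision(s) the kernel reports per CPU.
# On Linux each CPU block in /proc/cpuinfo has a "microcode : 0x..." line.

def microcode_revisions(cpuinfo_text):
    """Return the set of microcode revision strings found in the text."""
    revs = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("microcode"):
            revs.add(line.split(":", 1)[1].strip())
    return revs

# Example using the revision reported in the comment above:
sample = "processor\t: 0\nmicrocode\t: 0x700000d\nmodel name\t: Xeon\n"
assert microcode_revisions(sample) == {"0x700000d"}

# On a live system:
# with open("/proc/cpuinfo") as f:
#     print(microcode_revisions(f.read()))
```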

    1. GreenReaper

      Re: Red Hat Patches Does Not Contain Latest Microcode Update

      Maybe they removed it because they found it was crashing those systems.

  33. cooloutac

    Wow, 5th-generation boards not getting BIOS patches. Is the BIOS patch necessary for protection from these attacks?

    I guess we have to buy a new PC every year or two, like Android phones, to stay up to date. Security is real expensive. This is a sad future we're headed to.

  34. Anonymous Coward
    Anonymous Coward

    Skyfall and Solace vulnerabilities?

    Reports indicate there are two new vulnerabilities - possibly unearthed by UK hosting firm Mythic Beasts (source: https://react-etc.net/entry/skyfall-and-solace-vulnerabilities). Any takes on whether this is real?

    1. diodesign (Written by Reg staff) Silver badge

      Re: Skyfall and Solace vulnerabilities?

      Mythic Beasts is just the hosting company. And it's basically bollocks. It's 99% a hoax.

      C.
