Kernel-memory-leaking Intel processor design flaw forces Linux, Windows redesign

A fundamental design flaw in Intel's processor chips has forced a significant redesign of the Linux and Windows kernels to defang the chip-level security bug. Programmers are scrambling to overhaul the open-source Linux kernel's virtual memory system. Meanwhile, Microsoft is expected to publicly introduce the necessary changes …

  1. aaaa
    Happy

    Refunds and Compensation

    Companies like Apple that offer a 'no questions asked' refund policy are going to be very, very busy refunding every Christmas gift with an 'Intel inside'. You think Apple (and other vendors) are just going to take that hit? Intel will be paying compensation to vendors for sure, certainly for every chip shipped in the past 3 months - but more likely 6-12 months, since this will affect the pipeline and inventory too. Who's going to buy anything with 'Intel inside' unless the vendor can guarantee that it's new silicon?

    Consumer law will also come into play as Aqua Marina detailed in "I wonder where we stand legally now?" (above).

    But the really interesting thing will be whether companies like HPE go to bat for their enterprise support customers. Because that'll be a killer whitebox shakedown. i.e.: 'I bought HPE and they replaced my server CPU' and 'I bought a whitebox and now it runs 30% slower and I've got no recourse'. It's little cost to HPE and a marketing windfall - they just have to jump on the cueball-intel bandwagon.

    This is going to be good fun to watch.

    1. JLV
      Trollface

      Re: Refunds and Compensation

      Wow, this seems like a bad one. The Pentium floating point division bug was, IIRC, somewhat of an unlikely bug to hit, which is why it took a long time to spot - it really didn't affect most people too often, and Intel could probably claim it worked well enough for most users. In any case, quoting Wikipedia:

      On December 20, 1994, Intel offered to replace all flawed Pentium processors on the basis of request, in response to mounting public pressure.[5] Although it turned out that only a small fraction of Pentium owners bothered to get their chips replaced, the financial impact on the company was significant.

      A really nasty security exploit doesn't have that unlikely-ness protection - every black hat is going to have at it.

      If Intel did the decent thing and replaced the CPUs for their OEMs (very unlikely), and assuming warranty/fit-for-purpose protections do their job and force vendors to make good (equally unlikely)...

      then Apple may suddenly rue their tendency to solder things everywhere. Ditto for the Surface and its well-publicized glue-it-all.

      1. Adam 1

        Re: Refunds and Compensation

        One of the compilers that I use still has an option for Pentium-safe floating point division. A lot of people didn't bother swapping them over, I guess, because OS and compiler vendors whacked together a quick workaround, and pulling out a CPU is beyond the technical knowledge of most.
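
        The classic check for a flawed chip was a single division - Thomas Nicely's original operands. A minimal sketch (Python; the constants are from the public accounts of the bug, so treat the exact digits as illustrative):

```python
# Thomas Nicely's operands, which exposed the Pentium FDIV bug.
# Correct quotient: 1.333820449...; a flawed Pentium returned ~1.333739069.
x = 4195835.0
y = 3145727.0
q = x / y
print(q)
# On correct hardware the residual x - q*y is essentially zero;
# the flawed divider left an absolute error of roughly 256 here.
assert abs(x - q * y) < 1e-3
```

        The software workarounds mentioned above did essentially this at a lower level: detect the flawed divider, then route divisions through a corrected code path.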

        Either way, I'm not looking forward to this patch. No customer is going to call support and say "hey there, because of Intel's screw up we're not getting adequate performance". It's going to be a bunch of your product is a bunch of ... Fix it yesterday.

      2. PNGuinn
        Trollface

        Re: Refunds and Compensation @JLV

        PSSST!

        Q. How many Intel Enjineers does it take to change a lightbulb?

        A. None. Someone broke in and stole the new lightbulb.

        Go on - you know you want to ...

    2. Anonymous Coward
      Anonymous Coward

      Re: Refunds and Compensation

      Apple would be able to force Intel to provide replacement CPUs to cover purchases over the last x days/months, thanks to their implied threat to switch to AMD or their own SoC. Maybe even pay the costs of the recall.

      Intel will just ignore the PC OEMs, because they can't switch overnight, and they'll have forgotten about it by the time they could, once Intel throws some marketing dollars their way.

    3. Flocke Kroes Silver badge

      Re: Refunds and Compensation

      Take a look at what happened with the Memory Translator Hub. Intel will watch calmly while the smaller vendors get sucked into the wood chipper. Even if the big distributors get free replacement chips from Intel, the cost of distribution and installation will land on the distributors. Anyone - big or small - who soldered Intel CPUs to the PCB is in for a world of hurt.

    4. Nick Ryan Silver badge

      Re: Refunds and Compensation

      Given that Cisco was royally screwed over by Intel's Atom issues and subsequently just passed the costs on to buyers of its kit, what do you think will happen with this latest Intel failure? Pretty much the same, and I'm pretty sure that Intel's contracts (hidden under several miles of NDA) will disclaim any responsibility for anything.

  2. Down not across

    10 years?!

    It is understood the bug is present in modern Intel processors produced in the past decade. It allows normal user programs – from database applications to JavaScript in web browsers – to discern to some extent the contents of protected kernel memory.

    That's taken a long time to surface, then.

    Guess it has been quite useful for 5-eyes.

  3. Herby

    Genetic Diversity?

    Given that a major share of the CPUs out there are from one vendor, this is what you get: a hardware bug that permeates several generations of chips. Nice to see that AMD (minority report) doesn't have the problem.

    Maybe Chipzilla Intel is too big and needs to be sliced and diced.

    Thought experiment: What would Intel be if IBM had picked a different processor for its PC back in 1981 (Motorola 68000?)?

    1. Pascal Monett Silver badge

      Re: Genetic Diversity?

      And that would change what, exactly ?

      We'd be griping about a bug in Motorola processors instead. Whoop-de-doo.

    2. Trilkhai

      Re: Genetic Diversity?

      >What would Intel be if IBM had picked a different processor for its PC back in 1981 (Motorola 68000?)?

      I'm guessing Intel would be a footnote in that case… Question is, how would the market have turned out if IBM had chosen Motorola's 68000 or Texas Instruments' 16-bit TMS9900?

      1. Dan 55 Silver badge

        Re: Genetic Diversity?

        There'd have been Amigas and Macs everywhere as software would have been easy to port and nobody would have bought a PC if they could have chosen one of the other two machines.

        Also, Microsoft wouldn't exist, as they only got where they are today by making quick and dirty ports onto x86 and Windows.

        It'd have been brilliant.

        1. Destroy All Monsters Silver badge
          Paris Hilton

          Re: Genetic Diversity?

          Nah.

          National Semiconductor's 32000-series CPUs were pretty nice. And if Moto had finally succeeded in doing the 88000, who knows.

      2. Robert Sneddon

        68000 versus 8086

        The 68000 wasn't ready for production when IBM were looking for a CPU for the desktop PC (Back in the early 80s I played around with a dev board where the 68k was clocked down to 4MHz, half its rated speed). Motorola also didn't have the support chips needed to build a complete system so everything like interrupt controllers, floppy disc controllers, UARTs, graphics chip drivers etc. would have to be implemented with lots of TTL.

        The 8086 (and the version IBM actually used, the 8-bit-bus 8088) was designed to use existing 8080-series support chips, being bus-compatible with the older device. In addition its internal registers and addressing modes were also backwards-compatible so migrating existing programs from 8080-series CP/M versions was piss-easy. The 68000's "clean sheet" instruction set and internal register structure meant everything would have to be rewritten from scratch, especially boot code, low-level device drivers and kernel code.

        1. Peter Gathercole Silver badge

          Re: 68000 versus 8086 @Robert Sneddon

          Whilst I don't have the experience of the 68000 that you obviously had, I believe that there were a significant number of support chips from the 680X 8-bit family that worked with the 68000.

          I'm trying to think back, but I'm sure I saw a working 68000 system around the time that the IBM PC was new. Of course, that could have been because small companies were more agile than IBM, but I think that the IBM PC was a very quick development which didn't start until 1980, a year after the 68000 was released.

          No, I think that the reason why IBM went with Intel was mainly cost.

          If the 68000 had been chosen for the single-tasking, floppy-disk-only original IBM PC, I think that we would have had multi-tasking desktop systems much earlier, because the 68000 was designed as a 32-bit family of processors from the outset, rather than the 8/16-bit kludge of a processor that the 8088 and 8086 were (and which became worse still with the 32-bit and 64-bit evolutions).

          1. Phil O'Sophical Silver badge

            Re: 68000 versus 8086 @Robert Sneddon

            Whilst I don't have the experience of the 68000 that you obviously had, I believe that there were a significant number of support chips from the 680X 8-bit family that worked with the 68000.

            There were indeed, the 68K was designed to be compatible with them, and would switch to a synchronous bus cycle when it detected a 68xx peripheral. I remember using 68xx-series UARTs with them, as well as the 68230 parallel/timer chip.

            I'm trying to think back, but I'm sure I saw a working 68000 system around the time that the IBM PC was new.

            The Sun-1 would have come out around the same time, with the Sun-2 and Sun-3 roughly aligning with the PC-AT and PC-XT286s.

          2. jelabarre59

            Re: 68000 versus 8086 @Robert Sneddon

            No, I think that the reason why IBM went with Intel was mainly cost.

            I had been to a talk some years back (probably like 25 years ago) given by one of the original IBM-PC designers. The story I heard there was even more ironic, given the situation these days. IBM had been concerned at the time about having multiple sources for all the components. At the time the 68000 was only sourced from one location, while Intel had licensed the 808x processors to multiple manufacturers.

            Ironic in that Intel has long since decided they wanted to be the only manufacturer of their chips. Had the manufacturing situation back then been like it is now, IBM would have gone with the ARM (OK, the equivalent would probably have been the Z80).

        2. Stoneshop

          Re: 68000 versus 8086

          Motorola also didn't have the support chips needed to build a complete system so everything like interrupt controllers, floppy disc controllers, UARTs, graphics chip drivers etc. would have to be implemented with lots of TTL.

          Bollocks. With peripherals there are some support chips you actually want from that particular CPU family to make life easier, but as long as those UARTs, FDCs, video controllers etcetera use the same signal levels and are roughly compatible with regard to clock speed, then you can just mix and match with maybe a bit of address select and glue logic. And custom chips to do the heavy lifting of tacking all those things together were quite common back then already.

          Look at the Acorn BBC B: 6502 CPU. 6845 CRT controller. 6850 ACIA (UART). 6522 VIA. Two custom ULAs, which replace several tens of 74xx logic gates each. uPD7002 ADC. 8271 FDC.

          And the IBM PC? It was actually using that same Motorola 6845 CRTC for its MDA and CGA interfaces.

          1. Robert Sneddon

            Re: 68000 versus 8086

            ULAs weren't available in the early 1980s, hence IBM's use of lots of TTL to get it to work at all. Yes, the 68000 eventually got a number of dedicated support chips, but that was a long time after the original IBM PC was in production. I think the 68008 (the cut-down 8-bit-bus version of the full-sized 68k), which any 68k-based IBM desktop would have used, was also late.

            The 68000's bus was asynchronous, relying on a data-available strobe from each peripheral and memory controller, which was tricky to make work with regular clocked peripherals. It had advantages - it made mixing slow and fast devices on the CPU bus easy - but it didn't work easily with the simpler existing chip families, not even the Motorola 8-bit designs like the 6800. It could be bodged to do so (I designed circuitry to do just that back in the day) but it took extra glue logic and wasn't elegant.

            IBM had to go with what they could buy in predictable quantity numbers at a decent price that would do the job and the 68000 just wasn't there when the door closed.

            1. Stoneshop

              Re: 68000 versus 8086

              ULAs weren't available in the early 1980s hence the IBM's use of lots of TTL to get it to work at all.

              So the ZX81 and BBC B were built with technology that didn't exist? Interesting.

              Now if IBM decided to just use standard components, that's another matter. But ULAs and other MSI-level custom chips were definitely available at the time the PC was designed.

              1. Peter Gathercole Silver badge

                Re: 68000 versus 8086

                ULAs (Uncommitted Logic Arrays) were a UK innovation, mainly designed by Ferranti.

                They allowed some of the layers of the wafers to be a common design, with the last few acting as a customization to get the chip to do what was needed. You could think of them as a half-way house to an FPGA, but with the configuration baked into the last few layers of silicon rather than set after manufacture.

                I don't believe that any US company really bought into using ULAs, but they were used, as already pointed out, for the ZX81, BBC Micro, ZX Spectrum and Acorn Electron to reduce the chip count.

                But production problems with ULAs were one of the main reasons why several of these systems were delivered late. Ferranti eventually disappeared into Marconi, which was sold off when that company went bust, so the technology disappeared.

    3. alain williams Silver badge

      Re: Genetic Diversity?

      Thought experiment: What would Intel be if IBM had picked a different processor for its PC back in 1981 (Motorola 68000?)?

      Oooh - that would have been nice. An unsegmented 32-bit memory space, unlike Intel's 16-bit addressing with memory segments... Although the Intel CPUs came with a defined MMU (Memory Management Unit), which the M68k chips did not; I seem to remember 3 different sorts of M68k MMUs floating around.

      If IBM had gone M68k, would Bill Gates have been able to get in on the act ? We might be living in a very different world in which Amiga was the big boy.

      1. Brian Miller

        Re: Genetic Diversity?

        Of course Bill Gates would have gotten in on the IBM PC if it had used the 68000. The problem was with Digital Research not signing an NDA. So Gates would have still made a DOS for IBM.

        1. Doctor Syntax Silver badge

          Re: Genetic Diversity?

          "So Gates would have still made a DOS for IBM."

          Not "still made". Just "made" instead of "bought".

      2. Nick Ryan Silver badge

        Re: Genetic Diversity?

        Well we'd have a much nicer assembly language to deal with. I still cry on the, admittedly now very rare, occasions that I have to drop down to x86 assembly level debugging and suffer the brain ache of an architecture that produces code that often seems to spend more time swapping values between limited registers than doing anything overly useful.

    4. Doctor Syntax Silver badge

      Re: Genetic Diversity?

      "What would Intel be if IBM had picked a different processor for its PC back in 1981 (Motorola 68000?)?"

      Z8000?

    5. Dave 13

      Re: Genetic Diversity?

      Monoculture is a knife with no handle: incredibly sharp, but prone to the occasional loss of a finger.

  4. Paratrooping Parrot
    Mushroom

    List of CPUs affected?

    Is there a list of CPUs affected? I have an old Arrandale based Core i3 laptop. Would that be affected? Do we have to wait for the embargo to be lifted? Will Windows 7 be updated as well?

    1. TonyHoyle

      Re: List of CPUs affected?

      Presently it's assumed to be all Intel CPUs, with newer ones (<2 years old) having extra instructions that drop the hit on benchmarks to 'only' 30%.

      Windows 7 is on extended support, so it should get a patch, but that's up to Microsoft.

  5. Anonymous Coward
    Anonymous Coward

    So much bullshit, so little time

    Remember the P67 chipset? I laid down big $$ for an i7 2600K and had to yank my Intel mobo and send it back to Newegg, then I had to relicense WinDoze on the replacement board they sent me.

    Details of the P67 failure and complete recall were always sketchy, but allegedly the speed of the chipset degraded 6% over two years.

  6. Anonymous Coward
    Anonymous Coward

    Remember the P67 chipset fiasco/recall? I plunked down $$$ for an i7 2600K and had to yank the mobo and send it back for a new one, then deal with Microshaft for another Windows 7 license. Details released on the design flaw of the P67 were sketchy, but what Intel stood by was that the recall was due to the speed of the chipset reducing 6% over 2 years.

    Never got even an apology from anyone.

  7. DXMage

    NSA specific bug?

    I have to wonder if this wasn't something that the NSA insisted get put in under the whole umbrella of "National Security".

    1. tim292stro

      Re: NSA specific bug?

      > I have to wonder if this wasn't something that the NSA insisted get put in under the whole umbrella of "National Security".

      No, no... you don't get it. The NSA doesn't need to "put things in". The Clipper Chip demonstrated to them that if they applied themselves a bit, like they do with TEMPEST, they could find the stuff idiots put in themselves, and just get back to work reading things in plain text - after all, "idiots" who didn't work for the NSA were able to crack the Clipper protections, which demonstrated that their adversary was more technically capable than they expected to have to be.

      In the computer world companies are pretty well matched on what they can do within a given instruction set architecture, so they have to find ways to "cheat" more performance out of a given ISA in order to get their competition over a barrel. There is an engineering axiom "You don't get ANYTHING for NOTHING". What we have here is the problem being improperly constrained from an engineering standpoint - someone said: "Let's get more performance without using more power or area on the die." Nobody said: "Let's get more performance without using more power or area on the die, and without impacting security".

      It's a common theme seen all over tech that security is considered an after-the-fact-add-on, rather than integral to the design. The NSA knows that, and they can demand the full documented ISA from Intel and we'd never know it - the government are also not obligated to inform Intel of any problems found. Kind of like the FBI doesn't need to tell Apple how they cracked those phones after the terrorist shooting. They can just quietly go about their jobs and no one needs to be the wiser (one of the reasons I wish the FBI would just shut up about the encryption back-door thing). I think these agencies talk too much for their own good.

  8. CheesyTheClown

    Counting chickens?

    First, this is news and while I don’t buy into the whole fake news thing, I do buy into fantastic headlines without proper information to back it up.

    There are some oddities here I’m not comfortable with. The information in this article appears to make a point of it being of greatest impact to cloud virtualization, though the writing is so convoluted, I can’t be positive about this.

    I can’t tell whether this is an issue that will actually impact consumer-level usage. I also can’t tell whether there would actually be a 30% performance hit or whether there would be something more like 1% except in special circumstances. The headline is a little too fantastic, and it reminds me of people talking about how much weight they lost... when they include taking off their shoes and wet jacket.

    Everyone is jumping to conclusions that AMD or Intel is better at whatever. Bugs happen.

    Someone claims that the Linux and Windows kernels are being rewritten to execute all syscalls in user space. This is generally crap. This sounds like one of Linus’s rants about to go haywire. Something about screwing things up for the sake of security as opposed to making a real fix.

    Keep in mind, syscalls have to go through the kernel. If a malformed syscall is responsible for the memory corruption, making a syscall in another user thread will probably not help anything as the damage will be done when crossing threads via the syscall interface.

    Very little software is so heavily dependent on syscalls. Yes, there are big I/O things, but we’re not discussing the cost of the work behind syscalls, we’re talking about the call cost itself. Most developers don’t spend time in dtrace or similar profiling syscalls, since we don’t pound the syscall interface that heavily to begin with.
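
    On the call cost itself, here is a minimal sketch of the kind of measurement being argued about - timing a trivial syscall from Python. Absolute numbers vary wildly by machine, libc and whether the mitigation is active, so treat it as illustrative only:

```python
import os
import time

def syscall_rate(n=200_000):
    """Time n getpid() calls; each one crosses the user/kernel boundary,
    so the rate falls as kernel entry/exit gets more expensive."""
    start = time.perf_counter()
    for _ in range(n):
        os.getpid()
    elapsed = time.perf_counter() - start
    return n / elapsed  # calls per second

print(f"~{syscall_rate():,.0f} getpid() calls/sec")
```

    Running this before and after a kernel update would show roughly how much the extra per-call work costs a syscall-heavy workload; code that rarely enters the kernel won't see much of it.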

    Until we have details, we’re counting chickens before they’ve hatched. And honestly, I’d guess that outside of multi-tenant environments, this is a non-issue otherwise Apple would be rushing to rewrite as well.

    In multi-tenant environments, there are 3 generations Intel needs to be concerned with.

    Xeon E5 - v1 and v2

    Xeon E5 - v3 and v4

    Xeon configurable

    If necessary, Intel could produce 3 models of high-end parts with fixes en masse, and insurance will cover the cost.

    Companies like Amazon, Microsoft and Google may have a million systems each running this stuff and could experience issues, but in reality, in PaaS, automated code review can catch exploits before they become a problem. In FaaS, this is not an issue. In SaaS, this is not an issue. Only IaaS is a problem, and while Amazon, Google and Microsoft have big numbers of IaaS systems, they can drop performance without the customer noticing, scale out, then upgrade servers and consolidate. Swapping CPUs doesn’t require rocket scientists, and in the case of OpenCompute or Google cookie-sheet servers shouldn’t take more than 5 minutes per server. And to be fair, probably 25% of the servers are generally due for upgrades each year anyway.

    I think Intel is handling this well so far. They have insurance plans in place to handle these issues and although general operating practice is to wait for a class action suit and settle it in a fashion that pays a lawyer $100 million and gives $5 coupons to anyone who fills out a 30 page form, Amazon, Google and Microsoft have deals in place with Intel which say “Treat us nice or we’ll build our next batch of servers on AMD or Qualcomm”.

    I’d say I’m more likely to be affected by the lunar eclipse in New Zealand than by this... and I’m in Norway.

    Let’s wait for details before making a big deal. For people who remember the Intel floating point bug, it was a huge deal!!! So huge that after some software patches came out, there must have been at least 50 people worldwide who actually suffered from it.

    1. Anonymous Coward
      Anonymous Coward

      Re: Counting chickens?

      The KPTI patches force all syscalls to go through a translation so the kernel gets mapped in/out of memory for each call. This is expensive, and up till now would have been called pointless... there's a reason Linus wanted to call it 'Fuckwit'.

      Do you imagine Linus would have let this in if it was as low-impact as the floating point bug? He'd have had one of his famous rants and told them to go away and think again. Instead he basically fast-tracked it - even into the stable kernel, which is a major deal in itself. Microsoft have done the same.
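
      For what it's worth, once the patches ship you can check whether KPTI is actually active on a Linux box. A best-effort sketch - the sysfs path below only exists from kernel 4.15 on, and the 'pti' cpuinfo flag is Linux-specific:

```python
from pathlib import Path

def kpti_status():
    """Best-effort report of the Meltdown/KPTI mitigation state on Linux."""
    sysfs = Path("/sys/devices/system/cpu/vulnerabilities/meltdown")
    if sysfs.exists():
        return sysfs.read_text().strip()  # e.g. "Mitigation: PTI"
    cpuinfo = Path("/proc/cpuinfo")
    if cpuinfo.exists():
        # Older kernels: look for "pti" among the flags words.
        for line in cpuinfo.read_text().splitlines():
            if line.startswith("flags") and "pti" in line.split():
                return "pti flag present (KPTI enabled)"
    return "unknown (not Linux, or kernel predates the reporting)"

print(kpti_status())
```

      On an unpatched kernel neither source exists, which is itself informative.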

      1. Dwarf

        Re: Counting chickens?

        Talking about chickens, don’t processors have an array of “chicken bits” to allow chunks of functionality to be disabled? I wonder if those would help resolve the issue at a lower cost than the rumoured 30% performance hit.

      2. rob_leady
        Linux

        Re: Counting chickens?

        Reading the comments on the kernel mailing list, it doesn't appear that it was just Linus who wanted to call it Fuckwit, if at all...

        https://lkml.org/lkml/2017/12/4/709

        2) Namespace

        Several people including Linus requested to change the KAISER name.

        We came up with a list of technically correct acronyms:

        User Address Space Separation, prefix uass_

        Forcefully Unmap Complete Kernel With Interrupt Trampolines, prefix fuckwit_

        but we are politically correct people so we settled for

        Kernel Page Table Isolation, prefix kpti_

        Linus, your call :)

        https://lkml.org/lkml/2017/12/4/758

        On Mon, Dec 4, 2017 at 6:07 AM, Thomas Gleixner <tglx@linutronix.de> wrote:

        >

        > Kernel Page Table Isolation, prefix kpti_

        >

        > Linus, your call :)

        I think you probably chose the right name here. The alternatives sound

        intriguing, but probably not the right thing to do.

        Linus

      3. stephanh

        Re: Counting chickens?

        Amen. A 5%-30% performance impact, with major changes to fundamental kernel operation? And no epic Linus rant on even the merest suggestion to merge this, but rather fast-tracked and back-ported to a stable kernel version?

        Let's face it: hell has indeed just frozen over.

        1. Dan 55 Silver badge

          Re: Counting chickens?

          Don't worry, Hell has thawed, Linus has just ranted.

          1. Intractable Potsherd

            Re: Counting chickens? @Dan 55

            Not really a rant, though, is it - more of a "WTF?"

          2. anonymous boring coward Silver badge

            Re: Counting chickens?

            I think most of us know that Linus has little time for idiots and PR, and being PC isn't very high on his agenda either. More power to him!

    2. tim292stro

      Re: Counting chickens?

      @CheesyTheClown

      > The information in this article appears to make a point of it being of greatest impact to cloud virtualization, though the writing is so convoluted, I can’t be positive about this.

      Imagine you are running your company's data sharing with manufacturing on a cloud-hosted server on shared hardware. This bug basically means that if any other service running in user-space on the same shared hardware had the code required to poke at the kernel, it could bypass ALL virtualization boundaries and take ownership of the whole platform at Priv-0 level. Essentially, if this bug is not quashed, and RAPIDLY, the entire virtualization market on these Intel platforms is at risk - as well as the sanctity and security of the data currently entrusted to this market.

      > Keep in mind, syscalls have to go through the kernel...

      Yeah, and now imagine that you have to stop everything that needs an interrupt so the kernel can lock down and handle kernel-level operations while the rest of the user-space tasks sit there and twiddle their thumbs... every time this happens. That's potentially a 5 millisecond hit every 15 milliseconds, and that's where the potential of the performance impact lies. Systems that have a kernel-level VM handler running at Priv-0 will have to UNLOAD anything belonging to a less-privileged worker so that it can flush the speculative cache, then handle the kernel task and flush again to continue with the VM guest tasks.

      There are some systems that will have a much worse impact than others, for example machines that run over-provisioned guest VMs that need to share a common resource pool will be impacted more during the VM switching (reduces the value of each VM host), machines that run VoIP bridges have a 125uSec interrupt for analog sample-to-packet timing. Machines that do anything with a physical serial port will trip interrupts constantly.

      > I think Intel is handling this well so far. They have insurance plans in place to handle these issues and although general operating practice is to wait for a class action suit and settle it in a fashion that pays a lawyer $100 million and gives $5 coupons to anyone who fills out a 30 page form, Amazon, Google and Microsoft have deals in place with Intel which say “Treat us nice or we’ll build our next batch of servers on AMD or Qualcomm”.

      Well you may be bitter and think that your buying dollar (or krone) doesn't provide any power anymore, but the truth couldn't be farther from that. Yeah, so you're aware that Intel will slime their way out of it; that has a PR cost. Yes AMD has CVEs, but I can't recall them having a Pentium 90 math coprocessor issue like Intel, a SATA failure like SandyBridge, a Floating Point bug that needs to be fixed in SW (by third parties), a Management Engine that can't be turned off that leaves systems exposed unless you stop feeding the system power, and now this cache accelerator bug that can adjust performance numbers down to 0.66% of the advertised specs under the only SW fix available (again done by third parties). I'm also aware that the tempo of fairly embarrassing problems is increasing, so if I were a person building a system and saw an increase in the level of ineptitude from a multinational company, and they only left me with a stack of paper to fill out for my $50 and a still-broken POS system that's 30% slower than the day I bought it, I'd be so jaded I wouldn't buy their crap any more (and I wouldn't be alone).

      If I worked for the Intel PR team in the EU, the first thing I would have thought when reading this article is "uff da..." Even Apple can't get away with a known design flaw affecting the product near the end of its design life - see their battery fiasco of late. Allowing your lawyers and your insurance to cover your screw-ups only works a few times... Personally, I foresee an investor meeting where someone's head is going to need to be offered as a result of the stock price hit.

      1. david 12 Silver badge

        some systems that will have a much worse impact

        >There are some systems that will have a much worse impact than others, for example machines that run over-provisioned guest VMs that need to <

        Hmmm. Does this mean it will have no impact at all on my WinXP virtual machines on ESXi 4, because (apart from the fact all components are out of support), wasn't this context switch on every interrupt the reason XP ran so crap on ESXi 4?

        1. tim292stro

          Re: some systems that will have a much worse impact

          > Hmmm. Does this mean it will have no impact at all on my WinXP virtual machines on ESXi 4, because (apart from the fact all components are out of support), wasn't this context switch on every interrupt the reason XP ran so crap on ESXi 4?

          If you're running an ESXi that old and an OS that old, it doesn't sound like "patching" is in your world-view, so yeah, it wouldn't "impact" you more than knowing your system is more vulnerable today than you knew it was yesterday. ;-)

          1. david 12 Silver badge

            Re: some systems that will have a much worse impact

            >so yeah, it wouldn't "impact" you more than knowing your system is more vulnerable today than you knew it was yesterday. <

            Well, interesting point: if it was this relentless context switching that made XP run slower than Win7, and made XP run like crap on VMWare, it would seem to indicate that XP is immune.

      2. Anonymous Coward
        Anonymous Coward

        Re: Counting chickens?

        "Yes AMD has CVEs, but I can't recall them having a Pentium 90 math coprocessor issue like Intel, a SATA failure like SandyBridge, a Floating Point bug that needs to be fixed in SW (by third parties), a Management Engine that can't be turned off that leaves systems exposed unless you stop feeding the system power, and now this cache accelerator bug that can adjust performance numbers down to 0.66% of the advertised specs under the only SW fix available (again done by third parties)."

        So much this! (PS - I'm sure you meant 66% rather than 0.66%)

    3. Dan 55 Silver badge

      Re: Counting chickens?

      Initial benchmarks say there's an 18% hit on I/O heavy operations on Linux.

      https://hothardware.com/news/intel-cpu-bug-kernel-memory-isolation-linux-windows-macos

      Click through to Phoronix to see more Intel share price graphs.

  9. The Kernal

    no news

    I'll be surprised if this shows up anywhere besides El Reg. If it does make the majors here in the States, they will trivialize the issue and, in the end, the tech-inept populace will ignore the baffling techno jibber-jabber they just heard because it wasn't explained in a catchphrase.

    1. Anonymous Coward
      Anonymous Coward

      Re: no news

      They already have a witty catchphrase: FUCKWIT.

      The networks will have to censor it a bit to pass the priss factor, but with a name like "Fartwit" to toss around the public will be all over it like... well... like Intel on a math error. *Cough*

      1. Dave 126 Silver badge

        Re: no news

        But really, for the tasks the average user puts their laptops to, they won't notice a performance hit. They might notice a battery hit, but many CPUs are faster than their users' needs. The enthusiasts (gamers etc.) and professionals may notice, and they're the types more likely to read tech blogs.
