back to article SPOILER alert, literally: Intel CPUs afflicted with simple data-spewing spec-exec vulnerability

Further demonstrating the computational risks of looking into the future, boffins have found another way to abuse speculative execution in Intel CPUs to steal secrets and other data from running applications. This security shortcoming can be potentially exploited by malicious JavaScript within a web browser tab, or malware …

  1. Chronos Silver badge
    Devil

    So, in a nutshell?

    Chipzilla's performance advantage over the competition may be largely due to cut corners and half-arsed security with regard to memory allocation and organisation? Interesting take-away point if nothing else. If only AMD could ditch the SP JudasPuter and give us trustworthy products...

  2. Will Godfrey Silver badge
    FAIL

    Not news now?

    This is getting so common it's verging on boring.

    Hmm rather like woodworm it is. It's boring into the CPU but leaving the exterior looking solid.

  3. Milton Silver badge

    Well I never ...

    I bought my last CPU purely on bang-per-buck criteria, needing then (4/5 yrs ago), for a client, to model parallel molecular simulations which we would later scale to Big Server installations (eventually discovering—to the great surprise of no one now—that the CPU was better used to orchestrate the heavy lifting done in GPUs) but that machine remains on my desktop with its water-cooled beast of an AMD chip, still rocketing along. I'm aware that AMD architecture is not immune from all Spectre-type attacks, but it seems to be less vulnerable overall: a pleasant little extra, I guess, from a CPU which has provided bulletproof high performance for so long now (fingers crossed). Cannot claim any clever foresight, though.

    1. Ben Tasker Silver badge
      Joke

      Re: Well I never ...

      Just wait though, at some point someone'll prove that Homeopathy really works, and then they'll figure out how to combine that knowledge with TEMPEST to steal your private keys from the fluid in your cooling system.

      I'm having a weird day.....

      1. tekHedd

        Re: Well I never ...

        homeopathic water cooling: it works but you have to use very small amounts of very hot water, which causes the cpu to cool down. Somehow.

        1. Fungus Bob Silver badge

          Re: Well I never ...

          Yes, use negative quantities of hot water...

        2. Mycosys

          Re: Well I never ...

          Ironically, spraying small amounts of hot water on something that is over boiling point is a REALLY effective way to cool it; the energy removed by evaporation is enormous. The way things are going, it wouldn't surprise me if Intel started making CPUs that ran hot enough to use it XD

      2. Ian Emery Silver badge
        Trollface

        Re: Well I never ...

        Going to send this as a "fact" to Fox and Friends, see if they fall for it, and then the Donald will start tweeting about how it is all Hillary's fault.

        1. Alan_Peery

          Re: Well I never ...

          Please leave the Donald outside these forums, we see enough of his disasters elsewhere.

          Signed, a fatigued American...

          1. ROC

            Re: Well I never ...

            Especially the even loonier responses of the Democrats. These children simply refuse to play together nicely. A plague on all their houses.

    2. Wayland Bronze badge

      Re: Well I never ...

      If the computer is running simulations rather than Internet traffic I don't think there is a problem.

  4. ecofeco Silver badge

    I am always disappointed in modern computing

    How the hell the modern world decided on the worst possible bollocks for the majority of modern computing is dumbfounding.

    1. Doctor Syntax Silver badge

      Re: I am always disappointed in modern computing

      Not really when you think about it. Remember the Iron Triangle. If you take the product that's first to market and the manufacturer has cut costs - as they nearly all do - in getting there then the quality is what has to go.

    2. wayne 8

      Re: I am always disappointed in modern computing

      They make no attempt to hide the cause. "INTEL INSIDE" Get it?

      And the matching operating system, Windows, through which to watch.

      1. Archtech Silver badge

        Re: I am always disappointed in modern computing

        "The market" (in other words, average people) preferred cheap, ready now and tolerably easy to use. So better, more expensive systems were squeezed out.

    3. Anonymous Coward
      Anonymous Coward

      Re: I am always disappointed in modern computing

      Not just computing; it's the standard process for all changes, natural and artificial. It's the same evolutionary principles that got you from amoeba to Homo sapiens: build, test in wild, fall over, rebuild.

      In the process, the failed items are eventually whittled away, leaving the battle-hardened solutions out in the wilderness.

      There will never be a 'perfect' solution from the outset - regardless of how you define 'perfection'. The universe doesn't work that way. Even if you remove the wonks who cut corners to hit those delivery deadlines.

      1. Archtech Silver badge

        Re: I am always disappointed in modern computing

        "It's the same evolutionary principles that got you from amoeba to Homo sapiens: build, test in wild, fall over, rebuild."

        Or, more often, "build, test in wild, fail, die".

      2. Unicornpiss Silver badge
        Meh

        Let's extend this a bit further..

        I'm always disappointed in the substitution of greed and hubris for common sense in the business world, especially in its leadership. So that a percent of a percent of the populace can make a little more money, things are rushed to market with the bare minimum of CYA testing, often despite warnings from more technical folks who face censure for rocking the boat. (Not even counting the poor decisions in the finance world that led to the mortgage crisis and recession in the US some years back.)

        As a result, we all suffer things such as slow, aggravating, buggy operating systems, security issues, product recalls, spontaneously combusting phones and the general malaise of "It could have been great, if only..." as we pay more as consumers to finance these mistakes.

        1. wtrmute
          Joke

          Re: Let's extend this a bit further..

          Don't worry, though: when GNU/Hurd comes out, it absolutely won't have any of those issues!

        2. CrazyOldCatMan Silver badge

          Re: Let's extend this a bit further..

          things are rushed to market with the bare minimum of CYA testing

          AKA "Agile"..

    4. cam

      Re: I am always disappointed in modern computing

      "As I hurtled through space, one thought kept crossing my mind—every part of this rocket was supplied by the lowest bidder."

      Most likely a false quote, but the spirit of it stands.

      Price or value? Want low-cost Broadband? Get minimum-quality product, and low-quality support.

      Add to this the 'throw-away' society with a manufacturing sector only too happy to re-sell you the same bollocks every 2-3 years with a facelift and a promise that this version will be much better than the last.

      I went with AMD. Mwaha!

      1. Splork

        Re: I am always disappointed in modern computing

        "Add to this the 'throw-away' society with a manufacturing sector only too happy to re-sell you the same bollocks every 2-3 years with a facelift and a promise that this version will be much better than the last."

        Wow! This is exactly the sales pitch for MS Windows. It always has been and will be the "most secure Windows EVER!" Will wonders ever cease? I'd like to think so here at Happydale :-&

      2. Gezza

        Re: I am always disappointed in modern computing

        Isn’t that from Armageddon? The bloke who goes whacko on the asteroid and rides the drill head like Kong in Dr Strangelove then shoots everything up with the remote gun.

        1. InNY

          "As I hurtled through space, one thought kept crossing my mind... the lowest bidder."

          Colonel John Glenn. First American to orbit the earth, Friendship 7 mission, 1962

          You're thinking of the character Lev Andropov (Peter Stormare) in Armageddon, who said "Components? American components, Russian components, all made in Taiwan!"

      3. Archtech Silver badge

        Re: I am always disappointed in modern computing

        Actually one of the early astronauts was asked in an interview how he felt out there in the vacuum of space. His reply was to the effect that he was protected by a spacesuit to which a million parts contributed - each supplied by the cheapest bidder.

        1. Simon Harris Silver badge

          Re: I am always disappointed in modern computing

          A quote (or variations upon) sometimes attributed to John Glenn, sometimes to Alan Shepard.

          1. Ken Hagan Gold badge

            Re: I am always disappointed in modern computing

            It would be fair to assume that it was an in-joke for all of the Mercury astronauts.

    5. Daniel 18

      Re: I am always disappointed in modern computing

      "How the hell the modern world decided on the worst possible bollocks for the majority of modern computing is dumbfounding."

      Not at all.

      You can't always design to avoid a weakness that hasn't been thought of yet.

      Indeed, the weaknesses only exist because a lot of smart people are looking for them in architectures as now implemented. If architectures were different, they would be finding different flaws.

      At one time 'speculative execution' was one of the best practices for CPU design. Any design without it was either special purpose or a dud, from a cost/performance viewpoint.

      The only way to avoid this in the future is:

      1. Don't use good ideas other people come up with.

      2. Design absolutely perfect products that are flawless under any possible circumstances.

      The first is silly and counterproductive.

      The second requires God-like powers of intelligence and prediction.

      Neither one is a good strategy for improving computers.

      1. Wayland Bronze badge

        Re: I am always disappointed in modern computing

        I remember when they fitted child proof lids to plastic tubs of washing powder. I had young children at the time so this was important to me. A few months later they released a new improved tub that had an additional lid for convenience. This extra lid was not childproof.

        It's not that hard to figure out when you've created a feature that bypasses security but it still happens.

        1. CrazyOldCatMan Silver badge

          Re: I am always disappointed in modern computing

          fitted child proof lids to plastic tubs of washing powder

          My arthritis medication has child-proof lids - the end result is that I can't remove the lids and my wife has to do it..

          Law of Unintended Consequences #23.

          1. ROC

            Re: I am always disappointed in modern computing

            You are supposed to then ask the pharmacist to put non-childproof lids on your bottles, which any decent one will be happy to do when you explain your need (if they don't already know you well - work on the relationship ;-} )

    6. This post has been deleted by its author

      1. Archtech Silver badge

        Re: I am always disappointed in modern computing

        These techniques are by no "stretch" of the imagination new, or even recent. The ideas are sufficiently obvious that designers were using them in the 1950s. The Intel people simply overreached; presumably the speed merchants did not talk enough with the security experts.

        "The IBM 7030 Stretch, designed in the late 1950s, pre-executes all unconditional branches and any conditional branches that depended on the index registers..."

        https://en.wikipedia.org/wiki/Branch_predictor#History

        1. Michael Wojcik Silver badge

          Re: I am always disappointed in modern computing

          The Intel people simply overreached; presumably the speed merchants did not talk enough with the security experts.

          I don't think that's a useful evaluation of the process which led to Spectre-class vulnerabilities. Yes, security researchers (as a field) have long been aware of side channels and side-channel vulnerabilities, including many which are quite similar to Spectre-class attacks. But many CPU designers were aware of them too. As with most things, the engineers had to evaluate a large number of constraints, costs, and benefits and attempt to find a sufficiently-close-to-optimal point in that space. It's a very hard problem.

          Prior to the original Spectre papers, there were no published results showing that spec-ex side-channel attacks from user-space code were a viable way to extract sensitive data. CPU designers could have guessed that they would be, and eschewed spec-ex, but that would have meant discarding a performance avenue that - as you point out - had been used since the Stretch (and the CDC 6600). It would have been very difficult to get all the manufacturers of high-performance general-purpose CPUs to agree to discard spec-ex on the basis of a suspicion. So economic forces pretty much guarantee we would have had spec-ex CPUs regardless of what Intel did.

          1. Mycosys

            Re: I am always disappointed in modern computing

            Thing is, each of these flaws was introduced at a time when Intel were under heavy pressure from AMD, and AMD DIDN'T introduce most of the flaws Intel did, and seemingly fell behind because of it, but each time managed to catch up with other architectural improvements. Intel even back in the 90s was known to make much better use of cache... now we kinda know why.

            1. ROC

              Re: I am always disappointed in modern computing

              Also, back in the 90's, the Internet was hardly the threat vector that it is now, and few anticipated that angle. It seems we need a new protective layer specifically for that source with better Javascript filtering, if that is even possible.

    7. Archtech Silver badge

      Re: I am always disappointed in modern computing

      It's fairly simple, in fact. The result of allowing financial considerations to dictate the evolution of computers on which everyone depends. In an environment, moreover, where similar financial considerations prompt "black hats" to exploit all weaknesses for their own gain.

    8. Mycosys

      Re: I am always disappointed in modern computing

    Intel desperately needed to get something that could beat AMD64; they had planned to stop making x86 CPUs entirely. They clearly cut a few corners to do so.

    9. grumpy-old-person

      Re: I am always disappointed in modern computing

      Decades ago all sorts of interesting architectural stuff was tried but found not to be feasible with the hardware of the time.

      What happened to all of this?

    Hardware architecture that provides decent protection (at least much better than what we have now!) can prevent buffer overflows and all sorts of things - probably at a performance cost, but look at what simply focusing on speed has got us.

    In the book "The Elements of Programming Style" (Kernighan and Plauger) there is a statement about making programs run as fast as possible: "Make it right before you make it fast."

      Imagine the uproar if a CPU appeared that had an architecture similar to IBM's SWARD!

  5. adnim Silver badge

    malicious JavaScript within a web browser tab

    Hardly requires an attacker to have a foothold on ones machine to proceed.

    Mind you, I have always considered websites, and the Internet generally, to be a potential attacker.

    1. ThatOne Silver badge
      Devil

      Re: malicious JavaScript within a web browser tab

      > the Internet to be a potential attacker

      Which is why Chrome has decided that any sanitizing add-on features have to go: Can't have people be protected now can we.

      1. adnim Silver badge

        Re: malicious JavaScript within a web browser tab

        @ThatOne...

        Chrome: "Can't have people be protected now can we."

        Only from competitors, not from Google.

        I ate some cynical with paranoia sprinkles once, I never did digest and pass it.

    2. Jaybus

      Re: malicious JavaScript within a web browser tab

      "Hardly requires an attacker to have a foothold on ones machine to proceed."

      Only because JavaScript has access to high precision timers. Somewhere north of 90% of JS code has no need for microsecond timing. The easy fix is to disable HR timers (performance.now, hrtime() from Node.js, etc.) in the JavaScript engine by forcing the maximum timer precision to 100 ms or so, (something longer than the OS time slice) making a timing attack from JavaScript very impractical, if not impossible. It could of course easily be made optional, so that those who dared enabled HR timers could still play their JavaScript games. A timing attack would indeed then require a foothold on ones machine.

  6. _LC_
    Alert

    The Current Spectre / Meltdown Mitigation Overhead Benchmarks On Linux 5.0

    Michael has done some benchmarks to show the impact of Spectre mitigations (hint: Linux usually handles this better than Windows and others). Check out the results on:

    https://www.phoronix.com/scan.php?page=article&item=linux50-spectre-meltdown&num=1

    In the Netperf benchmark, Intel's 8086K performs at less than 1/8th of its unmitigated speed – in other words, without mitigations (which is how Intel benchmarks them ;-) the processor would be more than eight times faster!

    ... and there are more to come.

    1. TechnicalBen Silver badge

      Re: The Current Spectre / Meltdown Mitigation Overhead Benchmarks On Linux 5.0

      TBF that *is* a context switch... which was not being checked before and now is (or flushed?). So that one type of operation runs at 1/8th the performance. The rest of the graphs might be better for understanding a mixed-load difference in performance, as I doubt anyone will be doing 100% context switches in their software.

      The rest of those tests would average out to around 1/8th slower with the mitigations installed. Still bad, but not quite as extreme.

      Also interesting that AMD takes less of a hit on the newer CPUs than Intel does on their also new CPUs. So looks like AMD made improvements in architecture... Intel just overclocked things. Lol.

      1. Anonymous Coward
        Anonymous Coward

        Re: The Current Spectre / Meltdown Mitigation Overhead Benchmarks On Linux 5.0

        "Also interesting that AMD takes less of a hit on the newer CPUs than Intel does on their also new CPUs. So looks like AMD made improvements in architecture... Intel just overclocked things. Lol."

        Don't forget to include Intel's difficult transition to 10nm. It has likely cost Intel their performance lead and has already delayed hardware security features:

        Cannon Lake/Gen10 - https://en.wikichip.org/wiki/intel/microarchitectures/palm_cove#New_instructions

        Ice Lake/Gen11 - https://en.wikichip.org/wiki/intel/microarchitectures/sunny_cove

        While they don't directly affect spectre as far as I can tell, Intel will likely need a further generation (i.e. 12/13) to implement Spectre hardware fixes - on the positive side, they will likely have a new 7nm process node to drop the fixes into making a direct performance comparison between 14+++nm and 7nm harder. On the negative side, they will likely be 6+ months behind AMD at that point and be behind in speed/cores/TDP (based on leaked 7nm details) assuming AMD launches their new chips in mid 2019 as predicted.

        Intel have historically been very conservative with their CPU designs and made up for it in manufacturing efficiency/optimization - starting from behind will be a new experience and may not be a pleasant one...

      2. _LC_
        Holmes

        Re: The Current Spectre / Meltdown Mitigation Overhead Benchmarks On Linux 5.0

        I remember the first patches hitting me (Intel) hard. I was running stuff in a VM and it suddenly felt like the handbrake was on.

        Don't be misled by the average penalties of those mitigations. It depends mostly on what you are doing - and in some cases, you're fûcked!

  7. Anonymous Coward
    Anonymous Coward

    It's a conspiracy to get us to buy new processors I tell ya and don't nobody come and ruin my conspiracy theory with facts.

    1. Anonymous Coward
      Anonymous Coward

      You don't need a conspiracy

      When money is involved. If checking and fixing is expensive... marketing and rushing out to production wins out every time.

      Also hiding a products faults seems to be the status quo.

      Still wrong, just slightly different methods and motivations.

      1. Anonymous Coward
        Anonymous Coward

        Re: You don't need a conspiracy

        Don't underestimate the cost or expense of building CPUs - each current generation of Intel processors costs in the order of $5bn in R&D/process development/etc.

        Combine that with relatively long lead times - 2-5 years to design the features for the next processor, 9-18 months to get early chips for testing, 6+ months to bring production up to full capacity (i.e. addressing yield/clock speed issues).

        While there is overlap between the processes and other products to help reduce development times, the chips are extremely complex (as you would expect with 1bn+ transistors) and working around product faults with microcode/firmware/OS changes is preferable to doing nothing while you wait for fixed hardware.

        Taking a moral position on that process is difficult when I'm unsure there is a better option that involves the actual production of general-purpose CPUs...

        1. Anonymous Coward
          Anonymous Coward

          Re: You don't need a conspiracy

          >Intel processors costs in the order of $5bn in R&D/process development/etc

          Well it looks like they spent the security part of the budget on one huge office party and the mother of all bags of cocaine then went back to work.

  8. Anonymous Coward
    Anonymous Coward

    I guess those guys at AMD after the years of getting beaten in benchmarks and thermal profiles can finally flip that special mode in their CPUs.....

    Smug mode.

    1. bombastic bob Silver badge
      Thumb Up

      smug mode

      42 upvotes - you're welcome

  9. ForthIsNotDead

    It's interesting...

    ...to pontificate about how we got to this point.

    I can only speculate: for decades, when it comes to CPU design technology, the prominent driver behind all scientific research has been performance. Whether that be architectural (e.g. branch prediction etc.), physical (shrinking die sizes), or electrical (reducing voltage in order to increase clock speed) - it's all been directed towards making processors faster.

    We're in a bad place at the moment, but if a similar amount of effort is spent on security research, we could be in a much better place in a relatively short time period. I'm talking five years, not 40.

    The problem is: How much longer does Intel have at the 'top'? ARM are increasingly encroaching into what was previously Intel's product space. We're beginning to see usable ARM product in the server space. Granted, it's not as fast as Intel, but for some applications, they don't *need* to be. The additional power and heat savings are impossible to ignore, also.

    Of course, this is all speculation, but I'd be interested to hear other 'Reg readers' opinions.

    1. Warm Braw Silver badge

      Re: It's interesting...

      how we got to this point

      We got to this point because historically you ran your own code on your own computer, so this type of information leakage didn't matter.

      We now run our own code on other people's computers, and other people run their code on our computers - with or without our permission. It's not just processors that haven't risen to the challenge: operating system security is still largely based on the models developed for timesharing in the 1960s.

      1. Anonymous Coward
        Anonymous Coward

        Re: It's interesting...

        @Warm Braw,

        We now run our own code on other people's computers, and other people run their code on our computers - with or without our permission.

        Yep, that is the real core of the problem. Assuming other people's code is fundamentally trustworthy without knowing who they are or where code comes from is a mistake. Assuming that one's hardware really does implement a specific machine architecture is a mistake. Couple both of those together in one PC, and one may as well not bother with passwords, etc, because there's no way of ensuring one has full control of that computer.

        I feel (only a little bit) for Intel, a seemingly existential threat to their business has lurched out of the vulture's lair, mostly because the world of the Internet has moved dramatically towards a use case that didn't really exist, what, 10 years ago?

        Arguably this is an existential threat to the likes of Google too; we might all be forced into using NoScript type plugins to remain safe, and where then is Google's services delivery platform? Coz it wouldn't be Javascript... There's not a lot they can do. If they changed Chrome so that only scripts from Google ran (which is a way of ensuring Javascript so far as Google care about it is demonstrably safe) there'd be the biggest antitrust case ever launched within minutes. They can't vet the world's Javascript code base. They could just abandon desktop Web browsers altogether, stick to native Android apps, but that'd cause a big loss of revenue.

        Well it's only going to get more interesting...

        1. tim292stro

          Re: It's interesting...

          Food for thought - if Google's business model is "selling advertising and user data", how does forcing users away from a platform where ad-blockers are possible and one doesn't control all possible access applications and standards, into a closed-source app where the author makes their own rules - hurt them?

          1. whitepines Silver badge
            Boffin

            Re: It's interesting...

            how does forcing users away from a platform where ad-blockers are possible and one doesn't control all possible access applications and standards, into a closed-source app where the author makes their own rules - hurt them?

            User A goes to install app, reads TOS. User A notes data slurp and doesn't install the app, instead looking for competing, maybe even open source, solutions. Google cannot read any data at all from User A via the app, nor serve ads via the app. User A might not even use Google basic web services and Google may not be able to serve any ads at all to User A.

            User B doesn't read obfuscated TOS, gets slurped/hacked. TOS weren't up to GDPR standards, Google gets sued.

            User C is Google's ideal user and doesn't care. They install everything Google and happily hand over their data.

            Google's raison d'être all of a sudden becomes "what percentage of the global computer-using population is like User C?" A far cry from 65%+ browser market share, and eventually, if Google ends up swaying one too many elections "accidentally", User C might be protected from themselves by extending existing nanny-state laws.

            Basically, I don't see a scenario where Google wins from this.

        2. Wayland Bronze badge

          Re: It's interesting...

          >there's no way of ensuring one has full control of that computer.

          I remember when we first started putting Windows 3.1 on our DOS computers. In the DOS days we knew what every file did. If a new file turned up we wanted to know where it came from and what it was for. If not needed it got deleted.

          My friend was tearing his hair out trying to keep on top of Windows 3.1 using this method. I told him to give up, that he could never know what all the files did.

          That was when we let go of total control of our PCs.

          1. Simon Harris Silver badge

            Re: It's interesting...

            Back in the DOS days a competent electronic engineer could take the cover off the computer and know (or know where to find out) exactly what every component did - in those days it wasn't uncommon for the PC's hardware manual to come with a BIOS assembly code listing and some level of schematics.

        3. Spamfast Bronze badge
          FAIL

          Re: It's interesting...

          Couple both of those together in one PC, and one may as well not bother with passwords, etc, because there's no way of ensuring one has full control of that computer.

          Logical fallacy alert! "I've been shown that my front door lock can be picked. Therefore I don't bother locking my front door now."

          Some security is better than no security - as long as you understand its limitations.

          1. Loyal Commenter Silver badge

            Re: It's interesting...

            Logical fallacy alert! "I've been shown that my front door lock can be picked. Therefore I don't bother locking my front door now."

            If it were trivially easy to simultaneously attempt to pick all the locks in the world in a short timescale, and your lock was pickable, then there really would be no difference. If there is an attack vector, and it can be found and exploited at zero cost to the attacker, you will be attacked.

          2. CrazyOldCatMan Silver badge

            Re: It's interesting...

            as long as you understand its limitations

            My wife now has a default setting - to ask me if something is OK whenever she sees something she doesn't expect on the computer/smartphone..

            (And before anyone says it - I'm not dissing her skills - she was a systems programmer in the mainframe days and is now a web monkey. It's just that her skills are different from mine and she lacks my carefully-honed paranoia. So I let her borrow mine :-) She's happy, I'm happy and the computer doesn't get infected. And, when I buy us new smartphones [on the basis that the old ones won't get any more updates or custom ROMs] I get a minimum amount of whinging, and she *hates* spending money..)

          3. Jaybus

            Re: It's interesting...

            "Some security is better than no security - as long as you understand its limitations."

            Granted, however in this case it is far more insidious. In this case the lock appeared to function correctly and the resident didn't know it was flawed and easily picked. A false sense of security is far worse than some security and worse, even, than no security at all. So kudos to the discoverers.

      2. Doctor Syntax Silver badge

        Re: It's interesting...

        "We now run our own code on other people's computers, and other people run their code on our computers - with or without our permission."

        And that code gets increasingly bloated.

        1. ROC

          Re: It's interesting...

          Raises the interesting point of whether a new emphasis by developers (and expectations of their customers!) on more efficient coding, to make up for the hardware performance lost by eliminating speculative execution, could be something of a palliative?

      3. Michael Wojcik Silver badge

        Re: It's interesting...

        historically you ran your own code on your own computer

        This hasn't been true for general-purpose computing since the introduction of bundled software. Thompson memorably pointed that out in 1983 in his Turing Award lecture ("Reflections on Trusting Trust").

        Even for embedded systems, there are microcode and drivers. And what guarantees the integrity of an organization's in-house development team?

        1. Jack of Shadows Silver badge

          Re: It's interesting...

          Backdoor built into the C compiler anyone?

          1. Charles 9 Silver badge

            Re: It's interesting...

            Pit two compilers against each other and see if they trip up?

            Countering Trusting Trust

        2. CrazyOldCatMan Silver badge

          Re: It's interesting...

          Even for embedded systems, there are microcode and drivers

          As in all things, added complexity == added risks. And it's not always a 1:1 ratio either.

    2. Anonymous Coward
      Anonymous Coward

      Re: It's interesting...

      "The problem is: How much longer does Intel have at the 'top'? ARM are increasingly encroaching into what was previously Intel's product space. We're beginning to see usable ARM product in the server space. Granted, it's not as fast as Intel, but for some applications, they don't *need* to be. The additional power and heat savings are impossible to ignore, also."

      Based on revenue, Intel has a ways to go yet - they're more likely to be eclipsed by a quantum computer than ARM.

      ARM has largely NOT followed the same performance path as the other CPU manufacturers because those features are extremely expensive in terms of power (e.g. large caches, speculative execution and fast buses linking multiple cores) - that's not a criticism of ARM, they have a significant market and continue to deliver advances in performance, but I'm not convinced they can make the jump to the high margin server space given the number of specialist players already present (e.g. particularly MIPS) and the consolidation that has already occurred in the last 20 years.

      1. Wayland Bronze badge

        Re: It's interesting...

        > more likely to be eclipsed by a quantum computer than ARM.

        No, quantum computers are as different from our current computers as a MOOG Synth is from a BBC Micro. Totally different technology with a totally different purpose, but with some overlap in functions.

        ARM servers are very similar to Intel servers when both are running Linux. The overlap is 99%. ARM is probably better at running lots of small services, whereas Intel might handle heavy tasks better.

        1. Anonymous Coward
          Anonymous Coward

          Re: It's interesting...

          "ARM servers are very similar to Intel servers when both are running Linux. The overlap is 99%. The ARM are probably better at running lots of small services where as the Intel might handle heavy tasks better."

          Do you have any examples of Linux servers running on ARM providing similar performance levels to Silver or higher level Xeon CPUs? The ARM servers I have used provide similar performance to a low-spec VM on a heavily loaded Intel server, but I acknowledge it will be workload dependent. I also haven't tested the very large core count ARM servers.

          The ARM server chips reported earlier this year (32-core 64-bit Armv8 CPU clocked up to 3.3GHz) are reported to be competitive with Xeon Gold in SPECINT - however previous comparisons have shown that ARM has always been strong in this area on a per-core basis (e.g. Nvidia's ARM benchmarks), but drops back significantly once IO/memory bandwidth is included, resulting in a significant performance drop. When the ARM server chips are more widely available, I guess we will see if this test pattern continues.

          1. _LC_

            Re: It's interesting...

            Currently ARM servers can handle Internet requests pretty well. In this scenario having plenty of cores and dedicated hardware for the network can get you far. Faster CPUs with fewer cores don't do well here, as they consume too much energy (cooling) and space.

    3. monty75

      Re: It's interesting...

      "I can only speculate"

      That's what got us into this mess in the first place

    4. Michael Wojcik Silver badge

      Re: It's interesting...

      For decades, when it comes to CPU design technology, the prominent driver behind all scientific research has been performance

      That's been the dominant driver, but it didn't prevent manufacturers from bringing CPUs designed for other goals to market in the '80s. For example, Intel had the i432, a capability architecture. IBM had the capability-like[1] AS/400 CPUs, first the CISC IMPI CPU and then PowerAS which was a tweaked POWER design. The i432's primary design driver was the enhanced security of a capability architecture. The AS/400's design was guided by the "five principles" of "technology independence, object-based design, hardware integration, software integration, [and] single-level store".

      Support for legacy software was the key economic driver for the AS/400, and that also gave us CPU families like the two Unisys ClearPath lines and IBM's 360-370-390-z line.

      So it hasn't always been primarily about performance.

      For general-purpose computing, though, it's hard to see how the economics could ever have favored anything other than a few performance metrics - operation throughput, price/performance, and power/performance.

      [1] As Frank Soltis describes in his book about the AS/400, the '400 architecture team discarded the S/38's true capability architecture (pointers carry access-right information) for the '400, largely to accommodate issues with transient changes in access rights. However, the '400 (and now System i) still uses pointers which refer to specific objects and cannot be altered by user-mode code.

  10. devTrail

    A simple mitigation

    The first step to mitigate the issue might be simple. Stop considering the browser an operating system. Restrict Javascript engines and separate the browsers designed to run remote apps from those designed to surf the web.

    1. Arthur the cat Silver badge
      Unhappy

      Re: A simple mitigation

      Restrict Javascript engines and separate the browsers designed to run remote apps from those designed to surf the web.

      Nice theory, but unfortunately surfing the web these days usually involves reams of Javascript. I run NoScript and I'm constantly boggled by how many web sites simply won't display anything without JS enabled. Even stuff that could easily be a static page seems to need to fetch content by a JS call, and that's without enabling tracking/analytics.

      1. BlueTemplar
        Mushroom

        Re: A simple mitigation

        Well *FUCK* those websites !

        (Replied after disabling 1-rst party scripts in uMatrix here. Still works. Great job TheRegister !)

        1. Charles 9 Silver badge

          Re: A simple mitigation

          Can't. They fuck back as a good number of them are government websites. Care to make changes to your benefits and so on? Bend over or explain to your family. And no, the last brick-and-mortar office within practical driving range closed a number of years back.

          1. Duncan Macdonald Silver badge

            Re: A simple mitigation

            When untrustworthy JavaScript has to be executed - do it in a VM running a Linux Live CD (no persistent storage) - kill the VM after using the site. This will protect against the majority of JavaScript nasties (but not Spectre/Spoiler/Meltdown unfortunately).

            If you need maximum possible security - use a separate PC with no hard disk running from a Linux Live CD and shut it down after visiting the suspect site. (Inconvenient as hell but immune to all known software nasties.)

            1. Anonymous Coward
              Anonymous Coward

              Re: A simple mitigation

              I used to do that with a laptop where the whole SATA subsystem was fried. It was wonderful for testing all sorts of things. Then SWMBO found it and decided that it was a p0rn PC. In the end I had to crush it to keep her. :-(

          2. Ian Emery Silver badge
            Terminator

            Re: A simple mitigation

            Have to love the fact that many of those government websites (and not just .gov.uk), ONLY work if you are using obsolete versions of Internet Explorer.

      2. Anonymous Coward
        Anonymous Coward

        Re: A simple mitigation

        Nice theory, but unfortunately surfing the web these days usually involves reams of Javascript.

        Yes but that is what's probably going to have to change.

        Bad news for devs who would then have to develop native code and persuade people to install it. Bad news for services providers who then might have to execute software on their own infrastructure at their own expense, instead of in a user's browser at the user's expense. But especially bad news for a ton of ad funded data slurping websites.

        1. devTrail

          Re: A simple mitigation

          Bad news for devs who would then have to develop native code and persuade people to install it.

          Are you talking about the classic applications that developers are now asked to code into the browser to save administration costs? Who says developers would take the news badly? They develop this way because that's what managers ask for, and anyway, instead of forcing applications into a browser there are a lot of different options.

          In the enterprise environment, which is the most reliant on applications built into the browser, admins can now install applications remotely; administration tools have improved dramatically over time. In the mobile environment the app framework does not depend on the browser; some enterprise apps are migrating to the browser, but they are exposing their customers to a lot of vulnerabilities. In the home PC environment I never saw so many applications built into the browser.

          Bad news for services providers who then might have to execute software on their own infrastructure at their own expense, instead of in a user's browser at the user's expense.

          Thanks to big data they are putting together so much computing power that it wouldn't make a big change.

          But especially bad news for a ton of ad funded data slurping websites.

          Unfortunately it's not just for the ads

          1. Anonymous Coward
            Anonymous Coward

            Re: A simple mitigation

            Thanks to big data they are putting together so much computing power that it wouldn't make a big change.

            It could make a difference to the commercial model they follow. At present all that back end compute is for analytics, data storage, data serving. Perhaps if it changes they may not be able to extract revenue; for example, the profitability of analytics may be substantially reduced. No money, no service, no matter how much compute they've currently got piled up.

      3. devTrail

        Re: A simple mitigation

        I run NoScript and I'm constantly boggled by how many web sites simply won't display anything without JS enabled

        I have the same problem. That's why I wrote this comment. All the software proposed to web developers pushes them to take shortcuts and solve every issue with some JavaScript. The web now depends on JavaScript because big companies decided it, and even open source developers bought the constant propaganda and aligned themselves with the mainstream approach. A browser that only browses would surely not be appreciated by Facebook, Google, Apple, Microsoft and so on, who exploit the complexity to sneak into people's machines, but had it wide enough acceptance it would make our computers way, way safer and more stable.

    2. TechnicalBen Silver badge

      Re: A simple mitigation

      A separate processor entirely? I mean, it's the internet, why is it given access to all 36* of my cores?

      *Ok, I only have 4, but someone out there has a pc like that.

      1. John Brown (no body) Silver badge

        Re: A simple mitigation

        "A separate processor entirely? I mean, it's the internet, why is it given access to all 36* of my cores?"

        Run your browser in a VM with restricted resources?

        1. Charles 9 Silver badge

          Re: A simple mitigation

          Some of these exploits can do a Red Pill and cross the VM boundary, even become a Hypervisor Attack.

          1. Christopher Aussant

            Re: A simple mitigation

            I had not known this was a possibility.... Thank you for my something new learned today!

            1. Wayland Bronze badge

              Re: A simple mitigation

              >I had not known this was a possibility.... Thank you for my something new learned today!

              Yes crossing a VM into the host machine is what SPOILER is about.

              The big deal with the x86 CPUs is that they have security rings where programs can be fenced off and run in their own closed area. It's been pretty much the point of these rather than simply taking the CPU from an Acorn BBC Micro and supercharging it (Acorn RISC Machines > ARM).

              Breaking out of that secure area pretty much destroys all security. Being able to read any RAM in the system means reading any encrypted data in unencrypted form since it assumes RAM is secure.

  11. DJO Silver badge
    Facepalm

    I'm disappointed

    I expected the comments to be full of terrible backronyms for SPOILER

    Here's a really bad one, I'm sure you can do better:

    SPeculative Online Information Leakage Extraction Routines

    1. monty75

      Re: I'm disappointed

      Screw Performance, Only Implement Linear Execution Routes

      1. livin' thing

        Re: I'm disappointed

        I don't think we'll do better than those two, both of which are rather good.

      2. Charles 9 Silver badge

        Re: I'm disappointed

        Tell that to everyone with a deadline to meet. They can BS around a wrong answer but not around a missed deadline.

        1. Doctor Syntax Silver badge

          Re: I'm disappointed

          Make deadlines realistic.

          1. Charles 9 Silver badge

            Re: I'm disappointed

            They don't have control of the deadlines. The board does, and they have to answer to investors even as they eye the competition.

    2. tatatata

      Re: I'm disappointed

      Simple Processor Oversight (at) Intel Lets Everyone Read

      1. Anonymous Coward
        Anonymous Coward

        Re: I'm disappointed

        Silicon Prescience Overshadows Inerrancy (of) Legal Encoding Restrictions

    3. Andy Landy

      Re: I'm disappointed

      So Poor Of Intel. Look, Everything's Revealed!

      1. Ian Emery Silver badge

        Re: I'm disappointed

        Sod Proper Order Lets Increase Executive Remunerations

  12. petef

    Another simple mitigation

    Does not ASLR mitigate against this attack?

    1. Anonymous Coward
      Anonymous Coward

      Re: Another simple mitigation

      No - ASLR randomises the layout of the logical memory space to stop an attacker guessing where key data structures such as the heap and stack lie, and thereby to prevent executable code being placed onto them.

      Spectre looks at memory residing in the processor cache across processes, exploiting the delay between an instruction generating a memory access violation and the instructions in the speculative execution pipeline being flushed - i.e. the "physical" cache lines vs the logical addresses.
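The logical-vs-physical distinction is why ASLR doesn't help here. A toy sketch (page size, addresses and slide amount are all hypothetical): ASLR slides mappings by whole pages, so the low 12 bits of an address, the intra-page offset this family of attacks keys on, survive unchanged.

```python
PAGE = 4096  # typical x86 page size

def aslr_randomize(addr, slide_pages):
    """Model ASLR: shift a mapping by a whole number of pages."""
    return addr + slide_pages * PAGE

secret = 0x00007F3A_12345678          # hypothetical logical address
moved = aslr_randomize(secret, 0x1F3)  # hypothetical random slide

# The page offset (low 12 bits) is untouched by any page-granular
# slide, so offset-based cache probing sees straight through ASLR.
assert secret % PAGE == moved % PAGE
print(hex(secret % PAGE))  # 0x678
```

The randomised part of the address is exactly the part the cache side channel never needed in the first place.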

  13. PyLETS

    Access control and process scheduling issue

    So they've discovered you can't run untrusted code at full performance on a modern CPU.

    Either they hamstring all code running on such CPUs to keep systems more secure, or OS and application designers and system operators figure out some means of working out which processes are trusted sufficiently not to steal secrets that these are allowed to run at full performance while other less trusted processes are not.

    Clearly browser tabs should not be allowed to renice themselves to a higher priority level, and lower priority levels should be scheduled in such a way that restricts their ability to exploit these weaknesses.

    1. Charles 9 Silver badge

      Re: Access control and process scheduling issue

      "So they've discovered you can't run untrusted code at full performance on a modern CPU."

      But that's what the customers demand: good, safe, fast--all or nothing. Anyone who replies, "I'm sorry I can't do that" gets left for the one that says "Can do."

      It raises a real conundrum. What happens when the customer demands no less than unicorns?

      1. Anonymous Coward
        Anonymous Coward

        Re: Access control and process scheduling issue

        Just because they ask for a unicorn doesn't mean you give them one! Unicorns are dangerous and need to be chained up :O

        1. John G Imrie Silver badge
          Angel

          Re: Access control and process scheduling issue

          Virgins, you need Virgins to tame Unicorns.

          1. Arthur the cat Silver badge

            Re: Access control and process scheduling issue

            Virgins, you need Virgins to tame Unicorns.

            I'm not letting Branson anywhere near my computer.

        2. eldakka Silver badge
          Coat

          Re: Access control and process scheduling issue

          Just because they ask for a unicorn doesn't mean you give them one!

          Give me a horse (miniature, full-sized, doesn't matter), a broomstick, a whittling knife, self-tapping double-ended screws, and I'll give you a unicorn.

      2. John G Imrie Silver badge
        Trollface

        What happens when the customer demands no less than unicorns?

        Brexit?

      3. _LC_

        Re: Access control and process scheduling issue

        >>But that's what the customers demand: good, safe, fast--all or nothing. Anything who replies, "I'm sorry I can't do that" gets left for the one that says "Can do."<<

        Nah, 'the customers' didn't invent the MMU to afterwards ignore it for the sake of speed.

        1. Anonymous Coward
          Anonymous Coward

          Re: Access control and process scheduling issue

          "Nah, 'the customers' didn't invent the MMU to afterwards ignore it for the sake of speed."

      It's not an MMU issue. The MMU provides hardware protections for the memory areas it fully controls - the problem is that the MMU trusts the TLB to handle memory management within the CPU, which in turn trusts hardware protections in the event of a memory fault.

      There are extensions to improve hardware protections around the TLB, like CAT and SGX, but these are likely to be ineffective because the root cause lies in the speculative execution pipeline, which allows cache access after a memory fault.

          1. _LC_

            Re: Access control and process scheduling issue

            These all came along with the MMU. Remember Intel's domain? DOS.

            1. anonymous boring coward Silver badge

              Re: Access control and process scheduling issue

              Are you claiming things were safer before we had MMUs?

              1. _LC_

                Re: Access control and process scheduling issue

                "Are you claiming things were safer before we had MMUs?"

                I'm claiming that the system (where speculative execution simply ignores big chunks of it) doesn't work.

      4. CrazyOldCatMan Silver badge

        Re: Access control and process scheduling issue

        What happens when the customer demands no less than unicorns

        Most businesses just paint a normal horse gold/white and stick a horn on its head.

        And charge 10x the price for a normal horse.

    2. Anonymous Coward
      Pint

      Simple

      Just print the source codes in white on a white background.

      Black on a black background works well too with the advantage that you can just switch the monitor off for added security.

      (Disclaimer: I work for Gartner.)

      1. Anonymous Coward
        Anonymous Coward

        Re: Simple

        (Disclaimer: I work for Gartner.)

        *slips you £1000*

        We have a project coming up and would quite like the black option to win. Thanks...

  14. Carpet Deal 'em

    > SPOILER, the researchers say, will make [...] JavaScript-enabled attacks more feasible

    I recall one browser reducing the JavaScript timer resolution to avoid Spectre attacks. Shouldn't that sort of mitigation work against SPOILER as well?

    1. Anonymous Coward
      Anonymous Coward

      I may be wrong, but I think someone showed that it merely slows down such attacks, rather than preventing them.
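That matches how timer coarsening is usually analysed: a coarse clock doesn't hide a timing difference, it just forces the attacker to repeat the operation until the accumulated difference exceeds the clock's resolution. A back-of-the-envelope sketch (all numbers hypothetical):

```python
def coarse_clock(t_ns, resolution_ns):
    """Model a browser timer rounded down to a coarse resolution."""
    return t_ns - (t_ns % resolution_ns)

def repetitions_needed(delta_ns, resolution_ns):
    """Repetitions of an operation whose timing difference is delta_ns
    before the accumulated difference crosses the clock resolution."""
    return -(-resolution_ns // delta_ns)  # ceiling division

# Say a cache hit/miss differs by ~100 ns and the timer is
# coarsened to 100 microseconds: the attack gets ~1000x slower,
# but it is not stopped.
print(repetitions_needed(100, 100_000))  # 1000
```

Which is the commenter's point exactly: coarsening buys a constant factor, not immunity.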

    2. Ken Hagan Gold badge

      Based on what's in the article, yes that would slow down all timing attacks by orders of magnitude. However, based on what's in the article it isn't actually clear how you'd execute this from any language that doesn't expose raw addresses. The attack requires you to fabricate a pointer with the same intra-page offset as one you want to attack. JS doesn't have such things. I suppose that an object identity might, in some implementations, be based in a predictable fashion on the actual (unseen) address, but that would also be fairly easy to fix.

      1. Anonymous Coward
        Anonymous Coward

        > JS doesn't have such things

        Even if your JIT compiler does not generate such code for certain use-cases, WebAssembly does have pointer support. It also has exception handling (e.g. longjmp) and shared memory; both are highly helpful in exploiting these vulnerabilities.

        1. Ken Hagan Gold badge

          Interesting, but it is still a VM. Just because your language has something that it calls a pointer doesn't mean you have to implement it in a way that corresponds to actual virtual addresses.

      2. Brewster's Angle Grinder Silver badge

        "The attack requires you to fabricate a pointer with the same intra-page offset as one you want to attack. JS doesn't have such things."

        Pointers are just indices; the trick is getting page-aligned memory. I've dived through several papers trying to get to the bottom of how this is being done and an old paper claims that, "An ArrayBuffer is always page-aligned." That slightly surprises me. But I would expect it to be true of SharedArrayBuffer by its very nature.

        Maybe the browsers could add jitter to the start address. But I suspect once you can allocate contiguous blocks of memory you're quids in and could figure it out. For example, it's going to be straightforward to spot a boundary between cache lines.

        The other thing that struck me was just how complex this all is; it results from the compounding of many factors.
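The cache-line-boundary observation above reduces to simple arithmetic. A toy sketch (64-byte lines and 4 KB pages are typical values, not guarantees), assuming the buffer really is page-aligned so an array index equals a page offset:

```python
LINE = 64    # typical cache-line size in bytes
PAGE = 4096  # typical page size

def line_index(byte_offset):
    """Which cache line within its page does this offset fall in?"""
    return (byte_offset % PAGE) // LINE

# With a page-aligned ArrayBuffer, index i of a Uint8Array view maps
# straight to page offset i, so line boundaries are fully predictable.
assert line_index(63) == 0     # last byte of the first line
assert line_index(64) == 1     # first boundary
assert line_index(4095) == 63  # last line of the page
```

If the allocator added start-address jitter, an attacker could still find the boundary empirically by timing accesses at successive indices, which is the "quids in" point above.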

  15. Anonymous Coward
    Anonymous Coward

    Imagine if you will

    Imagine, if you will, how these and similar hardware flaws would be reported if they were found in Chinese designed tech, in equipment designed and made by Huawei.

    1. Anonymous Coward
      Anonymous Coward

      Re: Imagine if you will

      I guess it would depend on whether the non-Chinese competitors were 1.5-2 years behind their Chinese competition...

      1. ROC

        Re: Imagine if you will

        and whether it was a "state actor" mandating by "law" the embedding of the vulnerabilities for their sole use to exploit...

  16. David Pearce

    It would really help if banking web applications, the most critical thing most of us use, did not run Javascript and call third party site code

    1. ROC

      So if you only use a phone for your banking that would be safer from this exploit, right? It's all the other phone vulnerabilities that worry me ...

  17. Howard Hanek Bronze badge
    Happy

    The Billboards Are Coming

    Will they grace my morning commute? Will I suddenly start seeing commercials issuing life threatening dire warnings of personal bankruptcy, indictments for pedophilia and divorce papers in my future unless you buy our 'fix'?

    ...and pity those running for public office......

  18. wayne 8

    Virtual Machines?

    Restrict access to the physical hardware?

    Separate general browsing from secure browsing.

    1. Charles 9 Silver badge

      Re: Virtual Machines?

      And slow things down when everyone's got deadlines to meet? Sorry, but almost always, when it comes to fast vs. right, fast wins.

      1. wtrmute

        Re: Virtual Machines?

        Playing Devil's advocate here, but back in the savannah, fast and nearly right kept the lions off your back, on average. We're simply not built the way you'd like us to be...

    2. Anonymous Coward
      Anonymous Coward

      Re: Virtual Machines?

      "Restrict access to the physical hardware?"

      While the example given was a web browser on a user's machine, if you end up with a compromised VM guest on a hypervisor you can potentially look at or alter memory in other processes.

      1. mutin

        Re: Virtual Machines? If the hypervisor is malicious?

        One reminder - Intel had a hidden hypervisor embedded in the BMC back in 2008-2009. That is, everything from top to bottom is at somebody's will. Do you think they stopped that? I doubt it.

  19. JeffyPoooh Silver badge
    Pint

    Harvard vs von Neumann

    I *told* (<- high pitched "told ya so" voice) you that the Harvard CPU architecture was best. I *told* that the von Neumann architecture, with its dangerous mixing of data and instructions, was a huge mistake. But noooo...

    ;-)

    1. Charles 9 Silver badge

      Re: Harvard vs von Neumann

      But you can't run a JIT on a Harvard architecture (a JIT produces code to execute as data), and the customers demand speed.
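The point about JITs is worth making concrete: a JIT builds instructions in data memory and then executes them, which a strict Harvard split forbids. A loose analogy using Python's own runtime-compilation machinery (not real machine code, just the data-becomes-code pattern):

```python
# A JIT fundamentally writes instructions as data, then jumps to them.
# On a strict Harvard machine, data memory is simply not executable,
# so this final step cannot happen.
src = "def add(a, b): return a + b"   # 'data': code assembled at runtime

namespace = {}
exec(compile(src, "<jit>", "exec"), namespace)  # the 'data' becomes code

assert namespace["add"](2, 3) == 5
```

Modern von Neumann CPUs approximate Harvard protection with W^X / NX page permissions, but a JIT must still flip pages between writable and executable, which is exactly the mixing being lamented above.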

  20. gannett

    Ouch !

    Ban JavaScript, flash and other platforms that import code.

    Trust nothing.

    Air gap trusted and performance platforms.

  21. wownwow

    Buy "Intel Inside", get "INTENDED Bugs Inside" and free all-you-can-have lip services!

    "Protecting our customers and their data continues to be a critical priority for us ..."

    "... we expect that software ... We likewise expect that DRAM modules ..."

    We expect you stupid, mute people keep buying and paying more for our INTENDED features and lip services!

    Intel Inside = INTENDED Bugs Inside = Lip-Service Outside!

    Buy "Intel Inside", get "INTENDED Bugs Inside" and free all-you-can-have lip services!

  22. elvisimprsntr

    Universal law: Good, fast (lead time), or cheap. You can have any two, never all three. Intel obviously did not pick Good.

    If this doesn't push Apple over the edge to migrate some of their laptop line away from Intel, nothing will.

    1. Charles 9 Silver badge

      Board replies, "Bullshit. All or nothing. Now JFDI. And don't give us that Turing bit, either. You just need to make a hypercomputer."

      Now you're stuck with searching for a unicorn or never working in this (or any) town again.

  23. -tim
    Mushroom

    1st attack to mention write?

    Spectre and its friends are mostly academic as long as they are read-only. This is the first published one implying the ability to change memory. Once there are published public read/write attacks, the malware people will take notice and then everyone will be shopping for a new computer. Hackers aren't so interested in hacking a system with a one in a million chance of finding a banking password, but if they have a one in a hundred chance of getting to an entire password list, they will.

  24. Anonymous Coward
    Anonymous Coward

    So, no if statements then...

    > This includes avoiding control flows that are dependent on the data of interest.

    So, programs should entirely avoid if statements when running on Intel cpus?

    That seems like it'll have some impact.

  25. cb7

    I still don't get it

    Maybe it's just my feeble brain power, but it seems like the explanation goes from describing a way to determine memory locations for the kernel or other sensitive areas to somehow being able to read said locations.

    I, perhaps mistakenly, thought modern OSes didn't allow programs access to memory not allocated to them?

    Could someone cleverer than me please explain, in plain English?

    This is a sincere question.

    1. S4qFBxkFFg

      Re: I still don't get it

      I make no claims to be cleverer than anyone in this thread, but I remember reading once that if a program tries to access something it shouldn't, the time it takes to be told "No!" is useful information - i.e. it can tell whether the something is in a register, cache, or memory; also, even if an instruction is forbidden, the CPU will start working away, speculatively, until told it shouldn't - which in turn affects what gets pulled from memory into cache into registers.

      Somehow*, the malicious program does this millions of times until it builds up a picture of what's in the memory that it shouldn't be accessing.

      * The "somehow" is where those more knowledgeable than ourselves come in.
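The "is it in register, cache, or memory" inference above boils down to comparing a measured latency against thresholds. A toy classifier with simulated cycle counts (orders of magnitude only; real numbers vary by microarchitecture, and the thresholds here are invented for illustration):

```python
# Rough, hypothetical access latencies in CPU cycles.
LATENCY = {"register": 1, "l1_cache": 4, "llc": 40, "dram": 200}

def classify(measured_cycles):
    """Guess where the data lived from how long the access took."""
    if measured_cycles < 3:
        return "register"
    if measured_cycles < 20:
        return "l1_cache"
    if measured_cycles < 100:
        return "llc"
    return "dram"

assert classify(LATENCY["l1_cache"]) == "l1_cache"
assert classify(LATENCY["dram"]) == "dram"
```

Repeating that measurement millions of times, while forcing the victim's secret to decide *what* gets cached, is the "builds up a picture" part.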

    2. _LC_

      Re: I still don't get it

      They mention the possibility of combining this with Rowhammer. Rowhammer exploits a hardware defect/design fault of dynamic random-access memory (DRAM), which allows you to flip bits in memory. If you know WHERE to flip a bit, you can let loose the eponymous hammer. Flipping the right bit can get you access to the entire system. For instance, you can turn a read-only page table into a writable one and change system code, etc.
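The page-table example can be sketched with a couple of bitmasks. This is a toy model of an x86-style page-table entry, not real kernel code; it only shows why one flipped bit is enough:

```python
# Toy x86-style page-table entry flags: bit 0 = present, bit 1 = writable.
PTE_PRESENT  = 1 << 0
PTE_WRITABLE = 1 << 1

pte = PTE_PRESENT             # a present, read-only mapping
assert not (pte & PTE_WRITABLE)

# A Rowhammer-induced flip of bit 1 in the right DRAM cell would
# silently make the mapping writable - no software check ever runs,
# because the change happens in the DRAM itself.
pte ^= PTE_WRITABLE
assert pte & PTE_WRITABLE     # now read-write
```

The role of a SPOILER-style leak is to tell the attacker which DRAM row holds a page-table entry, so the hammering is aimed rather than random.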

  26. heyrick Silver badge

    Huh?

    exploited by malicious JavaScript within a web browser tab [...] An attacker therefore requires some kind of foothold in your machine

    There's a bit of a difference between malware that a user got tricked into running, and "just some JavaScript" that could be hidden in any number of websites... Script blocking is good (I do it as a matter of course) but more and more sites are broken without some degree of scripting, so it's still going to be a potential problem.

  27. steviebuk Silver badge

    It..

    ...amazes me how people find these vulnerabilities. This seems so convoluted.

  28. mutin

    any good news on Intel future

    Intel definitely has a problem as a company. It is actually not just a CPU maker, but is trying to do almost everything in the IT marketplace, including security. The problem has been known since the 20th century, when automobile assembly-line manufacturing was invented: it ended up building huge factories which finally were not manageable, simply because of their size. There is a limit to manageability, and Intel, in general, has reached that limit. And the problem is not only this architecture vulnerability; it has an overall problem with new ideas, new research and the implementation of new technologies.

    The new CEO is not a technical guy. Intel's typically monstrous response to failing research is building a huge research centre in India. Is that the place to find new technology ideas and well-educated researchers? I doubt it. With all due respect to India and its culture, it is not a place known for modern technical research and the availability of brains for it. One may find millions of freshly baked software coders (who are also far from skilled professionals), but technology research requires a completely different technology culture and hundreds of years of its development.

    1. Anonymous Coward
      Anonymous Coward

      Re: any good news on Intel future

      mutin,

      You seem to have forgotten that 'people' can travel to India quite easily ...... from say the US of A !!!

      The norm is that you 'implant' some of your best people to find/train up the best you can find in India then they are able to run on their own.

      P.S. with the size of the population of India there are *many* people to choose from, who are more than capable of doing technical high-end work of this sort.

      Your attitude is similar to the attitude that China received when starting out in the hi-tech industries ...... now they are producing virtually everything. (Yes ... I know they have 'borrowed' much knowledge from 'elsewhere' but that has not been a hindrance !!!).


Biting the hand that feeds IT © 1998–2019