Intel to Qualcomm and Microsoft: Nice x86 emulation you've got there, shame if it got sued into oblivion

Intel used the 40th anniversary of its x86 architecture to warn chip rivals to mind their step when emulating its instruction set. The reigning desktop and data center server CPU king (AMD, you know it's true) said it would not hesitate to sic its lawyers on any competitor whose emulation tools got too close to copying the …

  1. Blotto Bronze badge

    I wonder what Intel would do if both M$ and Apple went ARM-first on their primary OSes.

    Where is Intel's ARM competitor?

    Where is Intel in the mobile phone or tablet space?

    Where is Intel in discrete GPUs?

    1. TechnicalBen Silver badge

      At this point...

      Would it dent Intel? That's like saying "What would Apple do if Google..." oh wait, or "What would Nvidia do if AMD/Radeon..." yeah, such companies cannot really lose to their competitors, even if they lose certain markets, because they still own all the other market share.

      1. CheesyTheClown

        Re: At this point...

        I believe this is the right direction to think in.

        Intel isn't trying to secure the mobile device market. That ship has sailed. In fact, with the advent of WebAssembly, it is likely that the choice of x86 or ARM will have little or no real impact now. Intel's real problem with mobile platforms like Android was the massive amount of native code written for ARM that wouldn't run unmodified on x86. With WebAssembly that will change.
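        As a minimal illustration of that portability claim - a sketch assuming the Emscripten toolchain (emcc is real; the file name and message are mine):

            /* portable.c - nothing here is x86- or ARM-specific; the wasm
               module the compiler emits runs on any host ISA with a runtime.
               Build (assuming Emscripten): emcc portable.c -o portable.html */
            #include <stdio.h>

            int main(void) {
                printf("same module, any architecture\n");
                return 0;
            }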

        Intel is more concerned that, with Microsoft actively implementing the subsystem required to thunk between emulated x86 and ARM system libraries, it will now be possible to run Windows x86 code unmodified on ARM... or anything else really.

        That means there is nothing stopping desktops and servers from shipping with the same code as well. This does concern Intel. If Microsoft wants to do this, they will have to license the x86 (more specifically, modern SIMD) IP. Intel will almost certainly agree, but it will be expensive, since it could theoretically have a very widespread impact on Intel chip sales.
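        A rough sketch of what such a thunk layer does - every name and structure below is invented for illustration, not Microsoft's actual subsystem: arguments are pulled from the emulated x86 stack, guest pointers are translated into host pointers, and the call is handed across to the native library.

            /* Hypothetical thunk from emulated x86 code to a native routine. */
            #include <stdint.h>
            #include <stdio.h>
            #include <string.h>

            typedef struct {
                uint32_t esp;    /* emulated x86 stack pointer */
                uint8_t *mem;    /* the guest's address space  */
            } EmuState;

            /* fetch the n-th 32-bit stdcall argument from the emulated stack */
            static uint32_t emu_arg(const EmuState *s, int n) {
                uint32_t v;
                memcpy(&v, s->mem + s->esp + 4 + 4 * n, sizeof v);
                return v;
            }

            /* the native (ARM-side) routine the guest's import lands on */
            static void native_DebugPrint(const char *msg) {
                printf("native library received: %s\n", msg);
            }

            /* the thunk: unmarshal the argument, translate the guest
               pointer into a host pointer, cross the boundary */
            static void thunk_DebugPrint(EmuState *s) {
                uint32_t guest_ptr = emu_arg(s, 0);
                native_DebugPrint((const char *)(s->mem + guest_ptr));
            }

            int main(void) {
                uint8_t guest[256] = {0};
                EmuState s = { .esp = 64, .mem = guest };
                uint32_t arg = 128;    /* guest address of the string */

                strcpy((char *)guest + 128, "hello from emulated x86");
                memcpy(guest + s.esp + 4, &arg, sizeof arg);

                thunk_DebugPrint(&s);
                return 0;
            }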

        Of course Apple, who proved with Rosetta that this technology works, could have moved to ARM years ago. They probably didn't because they decided instead to focus on cross-architecture binaries via LLVM to avoid emulating x86. Apple will eventually be able to transition with almost no effort because all code on Mac is compiled cross-platform... except hand-coded assembly. Microsoft hasn't been as successful with .NET, but recent C++ compilers from Microsoft are going that way as well. The difference is that Microsoft has never had the control over how software is made for Windows that Apple has had for Mac or iOS.

    2. Anonymous Coward
      Anonymous Coward

      Re: Blotto

      Intel has competitors to ARM, but they are currently double the price. And x86 chips are 2-1000x the price, depending on the performance chosen.

      Intel doesn't have a cost-competitive answer to ARM because the market hasn't forced them to provide one yet. And I'm not sure Intel (at least as we know it) could survive on ARM margins... Those $5B+ fabs are hard to justify if you need them to make 50 billion CPUs to break even...

      1. Anonymous Coward
        Anonymous Coward

        Re: Blotto

        " I'm not sure Intel (at least as we know it) could survive on ARM margins"

        ARM doesn't actually manufacture its chips, it simply patents them and licenses the design... if Intel followed this model, it wouldn't need those $5 billion fabs!

        1. Anonymous Coward
          Anonymous Coward

          Re: Blotto

          "ARM doesn't actually manufacture its chips, it simple patents them and licenses the design.. if Intel followed this model, it wouldn't need them $5 billion fabs!"

          s/ARM/ARM ecosystem/g

          The overall cost of producing an ARM-based chip tends to be in the US$1-US$20 range, with US$7-US$15 per SoC being typical for smart devices, including the necessary licensing for ARM/third parties, fab costs and testing/QC. The fab prices are in there, but at least one or two generations of cost/technology behind Intel and utilised at a much higher level (customers are stacked up and designs are delayed if there are problems).

          Intel's matching costs, running everything in-house, are in the US$15-US$50 range.

    3. Anonymous Coward
      Anonymous Coward

      x86 bloated!!!

      Be-jesus! Intel's blog shows a picture stating that they have added more than 3,500 instructions to x86! Can that really be true? How many transistors are needed for that?

      That is why one RISC with 10 million transistors is faster than a CISC with 50 million transistors (10 million x86 transistors are needed just to figure out which x86 instruction you just read and where the next instruction starts, 20 million are never used but needed for backwards compatibility, 20 million are used for cache, etc).
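      There is a kernel of truth in the decode complaint, even though the transistor counts above are guesswork. x86 instructions are 1 to 15 bytes long, so the chip must partly decode each instruction just to find the next one, while a fixed-width ISA simply steps by four. A toy sketch (a caricature, not a real decoder):

          /* Toy contrast between fixed-width and variable-length fetch. */
          #include <stddef.h>
          #include <stdint.h>

          /* RISC-style: every instruction is 4 bytes, next PC is trivial */
          static size_t next_pc_fixed(size_t pc) {
              return pc + 4;
          }

          /* x86-style: length depends on prefixes, opcode, ModRM/SIB and
             immediates, so bytes must be decoded before the next
             instruction's address is even known */
          static size_t next_pc_variable(const uint8_t *code, size_t pc) {
              size_t len = 0;
              while (code[pc + len] == 0x66 || code[pc + len] == 0x67)
                  len++;                      /* size-override prefixes     */
              switch (code[pc + len]) {
                  case 0x90: len += 1; break; /* NOP: one byte              */
                  case 0xB8: len += 5; break; /* MOV EAX, imm32: five bytes */
                  default:   len += 1; break; /* ...hundreds more cases     */
              }
              return pc + len;
          }

          int main(void) {
              const uint8_t code[] = {0x66, 0x90, 0xB8, 1, 0, 0, 0, 0x90};
              size_t pc = 0;
              while (pc < sizeof code)        /* walks 0 -> 2 -> 7 -> 8     */
                  pc = next_pc_variable(code, pc);
              (void)next_pc_fixed;            /* the RISC path needs none of this */
              return 0;
          }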

      1. CheesyTheClown

        Re: x86 bloated!!!

        Some would also suggest that RISC suffers similar problems when optimized for transistor depth where highly retiring operations are concerned. Modern CISC processors have relatively streamlined execution units, which is what consumes most of their transistors... as with RISC. However, RISC, which has to increase its instruction word size regularly to expand functionality, suffers the burden of either requiring more instructions than CISC for the same operation, or paying a higher data-fetch cost, which results in longer pipelines that can suffer a greater probability of cache misses. Since second-level cache and above, as well as DDR, generally depend on bursts for fetches, RISC with narrow instruction words can be a problem. Also, consider that pipeline optimization of RISC instructions, which may have branch conditions on every instruction, can be highly problematic for memory operations.

        Almost all modern CPUs implement legacy instructions (such as 16-bit operations) in microcode, which executes rather like a JIT compiler translating instructions in software.

        Most transistors on modern CPUs are spent on things like virtual memory, fast cache access and cache coherency.

  2. Ken Hagan Gold badge

    This sounds a lot like Microsoft's attempts to spread FUD about how many patents protected FAT. Patents concerning how the ISA is implemented in hardware cannot possibly be relevant to a software emulation. Patents concerned with how the ISA was cleverly designed to enable an optimal implementation may not be relevant either if some court decides (as with the Java lawsuit) that you can't protect an interface. The risk for Intel is that this strategy back-fires and emulation is found in court to be entirely legal.

    1. DougS Silver badge

      No risk for Intel

      When Microsoft introduces the capability, Intel sues. The case spends a few years in trials and appeals; meanwhile, adoption of Windows on ARM is very limited because of uncertainty about its headline capability of emulating x86 Win32 apps.

      Even if Intel eventually loses the court case, they get several more years of x86 being the only option for running Windows, and pocket billions as a result. If they end up having to pay Microsoft's court costs, it's chicken feed compared to the many millions of additional x86 CPUs they'll sell.

      The real loser in all this would be Qualcomm, who would have the Snapdragon 835 ready for PC OEMs to install in low-end Windows/ARM PCs, but would find few takers. And potentially Apple, if they are planning to migrate the Mac to their ARM SoCs without losing the ability for their customers to run Windows apps (it is unclear whether they want to do this, or whether losing the ability to run Windows apps is what has prevented it so far, but it is possible).

      1. Eddy Ito Silver badge

        Re: No risk for Intel

        I can still see them being slapped down early, which would force them into playing defense and pushing it through appeals. Each setback is going to greatly embolden folks to give it a try. Before you know it, everyone from Nvidia to Sunway to Power has Windows running via emulation. It might not make sense for Sunway or Power, but I have to believe Nvidia would love to have Windows running on their Tegra series of ARM SoCs, and I'd think the graphics side would be a snap.

        1. DougS Silver badge

          Re: No risk for Intel

          Why would they be 'slapped early'? Intel's patents definitely hold for actual chip implementations; the only question is whether they also hold for software emulation of the chip. I wouldn't be surprised if Intel ends up winning the case, but if they don't, it isn't going to be something that is shot down quickly.

          1. Eddy Ito Silver badge

            Re: No risk for Intel

            Why would they be 'slapped early'

            If I could read judges' minds, I'd have retired long ago. Having not read the patents myself, I can see it happening simply because, very often in cases like this, the judges are a fickle and non-technical lot; at times it's largely a roll of the dice, with the decision hinging on which lawyer is more eloquent and convincing that particular day rather than on the bare facts. Hell, if it were solely based on facts, the judge could just read the patent in chambers, compare it to the implementation, and thus save everyone a bunch of money, but that never happens. Most times I'd be rather surprised to find the judge ever actually read the patent word for word, especially the long-winded ones that run on for pages on end.

    2. Simon Harris Silver badge

      How the ISA was cleverly designed...

      To make it easy to translate 8080 assembly code from the 1970s to run on it...

      With the 16-bit operations and 1MB addressing bodged onto the top (8086)...

      And 16MB addressing bodged on top (80286)

      And 32 bit operations and more address space bodged on top (80386)

      ...

      1. toejam

        @Simon Harris - "How the ISA was cleverly designed..."

        Intel's heart was in the right place when they made many of their ISA and chip decisions. They just didn't execute them very well.

        Imagine if segments on the 808x were page (256B) aligned instead of paragraph (16B) aligned. And had they released an 80186 core in an 8086 package. And had they released an 80286SX that made the MMU an optional external chip (like the MC68451 and '851). It would have made life prior to the 80386 cheaper, faster, and a whole lot less miserable (no need for EMS or XMS).
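        The arithmetic behind that wish, as a quick sketch (same 16-bit segment and offset registers in both cases):

            /* Real-mode address arithmetic: the actual 8086 vs. the
               hypothetical 256-byte-aligned segments described above. */
            #include <stdint.h>
            #include <stdio.h>

            int main(void) {
                uint16_t seg = 0xFFFF, off = 0xFFFF;  /* highest combination */

                uint32_t real_8086    = ((uint32_t)seg << 4) + off;  /* seg * 16  */
                uint32_t hypothetical = ((uint32_t)seg << 8) + off;  /* seg * 256 */

                printf("16-byte paragraphs: top = 0x%06lX (~1 MB)\n",
                       (unsigned long)real_8086);      /* 0x10FFEF  */
                printf("256-byte pages:     top = 0x%07lX (~16 MB)\n",
                       (unsigned long)hypothetical);   /* 0x100FEFF */
                return 0;
            }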

        For all their past mistakes, the 80386 did resolve most issues. Flat memory, 32 bit registers, more orthogonal instruction set, V86 mode, paging, real/prot mode switching, etc...

        It just sucks that neither Microsoft nor Digital Research released a proper 32-bit successor to DOS at the time. Imagine a lightweight text-mode version of Windows 95 back in '86. Instead, you had to muck with DOS extenders or go down the expensive path of a GUI-based OS, like OS/2 or Win 2.x/3.x. Yuk.

        1. This post has been deleted by its author

        2. LDS Silver badge

          In 1986 most people lacked the RAM to run a real multitasking OS. Many PCs barely had the RAM to run real mode applications, or the early GUI systems, usually no more than 1-2MB. 4MB only became standard around 1994.

          Meanwhile multitasking, which means forbidding direct hardware access, would have required most applications to be deeply rewritten - and that was exactly what many software developers feared back then, when real mode applications still sold like hotcakes, often at prices of several hundred dollars (people forget how expensive software was then). Many failed the transition to Windows (and OS/2) for exactly that reason.

          There were attempts to write a "DOS better than DOS", but all of them failed for lack of interest, apart from some DOS extenders. GUIs offered advantages that were too big to be ignored, and still the obstacles to rewriting applications meant the failure of many DOS software companies.

        3. Simon Harris Silver badge

          @toejam

          "For all their past mistakes, the 80386 did resolve most issues. Flat memory, 32 bit registers, more orthogonal instruction set"

          In other words, the issues that Motorola got more-or-less right in the first place with the 68000. While I've spent almost all my professional life dealing with Intel based systems, I've always thought the way Motorola broke with the 8-bit architecture* when producing a more advanced processor was a better approach.

          Certainly, some of your suggestions (e.g. 256-byte aligned segments) would have given a 16MByte address range (comparable with the 68000), and 80186 instructions would have been nice to have from the start (if I remember correctly, there were some 80186 'almost PC compatibles' around - the 80186's built-in peripherals and associated interrupt map didn't quite match those used in a standard PC). But going from the 8080 to Pentium-class and beyond CPUs seemed more like a fade-in rather than a step-change, with current generations carrying all the baggage of previous ones.

          In a sense, keeping all the previous baggage makes people (including me) lazy/stingy (you decide!) - I was still using software originally written for an MSDOS 3.1 8086 machine when I had a 486 machine with Windows98SE.

          *admittedly not entirely, as some instructions were designed to allow easy use of their 8-bit IO devices.

          1. LDS Silver badge

            Re: @toejam

            Flat memory will have to go away because it is insecure. It's simpler, but when every bit is readable/writable/executable you have a security issue. Intel segments had security access controls - i.e. a segment could be executable but not readable, and of course not writable, while data segments could be non-executable and read-only. Try to implement ROP in such an environment... many other code injection techniques would fail too.
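            For reference, those access controls live in one byte of each 286/386 segment descriptor. A sketch of its bits (field layout per Intel's manuals; the struct itself, and C bitfield ordering, are merely illustrative):

                /* Access byte of a protected-mode segment descriptor. */
                #include <stdint.h>

                typedef struct {
                    uint8_t accessed   : 1; /* set by CPU when segment is used  */
                    uint8_t rw         : 1; /* code: readable?  data: writable? */
                    uint8_t dc         : 1; /* direction (data)/conforming (code) */
                    uint8_t executable : 1; /* 1 = code segment, 0 = data      */
                    uint8_t s          : 1; /* 1 = code/data, 0 = system       */
                    uint8_t dpl        : 2; /* privilege ring 0-3              */
                    uint8_t present    : 1; /* segment present in memory       */
                } AccessByte;

                /* execute-only code: executable=1, rw=0 - it can be run but
                   never read, and code segments are never writable; a data
                   segment with rw=0 is read-only. That is the granularity an
                   NX page bit only partially recovers. */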

            AMD was very shortsighted when it threw them away in x64. Intel was ahead of its time, in 1982.

            If in the future we want OSes secure from the ground up, CPUs will need to re-introduce ways to protect memory beyond simple NX bits for pages, and give proper access rights to every piece of memory depending on what it is used for.

            Having multiple execution rings was also a good idea - privilege escalations would be much harder if not everything below user mode was running in the highest privilege mode.

        4. CheesyTheClown

          Instructions stayed the same, the core changed

          You have a lot of great points. I always considered the 64KB segment to be a smart decision given the need for backwards compatibility with the 8085. It also worked really well for the paging magic of EMS, which was not much more difficult to manage than normal x86 segment arithmetic. XMS was tricky as heck, and DOS extenders were really only a problem because compiler tech seemed locked into Phar Lap's and others' $500+ solutions at the time.

          I don't know if you recall Intel's iRMX which was a pretty cool (though insanely expensive) 32-bit DOS for lack of a better term. It even provided real-time extensions which were really useful until we learned that real-time absolutely sucks for anything other than machine control.

          Also, DOS was powerful because it was a standardized 16-bit API extension to the IBM BIOS. A 32-bit DOS would have been amazingly difficult, as it would have required all software to be rewritten, since nearly everything was already designed around paged memory. In addition, since most DOS software avoided Int 21h for display functions (favoring Int 10h or direct memory access) and many DOS programs used Int 13h directly, it would have been very difficult to implement a full 32-bit replacement for DOS.
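          For anyone who never did it, this is roughly what the 'direct memory access' path looked like in the Turbo C era (a period sketch; 'far' is a real-mode compiler extension, not standard C):

              /* Writing straight into text-mode video RAM at B800:0000,
                 bypassing both DOS (Int 21h) and the BIOS (Int 10h).
                 Software doing this would never notice a rewritten Int 21h. */
              int main(void) {
                  unsigned char far *vram = (unsigned char far *)0xB8000000L;
                  const char *msg = "hello, no DOS involved";
                  int i;

                  for (i = 0; msg[i] != '\0'; i++) {
                      vram[2 * i]     = msg[i];  /* character byte           */
                      vram[2 * i + 1] = 0x1F;    /* attribute: white on blue */
                  }
                  return 0;
              }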

          Remember: on the 286, and sometimes on the 386, entering protected mode was easy, but switching back out was extremely difficult, as it generally required a simulated bootstrap (the 286 famously had no way back to real mode short of resetting the CPU). That means accessing 16-bit APIs from 32-bit code may not have been possible; they would have had to be rewritten. For most basic I/O functions that wouldn't be problematic, but specifically in the case of non-ATA (or MFM/RLL) storage devices, the API was provided by vendor BIOSes that reimplemented Int 13h. So, in order to make them work, device drivers would not have been optional.

          In truth, the expensive 32-bit windowed operating systems, with a clear differentiation between processes and system-call oriented cross-process communication APIs based on C structures, made more sense. In addition, RAM was still expensive and most systems still had 2MB or less, so page exception handling and virtual memory made A LOT of sense: developers had access to as much memory as they needed (even if it was page-swapped virtual memory).

          I think, in truth, most of the problems we encountered were related to a >$100 price tag. Microsoft always pushed technology by making their tech more accessible to us. There were MANY superior technologies, but Microsoft always delivered at price points we could afford.

          Oh... and don't forget to blame Borland. They were probably the biggest driving factor behind the success of DOS and Windows, by shipping full IDEs with project file support, integrated debuggers (don't forget second-CRT support), assembler integration (inline or TASM) and affordable profilers (I lived inside Turbo Profiler for ages). Operating system success has almost always been directly tied to the accessibility of cheap, good, easy-to-use development tools. OS/2 totally failed because, even though Warp 3.0 was cheap, no one could afford a compiler and SDK.

    3. TheVogon Silver badge

      "This sounds a lot like Microsoft's attempts to spread FUD about how many patents protected FAT."

      Microsoft won nearly every patent case regarding that, I believe. Just like everyone now has to pay them to use exFAT.

      1. Anonymous Coward
        Anonymous Coward

        "Just like everyone now has to pay them to use exFAT."

        And yet the MS fans still wonder why so many techies hate that company so much.

        1. phuzz Silver badge
          Facepalm

          "And yet the MS fans still wonder why so many techies hate that company so much."

          Yeah, screw those guys for working to develop a file system and actually expecting other companies to pay to use it.

          1. Anonymous Coward
            Anonymous Coward

            "Yeah, screw those guys for working to develop a file system and actually expecting other companies to pay to use it."

            If it was for internal (e.g. Windows) use, then sure. When it's forced into other standards (looking at you, SDXC) and makes an entire ecosystem outside of them dependent on it, then yes, absolutely screw them.

            A filesystem that is aimed primarily at exchanging data between devices shouldn't be held to ransom by a single company; it should be openly documented and standardised. Look at UDF, for example - it's a shame that doesn't see wider adoption.

            All this while they shout out about how they love Linux and are encouraging interoperability between devices. It's hard to take that message seriously while they still play games like this.

    4. Archivist

      FUD

      Yes, and the art is for Intel not to threaten Microsoft, but the end users. I can just see a buying dept making a choice between an established product (x86) and one that they might get sued over (ARM). No contest.

    5. 520

      Emulation has already been found to be entirely legal. See Bleemcast vs Sony Computer Entertainment.

      1. DougS Silver badge

        @520 - Sony v Bleemcast

        AFAIK, that lawsuit revolved around copyright, not patents, so isn't applicable to this case.

  3. Old Used Programmer

    40 year old tech....

    ....20-year patent protection is what immediately came to mind. They'll only be able to claim patent protection on the most recent half of what they've done, if they can claim it at all.

    1. Jack of Shadows Silver badge

      Re: 40 year old tech....

      Unfortunately, most/much of that portfolio is around SIMD, and right there is the basis of their threat. Still, I wouldn't miss the IME security hole if it disappeared.

      1. Voland's right hand Silver badge

        Re: 40 year old tech....

        most/much of that portfolio is around SIMD

        SIMD is by no means an Intel/x86-specific invention. There are similar instruction sets in other architectures - PPC, MIPS and, most importantly, NEON in ARM. I have some doubts that Intel would be successful trying to press anything against ARM in SIMD land.

        1. DougS Silver badge

          Re: 40 year old tech....

          The patents cover SSE and AVX specifically, and are the reason why AMD introduced their '3DNow!' instructions instead of SSE - Intel didn't grant them a patent license for SSE. When AMD introduced their 64-bit extension, Intel obviously needed access to that, so they signed a full cross-license, which is why AMD was able to support Intel's SIMD implementations of SSE and AVX and drop 3DNow!

          1. tygrus.au

            Re: 40 year old tech....

            Intel had started work on its own 64-bit extension to x86 when AMD was talking with Microsoft about x86-64 support. Microsoft made it clear to Intel that it would only support one x86-64 version, and AMD was going to be first to market and win out. Intel was already in conflict with AMD over newer extensions (SSE etc.) and facing threats of anti-trust lawsuits in the EU. Intel chose to take the easy road and cut a cross-licensing deal with AMD. With a few minor changes to the core, plus changes to the decoders and microcode, Intel got all but a few instructions completely compatible (I remember there was an early bug where Intel CPUs didn't quite match the AMD behaviour). Intel copy & pasted the AMD ISA, did a find & replace of AMD64 with EM64T, and regained market domination starting with the Core 2 series.

            1. DougS Silver badge

              Re: 40 year old tech....

              Actually Intel already had 64 bit support in shipping P4 CPUs when AMD announced Opteron/Athlon 64 and got Microsoft's buy-in.

              Intel wanted to push everyone to Itanium to get 64 bits - first on servers, then workstations, eventually consumer PCs/laptops down the road - since it was fully patented and would be a legal monopoly with no pesky AMD nibbling at their heels. They had 64-bit support ready to go in the P4 in case they ran into issues, but the one thing they didn't foresee was Microsoft supporting an AMD-developed 64-bit implementation. Because Microsoft said they'd only support one, it was too late and Intel had to scramble to implement AMD's version of 64 bits. Because Itanium no longer had that push behind it, Intel's investment in it dried up and it is currently on its last version (a contractual requirement with HP, who co-developed it).

      2. thames

        Re: 40 year old tech....

        @Jack of Shadows - "unfortunately, most/much of that portfolio is around SIMD and right there is the basis of their threat."

        This is software emulation, not a new chip. Actually it's probably not even real emulation, but rather binary translation. That is, it would be cross-compiling x86 binary instructions to corresponding ARM binary instructions.

        ARM has its own SIMD. If the emulator can translate x86 SIMD binary instructions directly to corresponding ARM SIMD binary instructions, then there's no problem as the ARM chip is implementing it directly already.

        The only way Intel's patents can mean anything is if their x86 chip is doing something related to SIMD that doesn't exist in ARM. And if it is doing that, then the ARM chip has to do it using normal non-SIMD instructions anyway. It's pretty easy to imagine the binary translator seeing an x86 SIMD instruction that doesn't exist in ARM and just calling a library function that accomplishes the same thing using conventional instructions. I can't see Intel's patents coming into play in that case.
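        To make the two paths concrete, here is a sketch of what the ARM-side output of such a translator might look like (arm_neon.h and its intrinsics are real; the translation scheme around them is my invention):

            /* Sketch of translator output; compile on ARM with NEON. */
            #include <arm_neon.h>

            /* x86 ADDPS maps one-to-one onto a NEON instruction */
            float32x4_t translated_addps(float32x4_t a, float32x4_t b) {
                return vaddq_f32(a, b);
            }

            /* SSE3 ADDSUBPS (subtract in even lanes, add in odd lanes) has
               no single NEON twin, so the translator falls back to a plain
               scalar helper - the library-function path described above */
            void helper_addsubps(float dst[4], const float a[4], const float b[4]) {
                dst[0] = a[0] - b[0];
                dst[1] = a[1] + b[1];
                dst[2] = a[2] - b[2];
                dst[3] = a[3] + b[3];
            }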

        I've been doing a bunch of work using SIMD recently, and what I can say about Intel's SIMD instruction set is that there may be a lot of instructions, but that's mainly because there are multiple overlapping sets of instructions that do very similar things. They just kept adding new instructions that were variations on the old ones while also retaining the older ones, resulting in a huge, tangled, horrifying mess of legacy instructions which they can't get rid of because some legacy software out there might use them.

        Off the top of my head, the only SIMD feature I have run across so far that Intel has a unique patent on is a load instruction which can automatically align SIMD data that was not aligned in RAM. It sounds great, but it's not as big a deal as you might think, since good practice would have you simply align the arrays to begin with when you declare them. It's mostly of use to library writers who want to handle non-aligned as well as aligned arrays for some reason, and you take a performance hit for that flexibility.
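        For the curious, this is the distinction in practice (the intrinsics are Intel's real ones; the variable names are mine):

            /* Aligned vs. unaligned SSE loads: declaring the data aligned
               up front (the 'good practice' above) permits the plain
               aligned load; _mm_loadu_ps exists for everything else. */
            #include <immintrin.h>
            #include <stdalign.h>

            alignas(16) float a[4] = {1, 2, 3, 4}; /* aligned at declaration */
            float b[8];                            /* b + 1 is misaligned    */

            void demo(void) {
                __m128 va = _mm_load_ps(a);      /* requires 16-byte alignment */
                __m128 vb = _mm_loadu_ps(b + 1); /* tolerates any alignment,
                                                    historically at a cost    */
                _mm_storeu_ps(b, _mm_add_ps(va, vb));
            }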

        I suspect that software publishers, including Microsoft, will offer native ARM ports for the most popular applications rendering this moot so far as they're concerned.

    2. yet_another_wumpus

      Re: 40 year old tech....

      The tech is 17 years old in August, counting from AMD publishing the 64-bit spec. No idea when AMD filed the patents (current US law is 20 years from filing), but they can't last much longer.

      While "patent troll" might be specific to "non-practicing entities" "patent abuse" is certainly part and parcel of large tech firms. Other than the insanity of "shield patents" (good God, why should these be necessary), I can't see a company filing for a patent for a non-troll purpose.

  4. Mikel

    East Texas

    I read somewhere that the litigation-friendly court of East Texas had been defanged by a ruling that patent complaints have to be filed in the defendant's home district.

    1. TechnicalBen Silver badge
      Trollface

      Re: East Texas

      I really wish I'd taken up my idea to swamp the courts in Texas with thousands and millions of frivolous cases so none of these patent ones could go through. I'd make a nice little penny from all the companies wanting to avoid the real case fees!

    2. DougS Silver badge

      Re: East Texas

      As the battle between Samsung and Apple in California shows, while East Texas may be the most patent holder friendly it isn't like the other districts will immediately quash any such lawsuits. We might be reading about Intel vs Microsoft with Judge Koh presiding in a couple years...

  5. Nolveys Silver badge
    Meh

    Intel welcomes lawful competition.

    Cough.

    1. Steve Knox Silver badge
      Happy

      Of course they welcome lawful competition

      They've spent billions of dollars over the years writing those laws, they'd love it if their competitors followed them...

    2. werdsmith Silver badge

      Intel are within their rights to defend their patents, I suppose, but that doesn't stop me hating them for it.

      1. Prst. V.Jeltz Silver badge
        Happy

        Qualcomm, however, was unfazed by the warning. The chip designer pretty much told Intel to go fsck itself.

        PMSL

  6. John Savard Silver badge

    Briar Patch

    It is true that legacy x86 software is one of the things that makes Windows so attractive, compared to Linux or the Macintosh.

    But Microsoft also wants to encourage people to sell applications through the Windows Store, and to write them in managed code for the new post-Windows 8 interface formerly known as Metro.

    So if Intel manages to hobble x86 emulation on Windows for ARM (cases concerning z/Architecture emulation on the Itanium come to mind as a precedent) this may not be a total disaster for Microsoft.

    1. Updraft102 Silver badge

      Re: Briar Patch

      If MS thought they could get by without the emulation, they would not have put so much time, effort, and cash into developing it. They're hoping eventually to have enough UWP apps to allow them to deprecate Win32, but those apps don't exist now, and a device that can only run UWP is presently dead in the water (like Windows Phone). Unless and until that UWP library exists, emulation is going to be the only thing that makes a Windows ARM device usable for people who need more than what the few existing UWP apps can do (nearly everyone, in other words).

    2. kryptylomese

      Re: Briar Patch

      "It is true that legacy x86 software is one of the things that makes Windows so attractive, compared to Linux or the Macintosh."

      What are you talking about? Linux can run legacy software, including software written for Unix, and it doesn't have compatibility problems between versions.

      Windows ties you into versions very tightly - this is not attractive to anyone except Microsoft.

      Please tell us more about "attractive" Windows server because the market has spoken and it is on its way out!

      1. Anonymous Coward
        Anonymous Coward

        Re: Briar Patch

        "Linux can run legacy software including that written for Unix and it doesn't have compatibility problems between versions"

        Rubbish, stuff is always breaking on Linux because of dependency changes and version updates.

  7. John Brown (no body) Silver badge

    emulation?

    Isn't the x86 instruction set just an emulation on Intel hardware anyway? Isn't that why CPU microcode updates are possible?

  8. Wade Burchette

    Tough Times at Santa Clara

    Intel is in a dangerous position right now. They have the capital to recover, but they must be prudent. Many people don't realize how good a design Ryzen is at the upper-upper end. When Intel needs to make a 16-core chip, they have to make a large 16-core die. When AMD releases Threadripper and Epyc, to get to 16 cores they can just connect two 8-core dies together - AMD calls the interconnect Infinity Fabric. The cost to make a 16-core die is significantly higher than the cost to make two 8-core dies, even considering Intel has some of the best fabs in the world. The Core i7/Xeon may be faster for games, but it won't be able to compete with AMD on price/performance. A 16-core Xeon might be better than a 16-core Epyc, but not $1000 better. And with the Ryzen design, AMD can make a 32-core CPU, price it the same as a 16-core Xeon, and still make a huge profit.
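    A back-of-the-envelope sketch of why two small dies beat one big one (the yield model and every number here are illustrative assumptions, not AMD or Intel figures):

        /* Toy die-cost model: yield falls roughly exponentially with die
           area, Y = e^(-A*D), so big dies waste disproportionate wafer. */
        #include <math.h>
        #include <stdio.h>

        int main(void) {
            double D   = 0.1;           /* assumed defects per cm^2        */
            double y8  = exp(-2.0 * D); /* yield of a 2 cm^2 8-core die    */
            double y16 = exp(-4.0 * D); /* yield of a 4 cm^2 16-core die   */

            /* wafer area consumed per *good* 16-core part */
            printf("monolithic 16-core: %.2f cm^2 per good part\n", 4.0 / y16);
            printf("2 x 8-core MCM:     %.2f cm^2 per good part\n", 2.0 * (2.0 / y8));
            return 0;
        }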

    Add to all that the pressure of ARM CPUs. I don't know how the future plays out, but if people ever decide they don't need legacy support, ARM has an easy path to desktops/laptops.

    And why did Intel make Thunderbolt an open standard? What did they gain? This was a prized plum for them; it kept Apple locked in to Intel. But Apple has been getting closer to AMD lately - look at the new iMac Pro with a Vega GPU in it. Some people feel that Apple put pressure on Intel to open Thunderbolt. If so, Apple could be using a future Ryzen APU as leverage for better prices.

    Intel has the money and engineers to copy AMD's design, but even if they started today, it would still be years before such a design came to market. Intel won't be able to put illegal pressure on OEMs anymore to hold AMD back like in the Athlon 64 days; the fines probably wouldn't be worth it this time around. The best they can do is continue to pay for OEM ads like they do now. (That is standard in business. My friend has an HVAC business; when he sells a lot of A/Cs of a particular brand, that company buys an ad for his business proportional to the amount he sells.) At least Intel understands marketing, unlike AMD.

    Intel better plan for the future well. We need Intel. And AMD. And Qualcomm. And NVidia. And other CPU/GPU companies. When there is competition, we all win.

    1. Yet Another Anonymous coward Silver badge

      Re: Tough Times at Santa Clara

      Or Microsoft / Apple / Softbank just buy ASML.

      Hey Intel, nice chip design you have there, be a shame if you couldn't build a fab, wouldn't it!

    2. bazza Silver badge

      Re: Tough Times at Santa Clara

      There's also the question of ARM servers. There's a big chance that the big data centre operators might go for ARM there, to save power. Data centres are where Intel's profit comes from these days. If that starts happening, then Intel is in deep trouble.
