Intel to Qualcomm and Microsoft: Nice x86 emulation you've got there, shame if it got sued into oblivion

Intel used the 40th anniversary of its x86 architecture to warn chip rivals to mind their step when emulating its instruction set. The reigning desktop and data center server CPU king (AMD, you know it's true) said it would not hesitate to sic its lawyers on any competitor whose emulation tools got too close to copying the …

  1. Blotto Silver badge

    I wonder what Intel would do if both M$ and Apple went ARM-first on their primary OSes.

    Where is Intel's ARM competitor?

    Where is Intel in the mobile phone or tablet space?

    Where is Intel in discrete GPUs?

    1. Anonymous Coward
      Anonymous Coward

      At this point...

      Would it dent Intel? That's like saying "What would Apple do if Google..." oh wait, or "What would Nvidia do if AMD/Radeon..." Such companies cannot really lose to their competitors: even if they lose certain markets, they still own all the others.

      1. CheesyTheClown

        Re: At this point...

        I believe this is the right direction to think in.

        Intel isn't trying to secure a position in the mobile device market. That ship has sailed. In fact, with the advent of WebAssembly, it is likely that x86 vs ARM will have little or no real impact now. Intel's real problem with mobile platforms like Android was the massive amount of native code written for ARM that wouldn't run unmodified on x86. With WebAssembly that will change.

        Intel is more concerned that, with Microsoft actively implementing the subsystem required to thunk between emulated x86 code and native ARM system libraries, it will now be possible to run x86 Windows code unmodified on ARM... or anything else, really.

        That means there is nothing stopping desktops and servers from shipping with the same code as well. This does concern Intel. If Microsoft wants to do this, it will have to license the x86 (more specifically, modern SIMD) IP. Intel will almost certainly agree, but it will be expensive, since it could theoretically have a very widespread impact on Intel chip sales.
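
        (Conceptually, such a thunk is just a small shim that stops the emulator, marshals the guest's x86 arguments across to the native ARM ABI, and calls the real library. A minimal sketch in C follows - every name here is invented for illustration; this is not Microsoft's actual mechanism:)

          /* Hypothetical emulator-to-native thunk. The emulated program
           * calls into a system DLL; instead of emulating the DLL, we
           * pull the arguments off the guest stack and call the native
           * ARM implementation directly. */
          #include <stdint.h>

          static uint8_t *guest_base;            /* host mapping of guest memory */

          typedef struct {                       /* simplified guest CPU state */
              uint32_t eax, esp;
          } GuestCpu;

          /* stub for the native ARM system function we expose to the guest */
          static uint32_t NativeWriteFile(uint32_t h, const void *buf, uint32_t n)
          {
              (void)h; (void)buf;
              return n;                          /* pretend we wrote n bytes */
          }

          static void Thunk_WriteFile(GuestCpu *cpu)
          {
              uint8_t *sp = guest_base + cpu->esp;      /* guest stack, host view */
              uint32_t h   = *(uint32_t *)(sp + 4);     /* args sit above the return address */
              uint32_t buf = *(uint32_t *)(sp + 8);
              uint32_t n   = *(uint32_t *)(sp + 12);

              cpu->eax = NativeWriteFile(h, guest_base + buf, n);
              cpu->esp += 16;   /* pop return address + three args (stdcall-ish);
                                   a real thunk would also resume at that address */
          }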

        Of course Apple, which proved with Rosetta that this technology works, could have moved to ARM years ago. It probably didn't because it decided instead to focus on cross-architecture binaries via LLVM to avoid emulating x86. Apple will eventually be able to transition with almost no effort, because all code on Mac is compiled cross-platform... except hand-coded assembly. Microsoft hasn't been as successful with .NET, but recent C++ compilers from Microsoft are going that way as well. The difference is that Microsoft has never had the control over how software is made for Windows that Apple has had for Mac or iOS.

    2. Anonymous Coward
      Anonymous Coward

      Re: Blotto

      Intel have competitors to ARM, but they are currently double the price - and their x86 chips are 2-1000x the price, depending on the performance chosen.

      Intel doesn't have a cost-competitive answer to ARM because the market hasn't forced them to provide one yet. And I'm not sure Intel (at least as we know it) could survive on ARM margins... Those $5B+ fabs are hard to justify if you need them to make 50 billion CPUs to break even...

      1. Anonymous Coward
        Anonymous Coward

        Re: Blotto

        " I'm not sure Intel (at least as we know it) could survive on ARM margins"

        ARM doesn't actually manufacture its chips, it simply patents them and licenses the design... if Intel followed this model, it wouldn't need those $5 billion fabs!

        1. Anonymous Coward
          Anonymous Coward

          Re: Blotto

          "ARM doesn't actually manufacture its chips, it simple patents them and licenses the design.. if Intel followed this model, it wouldn't need them $5 billion fabs!"

          s/ARM/ARM ecosystem/g

          The overall cost of producing an ARM-based chip tends to be in the US$1-US$20 range, with US$7-US$15 typical for smart devices, on a per-SoC basis including the necessary licensing for ARM/third parties, fab costs and testing/QC. The fab prices are in there, but they are at least one or two generations of cost/technology behind Intel and utilised at a much higher level (customers are stacked up and designs are delayed if there are problems).

          Intel's matching costs, running everything in-house, are in the US$15-US$50 range.

    3. Anonymous Coward
      Anonymous Coward

      x86 bloated!!!

      Be-jesus! Intel's blog shows a picture stating that they have added more than 3,500 instructions to x86! Can that really be true? How many transistors are needed for that?

      That is why a RISC with 10 million transistors is faster than a CISC with 50 million transistors (10 million x86 transistors are needed just to figure out which x86 instruction you just read and where the next instruction starts, 20 million are never used but are needed for backwards compatibility, 20 million are used for cache, etc).

      1. CheesyTheClown

        Re: x86 bloated!!!

        Some would also suggest that RISC suffers similar problems when optimized for transistor depth where highly retiring operations are concerned. Modern CISC processors have relatively streamlined execution units, which are what consume most of their transistors... as with RISC. However, a RISC design that has to increase its instruction word size regularly to expand functionality suffers the burden of either requiring more instructions than CISC for the same operation, or paying a higher cost for data fetching, which results in longer pipelines with a greater probability of cache misses. Since level-2 cache and above, as well as DDR, generally depend on bursts for fetches, RISC with narrow instruction words can be a problem. Also consider that pipeline optimization of RISC instruction sets which allow branch conditions on every instruction can be highly problematic for memory operations.

        Almost all modern CPUs implement legacy instructions (such as 16-bit operations) in microcode, which behaves rather like a JIT compiler translating instructions in software.
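
        (As a purely illustrative sketch of that analogy - this is not any real microarchitecture's decoder - a microcoded legacy instruction is just a table entry that expands into a handful of simpler internal operations:)

          /* Toy table-driven "microcode" expansion: one rarely-used legacy
           * instruction cracks into several simple micro-ops, which the
           * fast execution units then run. */
          #include <stdio.h>

          typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE, UOP_END } Uop;

          /* e.g. a legacy 16-bit read-modify-write ADD against memory */
          static const Uop legacy_add_m16[] = { UOP_LOAD, UOP_ADD, UOP_STORE, UOP_END };

          static void issue(const Uop *seq)
          {
              for (; *seq != UOP_END; seq++)
                  printf("issue uop %d\n", (int)*seq);  /* stand-in for the core */
          }

          int main(void)
          {
              issue(legacy_add_m16);  /* one architectural instruction, three uops */
              return 0;
          }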

        Most modern transistors on CPUs are spent on operations such as virtual memory, fast cache access and cache coherency.

  2. Ken Hagan Gold badge

    This sounds a lot like Microsoft's attempts to spread FUD about how many patents protected FAT. Patents concerning how the ISA is implemented in hardware cannot possibly be relevant to a software emulation. Patents concerning how the ISA was cleverly designed to enable an optimal implementation may not be relevant either, if some court decides (as with the Java lawsuit) that you can't protect an interface. The risk for Intel is that this strategy backfires and emulation is found in court to be entirely legal.

    1. Anonymous Coward
      Anonymous Coward

      No risk for Intel

      When Microsoft introduces the capability, Intel sues. Case spends a few years in trials and appeals, meanwhile adoption of Windows on ARM is very limited because of uncertainty about its headline capability of emulating x86 Win32 apps.

      Even if Intel eventually loses the court case, they get several more years of x86 being the only option for running Windows, and pocket billions as a result. If they end up having to pay Microsoft's court costs, it's chickenfeed compared to the many millions of additional x86 CPUs they'll sell.

      The real loser in all this would be Qualcomm, who would have the Snapdragon 835 ready for PC OEMs to install in low-end Windows-on-ARM PCs, but few takers. And potentially Apple, if they are planning to migrate the Mac to their ARM SoCs without losing their customers' ability to run Windows apps (it is unclear whether they want to do this, or whether losing the ability to run Windows apps is what has prevented it so far, but it is possible).

      1. Eddy Ito

        Re: No risk for Intel

        I can still see them being slapped early, which would force them into playing defense and pushing it through appeals. Each setback is going to greatly embolden folks to give it a try. Before you know it, everyone from Nvidia to Sunway to POWER has Windows running via emulation. It might not make sense for Sunway or POWER, but I have to believe Nvidia would love to have Windows running on their Tegra series of ARM SoCs, and I'd think the graphics side would be a snap.

        1. Anonymous Coward
          Anonymous Coward

          Re: No risk for Intel

          Why would they be 'slapped early'? Intel's patents definitely hold for actual chip implementations; the only question is whether they also hold for software emulation of the chip. I wouldn't be surprised if Intel ends up winning the case, but if they don't, it isn't going to be something that is shot down quickly.

          1. Eddy Ito

            Re: No risk for Intel

            Why would they be 'slapped early'

            If I could read judges' minds, I'd have retired long ago. Having not read the patents myself, I can see it happening simply because, very often in cases like this, the judges are a fickle and non-technical lot. At times it's largely a roll of the dice, with the decision hinging on which lawyer is more eloquent and convincing that particular day rather than on the actual bare facts. Hell, if it were solely based on facts, the judge could just read the patent in chambers, compare it to the implementation, and save everyone a bunch of money - but that never happens. Most times I'd be rather surprised to find the judge ever actually read the patent word for word, especially the long-winded ones that run on for pages on end.

    2. Simon Harris

      How the ISA was cleverly designed...

      To make it easy to translate 8080 assembly code from the 1970s to run on it...

      With the 16-bit operations and 1MB addressing bodged onto the top (8086)...

      And 16MB addressing bodged on top (80286)

      And 32 bit operations and more address space bodged on top (80386)

      ...

      1. Anonymous Coward
        Anonymous Coward

        @Simon Harris - "How the ISA was cleverly designed..."

        Intel's heart was in the right place when they made many of their ISA and chip decisions. They just didn't execute them very well.

        Imagine if segments on the 808x had been page (256B) aligned instead of paragraph (16B) aligned. And had they released an 80186 core in an 8086 package. And had they released an 80286SX that made the MMU an optional external chip (like the MC68451 and '851). It would have made life prior to the 80386 cheaper, faster, and a whole lot less miserable (no need for EMS or XMS).
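
        (The arithmetic behind that first wish is easy to check: with 16-bit segment registers, the alignment shift sets the reachable address space. A quick worked example in C:)

          /* Real-mode style address = (segment << shift) + offset.
           * The 8086 used shift = 4 (16-byte paragraphs)  -> ~1MB reachable.
           * 256-byte alignment (shift = 8) would have reached ~16MB with
           * the very same 16-bit segment registers. */
          #include <stdio.h>

          static unsigned long linear(unsigned seg, unsigned off, unsigned shift)
          {
              return ((unsigned long)seg << shift) + off;
          }

          int main(void)
          {
              printf("shift 4 tops out at %#lx (~1MB)\n",  linear(0xFFFF, 0xFFFF, 4));
              printf("shift 8 tops out at %#lx (~16MB)\n", linear(0xFFFF, 0xFFFF, 8));
              return 0;
          }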

        For all their past mistakes, the 80386 did resolve most issues. Flat memory, 32 bit registers, more orthogonal instruction set, V86 mode, paging, real/prot mode switching, etc...

        It just sucks that neither Microsoft nor Digital Research released a proper 32-bit successor to DOS at the time. Imagine a lightweight text-mode version of Windows 95 back in '86. Instead, you had to muck with DOS extenders or go down the expensive path of a GUI-based OS, like OS/2 or Win 2.x/3.x. Yuk.

        1. This post has been deleted by its author

        2. Anonymous Coward
          Anonymous Coward

          In 1986 most people lacked the RAM to run a real multitasking OS. Many PCs barely had the RAM to run real-mode applications or the early GUI systems - usually not more than 1-2MB. 4MB only became standard around 1994.

          Meanwhile multitasking, which forbids direct hardware access, would have required most applications to be deeply rewritten - and that was exactly what many software developers feared back then, when real-mode applications still sold like hotcakes, often at prices of several hundred dollars (people forget how expensive software was then). Many failed the transition to Windows (and OS/2) for exactly that reason.

          There were attempts to write a "DOS better than DOS", but all of them failed for lack of interest, apart from some DOS extenders. The advantages a GUI offered were too big to ignore, and still the obstacles to rewriting applications meant the failure of many DOS companies.

        3. Simon Harris

          @toejam

          "For all their past mistakes, the 80386 did resolve most issues. Flat memory, 32 bit registers, more orthogonal instruction set"

          In other words, the issues that Motorola got more-or-less right in the first place with the 68000. While I've spent almost all my professional life dealing with Intel based systems, I've always thought the way Motorola broke with the 8-bit architecture* when producing a more advanced processor was a better approach.

          Certainly, some of your suggestions (e.g. 256-byte-aligned segments) would have given a 16MByte address range (comparable with the 68000), and the 80186 instructions would have been nice to have from the start (if I remember correctly, there were some 80186 'almost PC compatibles' around - the 80186's built-in peripherals and associated interrupt map didn't quite match those used in a standard PC). But going from the 8080 to Pentium-class and beyond CPUs seemed more like a fade-in than a step-change, with current generations carrying all the baggage of previous ones.

          In a sense, keeping all the previous baggage makes people (including me) lazy/stingy (you decide!) - I was still using software originally written for an MSDOS 3.1 8086 machine when I had a 486 machine with Windows98SE.

          *admittedly not entirely, as some instructions were designed to allow easy use of their 8-bit IO devices.

          1. Anonymous Coward
            Anonymous Coward

            Re: @toejam

            Flat memory will have to go away because it is insecure. It's simpler, but when every bit is readable/writable/executable you have a security issue. Intel segments had security access controls - i.e. a segment could be executable but not readable, and of course not writable, while data segments could be non-executable and read-only. Try to implement ROP in such an environment... many other code injection techniques would fail too.

            AMD was very shortsighted when it threw them away in x64. Intel was ahead of its time in 1982.

            If in future we want OSes that are secure from the ground up, CPUs will need to reintroduce ways to protect memory beyond simple NX bits for pages, and give proper access rights to every piece of memory depending on what it is used for.

            Having multiple execution rings was also a good idea - privilege escalations would be much harder if not everything below user mode were running at the highest privilege level.

        4. CheesyTheClown

          Instruction stayed the same, the core changes

          You have a lot of great points. I always considered the 64KB page a smart decision given backwards compatibility with the 8085. It also worked really well for paging magic on EMS, which was not much more difficult to manage than normal x86 segment paging. XMS was tricky as heck, and DOS extenders were really only a problem because compiler tech seemed locked into Phar Lap and others' $500+ solutions at the time.

          I don't know if you recall Intel's iRMX, which was a pretty cool (though insanely expensive) 32-bit DOS, for lack of a better term. It even provided real-time extensions, which were really useful until we learned that real-time absolutely sucks for anything other than machine control.

          Also, DOS was powerful because it was a standardized 16-bit API extension to the IBM BIOS. A 32-bit DOS would have been amazingly difficult, as it would have required all software to be rewritten, since nearly everything was already designed around paged memory. In addition, since most DOS software avoided using Int 21h for display functions (favoring Int 10h or direct memory access) and many DOS programs used Int 13h directly, it would have been very difficult to implement a full 32-bit replacement for DOS.

          Remember: on the 286, and sometimes on the 386, entering protected mode was easy, but switching back out was extremely difficult, as it generally required a simulated bootstrap. That meant accessing 16-bit APIs from 32-bit code might not have been possible; they would have had to be rewritten. For most basic I/O functions that wouldn't be problematic, but specifically in the case of non-ATA (or MFM/RLL) storage devices, the API was provided by vendor BIOSes that reimplemented Int 13h. So, in order to make those work, device drivers would not have been optional.

          In truth, the expensive 32-bit windowed operating systems, with a clear differentiation between processes and system-call-oriented cross-process communication APIs based on C structures, made more sense. In addition, with RAM still expensive and most systems having 2MB or less, page exception handling and virtual memory made A LOT of sense, as developers had access to as much memory as they needed (even if it was page-swapped virtual memory).

          I think, in truth, most of the problems we encountered were related to a >$100 price tag. Microsoft always pushed technology by making its tech more accessible to us. There were MANY superior technologies, but Microsoft always delivered at price points we could afford.

          Oh... and don't forget to blame Borland. They were probably the biggest driving factor behind the success of DOS and Windows, by shipping full IDEs with project file support, integrated debuggers (don't forget second-CRT support), assembler integration (inline or TASM) and affordable profilers (I lived inside Turbo Profiler for ages). Operating system success has almost always been directly tied to the accessibility of cheap, good, easy-to-use development tools. OS/2 totally failed because, even though Warp 3.0 was cheap, no one could afford a compiler and SDK.

    3. TheVogon

      "This sounds a lot like Microsoft's attempts to spread FUD about how many patents protected FAT."

      Microsoft won nearly every patent case regarding that, I believe. Just as everyone now has to pay them to use exFAT.

      1. Anonymous Coward
        Anonymous Coward

        "Just like everyone now has to pay them to use exFAT."

        And yet the MS fans still wonder why so many techies hate that company so much.

        1. phuzz Silver badge
          Facepalm

          "And yet the MS fans still wonder why so many techies hate that company so much."

          Yeah, screw those guys for working to develop a file system and actually expecting other companies to pay to use it.

          1. Anonymous Coward
            Anonymous Coward

            "Yeah, screw those guys for working to develop a file system and actually expecting other companies to pay to use it."

            If it was for internal (e.g. Windows) use then sure. When it's forced into other standards (looking at you, SDXC) and makes an entire ecosystem outside of them dependent on it, then yes absolutely screw them.

            A filesystem that is aimed primarily at exchanging data between devices shouldn't be held ransom to a single company and should be openly documented and standardised. Look at UDF for example - it's a shame that doesn't see wider adoption.

            All this while they shout out about how they love Linux and are encouraging interoperability between devices. It's hard to take that message seriously while they still play games like this.

    4. Archivist

      FUD

      Yes, and the art is for Intel to threaten not Microsoft, but the end users. I can just see a buying dept making a choice between an established product (x86) and one they might get sued for (ARM). No contest.

    5. 520

      Emulation has already been found to be entirely legal. See Bleemcast vs Sony Computer Entertainment.

      1. Anonymous Coward
        Anonymous Coward

        @520 - Sony v Bleemcast

        AFAIK, that lawsuit revolved around copyright, not patents, so isn't applicable to this case.

  3. Old Used Programmer

    40 year old tech....

    ....20-year patent protection is what immediately came to mind. They'll only be able to claim patent protection on the most recent half of what they've done, if they can claim it at all.

    1. Anonymous Coward
      Anonymous Coward

      Re: 40 year old tech....

      Unfortunately, most/much of that portfolio is around SIMD, and right there is the basis of their threat. Still, I wouldn't miss the IME security hole if it disappeared.

      1. Voland's right hand Silver badge

        Re: 40 year old tech....

        most/much of that portfolio is around SIMD

        SIMD is by no means an Intel/x86 specific invention. There are similar instruction sets in other architectures - PPC, MIPS and most importantly - Neon in ARM. I have some doubts that Intel will be successful trying to press anything vs ARM in SIMD land.

        1. Anonymous Coward
          Anonymous Coward

          Re: 40 year old tech....

          The patents cover SSE and AVX specifically, and are the reason why AMD introduced their '3DNow!' instructions instead of SSE - Intel didn't grant them a patent license for SSE. When AMD introduced their 64 bit extension, obviously Intel needed access to that, so they signed a full cross license which is why AMD was able to support Intel's SIMD implementations of SSE and AVX and drop 3DNow!

          1. tygrus.au

            Re: 40 year old tech....

            Intel had started work on a 64-bit extension to x86 when AMD was talking with Microsoft about x86-64 support. Microsoft made it clear to Intel that MS would only support one x86-64 version, and AMD was going to be first to market and win out. Intel was already in conflict with AMD over newer extensions (SSE etc.) and threats of anti-trust lawsuits in the EU. Intel chose to take the easy road and cut a cross-licensing deal with AMD. With a few minor changes to the core, plus changes to the decoders and microcode, Intel got all but a few instructions completely compatible (I remember there was an early bug where Intel CPUs didn't quite match the AMD behaviour). Intel copy-and-pasted AMD's ISA, did a find-and-replace of AMD64 with EM64T, and regained market dominance starting with the Core 2 series.

            1. Anonymous Coward
              Anonymous Coward

              Re: 40 year old tech....

              Actually Intel already had 64 bit support in shipping P4 CPUs when AMD announced Opteron/Athlon 64 and got Microsoft's buy-in.

              Intel wanted to push everyone to Itanium to get 64 bits - first on servers, then workstations, eventually consumer PCs/laptops down the road - since it was fully patented and would be a legal monopoly with no pesky AMD nibbling at their heels. They had 64-bit support ready to go in the P4 in case they ran into issues, but the one thing they didn't foresee was Microsoft supporting an AMD-developed 64-bit implementation. Because Microsoft said it would only support one, it was too late, and Intel had to scramble to implement AMD's version of 64 bits. Because Itanium no longer had that push behind it, Intel's investment in it dried up, and it is currently on its last version (a contractual requirement with HP, who co-developed it with them).

      2. thames

        Re: 40 year old tech....

        @Jack of Shadows - "unfortunately, most/much of that portfolio is around SIMD and right there is the basis of their threat."

        This is software emulation, not a new chip. Actually it's probably not even real emulation, but rather binary translation. That is, it would be cross-compiling x86 binary instructions to corresponding ARM binary instructions.

        ARM has its own SIMD. If the emulator can translate x86 SIMD binary instructions directly to corresponding ARM SIMD binary instructions, then there's no problem as the ARM chip is implementing it directly already.

        The only way Intel's patents can mean anything is if their x86 chip is doing something related to SIMD that doesn't exist in ARM. And if it is doing that, then the ARM chip has to do it using normal non-SIMD instructions anyway. It's pretty easy to imagine the binary translator seeing an x86 SIMD instruction that doesn't exist in ARM and just calling a library function that accomplishes the same thing using conventional instructions. I can't see Intel's patents coming into play in that case.
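
        (A sketch of that fallback strategy, with every name invented for illustration - real binary translators are far more involved:)

          /* Toy dispatch inside a hypothetical x86->ARM binary translator:
           * SIMD ops with a direct NEON equivalent are emitted as native
           * instructions; anything exotic is lowered to a call into a
           * plain-C helper built from conventional instructions. */
          typedef enum { X86_PADDD, X86_ADDPS, X86_EXOTIC } X86SimdOp;

          static void emit_neon(const char *mnemonic) { (void)mnemonic; /* write ARM code */ }
          static void emit_call(void (*helper)(void))  { (void)helper;  /* write a BL */     }

          static void helper_exotic(void) { /* scalar re-implementation */ }

          static void translate_simd(X86SimdOp op)
          {
              switch (op) {
              case X86_PADDD: emit_neon("vadd.i32"); break;  /* 1:1 NEON mapping */
              case X86_ADDPS: emit_neon("vadd.f32"); break;
              default:        emit_call(helper_exotic);      /* no NEON twin */
              }
          }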

        I've been doing a bunch of work using SIMD recently, and what I can say about Intel's SIMD instruction set is that there may be a lot of instructions but that's mainly because there are multiple different overlapping sets of instructions that do very similar things. They just kept adding new instructions that were variations on the old ones while also retaining the older ones, resulting in a huge, tangled, horrifying mess of legacy instructions which they can't get rid of because some legacy software out there might use it.

        Off the top of my head, the only SIMD feature I have run across so far that Intel has a unique patent on is a load instruction with the ability to automatically SIMD-align arrays which were not aligned in RAM. It sounds great, but it's not as big a deal as you might think, since good practice would have you simply align the arrays to begin with when you declare them. It's mostly of use to library writers who want to handle non-aligned as well as aligned arrays for some reason. You take a performance hit for that flexibility, however.
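
        (The aligned/unaligned distinction is visible right down at the intrinsics level. A small SSE example - GCC/Clang syntax, compile with -msse:)

          /* _mm_load_ps requires a 16-byte-aligned pointer; _mm_loadu_ps
           * tolerates anything but has historically been slower. Aligning
           * the array when you declare it, as suggested above, lets you
           * use the aligned form throughout. */
          #include <xmmintrin.h>

          float data[8] __attribute__((aligned(16)));

          void demo(void)
          {
              __m128 a = _mm_load_ps(data);       /* fine: data is 16-byte aligned */
              __m128 b = _mm_loadu_ps(data + 1);  /* data+1 is not, so unaligned load */
              _mm_storeu_ps(data + 4, _mm_add_ps(a, b));
          }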

        I suspect that software publishers, including Microsoft, will offer native ARM ports for the most popular applications rendering this moot so far as they're concerned.

    2. yet_another_wumpus

      Re: 40 year old tech....

      The 64-bit spec AMD published turns 17 years old in August. No idea when AMD filed the patents (current US law is 20 years from filing), but they can't last much longer.

      While "patent troll" might be specific to "non-practicing entities" "patent abuse" is certainly part and parcel of large tech firms. Other than the insanity of "shield patents" (good God, why should these be necessary), I can't see a company filing for a patent for a non-troll purpose.

  4. Mikel

    East Texas

    I read somewhere that the litigation-friendly court of East Texas had been defanged by a ruling that the complaint has to be filed in the defendant's home district.

    1. Anonymous Coward
      Trollface

      Re: East Texas

      I really wish I'd acted on my idea to swamp the courts in Texas with thousands upon thousands of frivolous cases so none of these patent ones could go through. I'd make a nice little penny from all the companies wanting to avoid the real case fees!

    2. Anonymous Coward
      Anonymous Coward

      Re: East Texas

      As the battle between Samsung and Apple in California shows, while East Texas may be the most patent holder friendly it isn't like the other districts will immediately quash any such lawsuits. We might be reading about Intel vs Microsoft with Judge Koh presiding in a couple years...

  5. Nolveys
    Meh

    Intel welcomes lawful competition.

    Cough.

    1. Steve Knox
      Happy

      Of course they welcome lawful competition

      They've spent billions of dollars over the years writing those laws, they'd love it if their competitors followed them...

    2. werdsmith Silver badge

      Intel are within their rights to defend their patents, I suppose, but that doesn't stop me hating them for it.

      1. Prst. V.Jeltz Silver badge
        Happy

        Qualcomm, however, was unfazed by the warning. The chip designer pretty much told Intel to go fsck itself.

        PMSL

  6. John Savard

    Briar Patch

    It is true that legacy x86 software is one of the things that makes Windows so attractive, compared to Linux or the Macintosh.

    But Microsoft also wants to encourage people to sell applications through the Windows Store, and to write them in managed code for the new post-Windows 8 interface formerly known as Metro.

    So if Intel manages to hobble x86 emulation on Windows for ARM (cases concerning z/Architecture emulation on the Itanium come to mind as a precedent) this may not be a total disaster for Microsoft.

    1. Updraft102

      Re: Briar Patch

      If MS thought they could get by without the emulation, they would not have put so much time, effort, and cash into developing it. They're hoping eventually to have enough UWP apps to allow them to deprecate Win32, but those apps don't exist in sufficient numbers now, and a device that can only run UWP is presently dead in the water (like Windows Phone). Unless and until that UWP library exists, the emulation is going to be the only thing that makes a Windows ARM device usable for people who need more than what the few existing UWP apps can do (nearly everyone, in other words).

    2. kryptylomese

      Re: Briar Patch

      "It is true that legacy x86 software is one of the things that makes Windows so attractive, compared to Linux or the Macintosh."

      What are you talking about? Linux can run legacy software, including software written for Unix, and it doesn't have compatibility problems between versions.

      Windows ties you into versions very tightly - this is not attractive to anyone except Microsoft.

      Please tell us more about "attractive" Windows server because the market has spoken and it is on its way out!

      1. Anonymous Coward
        Anonymous Coward

        Re: Briar Patch

        "Linux can run legacy software including that written for Unix and it doesn't have compatibility problems between versions"

        Rubbish, stuff is always breaking on Linux because of dependency changes and version updates.

  7. John Brown (no body) Silver badge

    emulation?

    Isn't the x86 instruction set just an emulation on Intel hardware anyway? Isn't that why CPU microcode updates are possible?

  8. Wade Burchette

    Tough Times at Santa Clara

    Intel is in a dangerous position right now. They have the capital to recover, but they must be prudent. Many people don't realize how good a design Ryzen is for the upper-upper end. When Intel needs to make a 16-core chip, they have to make one large 16-core die. When AMD releases Threadripper and Epyc, to get to 16 cores they can just connect two 8-core dies together - AMD calls the interconnect Infinity Fabric. The cost to make a 16-core die is significantly higher than the cost to make two 8-core dies, even considering that Intel has some of the best fabs in the world. The Core i7/Xeon may be faster for games, but it won't be able to compete with AMD on price/performance. A 16-core Xeon might be better than a 16-core Epyc, but not $1000 better. And with the Ryzen design, AMD can make a 32-core CPU, price it the same as a 16-core Xeon, and still make a huge profit.
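
    (The yield argument can be put in rough numbers with the classic Poisson die-yield model, yield = exp(-area x defect density). The figures below are made up for illustration - they are not Intel's or AMD's real numbers:)

      /* Why two small dies beat one big die on cost: silicon cost scales
       * roughly as area / yield, and yield falls exponentially with area. */
      #include <math.h>
      #include <stdio.h>

      int main(void)
      {
          double d0    = 0.2;   /* defects per cm^2 - illustrative only */
          double big   = 4.0;   /* monolithic 16-core die area, cm^2    */
          double small = 2.0;   /* 8-core die area, cm^2                */

          double y_big   = exp(-big * d0);    /* ~45% of big dies are good   */
          double y_small = exp(-small * d0);  /* ~67% of small dies are good */

          printf("relative cost, one 16-core die:  %.2f\n", big / y_big);
          printf("relative cost, two 8-core dies:  %.2f\n", 2 * small / y_small);
          return 0;
      }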

    Add to all that the pressure of ARM CPUs. I don't know how the future plays out, but if people ever decide they don't need legacy support, ARM has an easy path to desktops/laptops.

    And why did Intel make Thunderbolt an open standard? What did they gain? This was a prized plum for them. It kept Apple locked in to Intel. But Apple has been getting closer to AMD lately. Look at the new iMac pro with a Vega GPU in it. Some people feel that Apple put pressure on Intel to open Thunderbolt. If so, Apple could be using a future Ryzen APU as leverage for better prices.

    Intel has the money and engineers to copy AMD's design. But even if they started today, it would still be years before such a design came to market. And Intel won't be able to put illegal pressure on OEMs anymore, like they did against AMD in the Athlon 64 days - the fines probably wouldn't be worth it this time around. The best they can do is continue to pay for OEMs' ads as they do now. (That is standard business practice. A friend of mine has an HVAC business; when he sells a lot of A/Cs of a particular brand, that company buys an ad for his business proportional to the amount he sells.) At least Intel understands marketing, unlike AMD.

    Intel better plan for the future well. We need Intel. And AMD. And Qualcomm. And NVidia. And other CPU/GPU companies. When there is competition, we all win.

    1. Yet Another Anonymous coward Silver badge

      Re: Tough Times at Santa Clara

      Or Microsoft / Apple / Softbank just buy ASML.

      Hey Intel, nice chip design you have there, be a shame if you couldn't build a fab, wouldn't it!

    2. bazza Silver badge

      Re: Tough Times at Santa Clara

      There's also the question of ARM servers. There's a big chance the big data centre operators might go for ARM there, to save power. Data centres are where Intel's profit comes from these days. If that starts happening, Intel is in deep trouble.

      1. P0l0nium

        Re: Tough Times at Santa Clara

        There is no evidence that an ARM server is any more efficient than an x86 server. Quite the reverse, in fact.

        http://www.anandtech.com/show/8357/exploring-the-low-end-and-micro-server-platforms/17

        There's no evidence, but there is a LOT of marketing :-)

        1. Aitor 1

          Re: Tough Times at Santa Clara

          In my microelectronics experience, a custom/task-specific micro is 10-30x as efficient as a non-specific part.

          Of course, that is only for the processor... then you have memory, fabric connectors, etc. I still believe you could pull a 5x efficiency gain on HTTP or MariaDB or whatever task-specific workload you give the micro.

          It is a question of determining the instructions being executed by the micros, and optimizing the platform for those as much as possible.

          It does not make sense for the kind of operations I run, but it certainly would for Google, Amazon, etc.

          Of course, this creates additional problems... changing your stack becomes slow and expensive: your processors are custom-made for your stack!!

          I would prefer we don't go that route, as those are benefits that we, the non-titanic-sized operations, would not see.

          We currently benefit from the investments these companies make: we can also buy these processors and put them in our servers, or rent them... but if they all go custom, we would be left running "legacy" platforms, unable to compete.

          And if the cost of entering and exiting the market is huge, there would be no free market, but an oligopoly of internet companies.

        2. Steve Knox

          Re: Tough Times at Santa Clara

          Your linked article is not evidence. It's hypothesis.

          Specifically, it's a calculation based on published ratings and some tests (not linked or adequately documented) which is assumed to give a good estimate of performance per watt: "We are not pretending that our calculations are 100% accurate, but they should be close enough."

          There are also a lot of "probably"s and "assume"s elsewhere in that article.

          Evidence would be actually running specific loads on specific systems and comparing those results.

          The whole idea of "x is more efficient than y" is incomplete, lacking the vital qualifier "at z."

          1. P0l0nium

            Re: Tough Times at Santa Clara

            "We are not pretending that our calculations are 100% accurate, but they should be close enough."

            That comment was made regarding the question of whether the x86 Avoton or the x86 Xeon was more efficient. The ARM part was one third as efficient.

            Here's another hopeless but much hyped ARM server product:

            http://www.anandtech.com/show/9956/the-silver-lining-of-the-late-amd-opteron-a1100-arrival/3

            " The expected performance and power consumption are most likely not competitive with what Intel has available".

            And that's from Johan DeGelas - a long time ARM server cheerleader.

            1. William 3 Bronze badge

              Re: Tough Times at Santa Clara

              Why do you keep using the single source "AnandTech" as evidence for your claims?

              If you can provide multiple different sources then you rule out the possibility of bias.

              You are aware of bias I hope?

              Doesn't matter who the author is, it's the publisher that decides what goes in there, they have the final say.

              A few rounds of golf, and the advertising budget is brought up, Intel has a lot of money for advertising.

              I speculate, of course, but it's a possibility, one that you need to rule out by ceasing to rely on a single source for your claims.

      2. Alan Brown Silver badge

        Re: Tough Times at Santa Clara

        So far, when you crank up the ARM clock to match x86 speeds, the power consumption goes up to match too.

        Intel's been staving off ARM for some time by concentrating on power consumption but sooner or later they're going to lose.

    3. P0l0nium

      Re: Tough Times at Santa Clara

      Re: 4 small chips = 1 big chip ... If this were true then the world would have been full of multi-chip modules many decades ago: and it's not :-)

      And who says that 8 cores in 200mm² marks the best price-performance compromise??

      Makes you wonder why Intel invested all those billions in defect reduction, doesn't it??

      1. kain preacher

        Re: Tough Times at Santa Clara

        Re: 4 small chips = 1 big chip ... If this were true then the world would have been full of multi-chip modules many decades ago: and it's not :-)

        PowerPC is.

      2. bombastic bob Silver badge
        Linux

        Re: Tough Times at Santa Clara

        "And who says that 8 Cores in 200sqm marks the best price-performance compromise ??"

        is it even POSSIBLE for Win-10-nic to run 8 cores like that? What, with Micro-shaft's ridiculous licensing policies, etc..

        Intel should market desktop Linux and multi-core-ready applications to get CPU sales up. Just sayin'.

        As for competition, let Micro-shaft emulate all they want. A good native Intel architecture will outperform emulation any day. You can also think of it as "validating the standard".

    4. enerider

      Re: Tough Times at Santa Clara

      On Ryzen: I am one satisfied customer!

      AdoredTV on YouTube has a good summary of events: the point of the Infinity Fabric design is to exploit the fact that smaller dies give higher yields (you get more chips per wafer, which means lower cost per unit, and more working cores per wafer). For example, an 8-core Ryzen is 4 units tied together, ThreadRipper will be 8 of these tied together, and Epyc 16 (one rather large chip area) - but the main point is that the Zen architecture is shared up and down the line for everything, making development a lot cheaper for AMD. If this design holds up to what is asked of it, AMD can wheel out multi-core chips at a faster pace, throwing in more cores as needed.

      So yes: AMD this time around appear to have a good formula, and a solid plan for the CPU division, backed by a whole lot of engineering into a fresh architecture. Hopefully they get better on the marketing.

  9. kryptylomese

    LOL Requires CPU emulation

    This is not a problem for industry.

    Who cares about a dying crap operating system (Windows) that is tied to a particular CPU architecture?

    Linux and all its code, including that written for other *nixes, is portable.

    You can talk about legacy software that only runs on Windows if you want to, but if you have that in your enterprise, just continue to run it on your current hardware and replace it with something Linux-based going forwards.

  10. jbuk1

    Wasn't the x64 instruction set created by AMD? So why not use that rather than the old and redundant x86 instruction set - or am I getting the wrong end of the stick here?

    1. gypsythief

      I believe that the reason for this is that x64 is an extension of x86, not a drop-in replacement.

      As a result, x64 processors based on the AMD64 architecture can still run x86 software, in contrast to Intel's IA64 architecture (the infamous Itanic Itanium), which was 64-bit native and incapable of running 32-bit software.

      This does mean, however, that there is an unremovable dependency on the x86 bits of the processor.

      1. stephanh

        Also, 32-bit apps are still very common on Windows - in fact probably still more common than 64-bit apps, even when the OS is 64-bit.

        1. jbuk1

          I don't think I've seen many Windows apps that aren't released in 32- and 64-bit flavours for a long time.

          (Yes, I know MS still recommends you install the 32-bit version of Office. Or at least did, last time I checked.)

      2. Hans 1
        Headmaster

        in contrast to Intel's IA64 architecture (the infamous Itanic Itanium) which was 64bit native and incapable of running 32bit software.

        As much as I made fun of iTanic, it ended up being able to run 32-bit software ... and you are right, it was crapware ....

  11. Anonymous Coward
    Anonymous Coward

    Digital Equipment Corp did it once...

    My DEC Alpha workstations running NT4 all included a bit of kit called FX!32 that translated x86 binaries through a JIT translator into native Alpha code. It stored the results in a cache file so that subsequent executions didn't have to retranslate the same code. Translated programs ran at about 80% of the speed of native apps. It was such an important service that Microsoft included it in NT5/W2K - that is, until the Alpha was killed right around RC1.

    This was back in '99, two years after the release of MMX. I don't recall if it converted MMX instructions. And it appears as if patents on MMX and SSE might be the sticking point.
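
    (The caching trick is straightforward to sketch - the structures below are hypothetical, not DEC's actual format:)

      /* FX!32-style translate-and-cache: translate a block of guest x86
       * code once, remember the result keyed by guest address, and reuse
       * the native version on later executions. FX!32 additionally
       * persisted this cache to disk between runs. */
      #include <stdint.h>

      typedef void (*NativeBlock)(void);

      extern NativeBlock translate_block(uint32_t guest_addr);  /* slow path, elided */

      static struct { uint32_t guest_addr; NativeBlock native; } cache[4096];

      static NativeBlock lookup_or_translate(uint32_t guest_addr)
      {
          unsigned slot = (guest_addr >> 2) % 4096;   /* toy direct-mapped cache */
          if (cache[slot].native == 0 || cache[slot].guest_addr != guest_addr) {
              cache[slot].guest_addr = guest_addr;
              cache[slot].native = translate_block(guest_addr);
          }
          return cache[slot].native;   /* later runs skip retranslation */
      }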

    Still, Qualcomm might be able to force Intel to license to them under F/RAND rules, if they can convince a judge that Intel's ISA meets the criteria of an industry standard that requires licensing. Or they might withhold licensing future patents to Intel until they get a cross-license deal in return. I guess that's all up to the IP lawyers now.

    1. Steve Medway

      Re: Digital Equipment Corp did it once...

      The FX!32 process is exactly how CEMU emulates the Wii U (GPU shader instructions instead of CPU instructions, the PPC > Intel bit is 'pure' emulation).

      Nintendo are very, very litigious, and CEMU is big enough for the Big N to go after it, should it decide to. It hasn't, because it knows it would lose (not to the CEMU devs but to the IFF, which would back it to the hilt).

      Intel is just huffing and puffing, and I don't think the PC manufacturers will listen this time - Intel's pockets aren't THAT deep. "Intel Inside" worked by throwing money at the problem (illegally), not through threats of litigation.

  12. Doctor Syntax Silver badge

    Who needs who the most?

    Microsoft to Intel: Remember all those chips you sold on the back of Windows? We're going to start releasing the Intel versions of Windows about a year behind ARM before phasing them out altogether. We have some nice ARM to Intel emulators we could licence to you.

  13. Charlie Clark Silver badge

    Who bought Transmeta's remains?

    I seem to recall, but I could easily be wrong, that Transmeta's approach was vindicated by the courts so presumably anyone who licensed that would be reasonably safe.

    But I think there are other established methods of code interception and emulation, such as that used by Rosetta on MacOS, that could be employed in software with just some kind of hardware accelerator. For Microsoft the biggest hurdle will be the shitty x86 instruction set, which is out of patent and for which they probably already have a licence. For anything that really requires SSE and similar optimisations, where software emulation isn't fast enough, recompiling might solve the problem, especially if .NET is being used correctly. But somehow I don't think video encoders from 2002 are going to be high on the list of must-run software.

    Intel's defence would be to get an injunction on the sale of devices, but there are risks that it would be slapped down, or that it would be limited to devices sold in the US. That Qualcomm is a major supplier to the US Department of Defense could never influence any court, could it?

    Microsoft's risk is being shut out of the fast growing mobile market altogether.

  14. Not also known as SC

    Windows HAL

    In the NT4 days Microsoft used to make a big thing about the Hardware Abstraction Layer (HAL), in that a program compiled for one CPU could run on a version of Windows based upon a different CPU (unlike Unix, IIRC). Is the HAL no longer part of the Windows design?

    1. Doctor Syntax Silver badge

      Re: Windows HAL

      It's a long time ago, but AFAICR it wasn't anything to do with the instruction set; it was about presenting the rest of the OS with a consistent API to the hardware. The kernel is still compiled to the native code of the processor, but the developer doesn't have to worry about what sort of bus etc. the CPU can see. I'm not sure how this relates to the drivers; maybe their job is to present the HAL API.

      1. Not also known as SC

        Re: Windows HAL

        @Doctor Syntax

        Thanks for the explanation. It was a long time ago that I was involved with NT. Wonder who downvoted me for asking a question though...

    2. Hans 1

      Re: Windows HAL

      Who downvoted? I did. The following makes absolutely no ff'ing sense - so much so that I stopped reading and downvoted:

      in that a program complied for one CPU could run on a version of Windows based upon a different CPU

      1. HAL was there to present a single interface to drivers, on multiple platforms.

      2. You still had to compile the software/drivers for the target Windows/platform combination

      The question at the end is not really relevant since you got the rest all wrong. Still, a simple search yields quite some interesting stuff, right:

      http://lmgtfy.com/?q=windows+10+hal

      Obligatory Space Odyssey reference: "I'm sorry, Dave. I'm afraid I can't do that."

    3. Anonymous Coward
      Anonymous Coward

      Re: Windows HAL

      The HAL was designed to implement all the low-level OS functions so the rest of the OS would not need to change if the underlying hardware architecture changed.

      The HAL implements platform-specific details like I/O interfaces, interrupts, etc. There may even be different HALs for the same platform (e.g. x86 with one processor or multiple processors, or with different interrupt controllers).

      Even low-level drivers are usually built on top of the HAL too - i.e. they will use HAL calls to perform I/O and manage interrupts.

      One notable exception is the memory manager - it's not implemented inside the HAL, although it is of course an architecture-specific module.

      But everything - HAL, memory manager, kernel, drivers, applications - needs to be compiled for the actual CPU. The abstractions just simplify the code changes needed to support different platforms.
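
      (In code terms the abstraction is just an indirection table that the kernel calls through. Something like this - simplified to the point of caricature, and not the real NT interface:)

        /* Caricature of a HAL: kernel and drivers call through a
         * platform-specific operations table instead of touching the
         * interrupt controller or I/O ports directly. Note everything
         * is still compiled for the target CPU. */
        #include <stdint.h>

        struct HalOps {
            void    (*enable_irq)(int irq);
            void    (*disable_irq)(int irq);
            uint8_t (*port_in8)(uint16_t port);
            void    (*port_out8)(uint16_t port, uint8_t val);
        };

        /* one table per platform flavour, e.g. uni- vs multi-processor */
        extern const struct HalOps hal_up_x86, hal_smp_x86;

        static const struct HalOps *hal;   /* chosen once at boot */

        void kernel_mask_timer(void)
        {
            hal->disable_irq(0);   /* generic code never sees the PIC/APIC */
        }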

      1. John Smith 19 Gold badge
        Unhappy

        Re: Windows HAL The term you're looking for is "Virtual Machine"

        This technology is not exactly unknown.

        The AS400 had a "Machine Interface Layer" that implemented, in effect, an object-oriented instruction set. Compilers generated code for this machine, which was visibly documented.

        During its life the underlying processor went from a proprietary, completely undocumented hardware system to a PowerPC processor (i.e. the same as the AIX *nix boxes). Provided any custom software was compiled with the necessary options (basically a readable symbol table), you could copy the object code to the new processor, and on first run the machine would do the conversion the one and only time it needed to be done. That was in the mid 90s, when clock frequencies were an order of magnitude or two slower than today.

        It's not that you can't make efficient cross-platform OSes.

        It's that MS does not really want to. BTW, IIRC Windows 95 had 21 layers of abstraction between a disk read and getting the data back to your application.

        And while it continues its psychotic bromance with Intel, it never will.

  15. sitta_europea Silver badge

    "Intel welcomes lawful competition..."

    There's liars, damned liars, and general counsel.

    http://ec.europa.eu/competition/sectors/ICT/intel.html

    "... and we are confident that Intel's microprocessors, which have been specifically optimized to implement Intel's x86 ISA for almost four decades, will deliver amazing experiences, consistency across applications, and a full breadth of consumer offerings, full manageability and IT integration for the enterprise," general counsel Steven Rodgers wrote on Thursday.

    Has anyone noticed that Intel CPUs seem to get very hot when they're invited to, er, do anything?

    1. Anonymous Coward
      Anonymous Coward

      Re: "Intel welcomes lawful competition..."

      Has anyone noticed that Intel CPUs seem to get very hot when they're invited to, er, do anything?

      Most probably why you don't get Intel chippery inside fondleslabs... it would be worse than the recent Samsung fiery phablets....

  16. John Smith 19 Gold badge
    WTF?

    "always connected PC "

    WTF would anyone need that on a full-fat OS if you've spent the money for a mainstream processor?

    F**k right off.

  17. Anonymous Coward
    Anonymous Coward

    "Is the HAL no longer part of the Windows design?"

    Yes, it's still there. Windows is a modern hybrid-microkernel architecture and is fully modular.

  18. Fenton

    AMD

    An interesting one for AMD, though. They hold an x86 licence and have an ARM chip - could they legally bake x86 emulation into ARM?

    1. Anonymous Coward
      Anonymous Coward

      Re: AMD

      MS might be interested in an acquisition...

      1. toughluck

        Re: AMD

        Unfortunately, at least in theory, AMD loses its x86 license if it is bought by a third party.

    2. toughluck

      About baking x86 emulation into ARM

      No need to do that. AMD could simply manufacture a double quad-core: four x86-64 cores and four ARM cores, running on ARM exclusively most of the time, running x86 code emulated if needed, and switching on the x86-64 cores only when more performance is needed. The only problem is cost: I don't think such a CPU would cost less than $100, and that really precludes all low-end segments and much of the midrange, too.

  19. Volker Hett

    It'll be great, so true!

    As if x86 emulation ever worked wonders. See DEC Alpha and Intel Itanium for reference.

    1. toughluck

      Re: It'll be great, so true!

      For Alpha, it worked perfectly fine - it was already mentioned above in another thread (~80% of native performance).

      As for Itanium, when Intel was designing IA-64, their decisions made IA-32 very costly to emulate. Not even remotely comparable.

  20. GrapeBunch

    Zilog

    The earliest Intel competitor I remember is Zilog. Their Z80 (which powered my first computer, a QDP-100) was a better 8080. But I looked it up, and Zilog's greatest processor, the Z80000 (count the zeros: four), was not compatible with Intel's 32-bit instruction set. Just in case we're in why-spend-kazillions-on-litigation-when-we-can-buy-a-defunct-product-for-a-song mode.

    I've preferred Intel to AMD because of the perception of better energy efficiency. But reading this thread, I'm getting the impression that Intel is only more efficient when the processors are idle? My personal embarrassment: +1.

  21. toughluck

    Software emulation has already been done.

    Bochs did it to a lesser extent, and more recently DOSBox does full x86 emulation.

    If Intel wins this suit against Microsoft, does that make DOSBox illegal, too?

  22. fghdfgh

    Patents are only good for 20 years, so it's perfectly legal for Qualcomm and Microsoft to emulate Intel's 32-bit architecture. I bet that's why they're only supporting 32-bit processes in the emulator. The 64-bit instructions are still patented, as is SSE, so they can't add support for those yet.

    1. 520

      It was already perfectly legal; Bleemcast vs Sony Computer Entertainment had already set the precedent.

    2. toughluck

      The 64-bit instructions are owned by AMD, and I suspect AMD would be perfectly happy to license them to Microsoft.

      1. kain preacher

        I'm sure AMD would even be willing to give that tech for free in exchange for MS optimizing their code to run on AMD chips.
