Linus Torvalds pulls pin, tosses in grenade: x86 won, forget about Arm in server CPUs, says Linux kernel supremo

Linux kernel king Linus Torvalds this week dismissed cross-platform efforts to support his contention that Arm-compatible processors will never dominate the server market. Responding to interest in Arm's announcement of its data center-oriented Neoverse N1 and E1 CPU cores on Wednesday, and a jibe about his affinity for native …

  1. Anonymous Coward
    Anonymous Coward

    "Torvalds abandoned his commitment to civil discourse.."

    Actually, his post seemed quite civil, particularly by Linus standards. Does one "bullshit" in a statement now make it uncivil?

    1. DougS Silver badge

      Re: "Torvalds abandoned his commitment to civil discourse.."

      Saying a general concept "is bullshit" is very different from calling out an individual as he had in the past. If idiots (oops, I guess I'm as bad as he is!) are calling him out for this, he might as well retire because apparently agreeing with everyone and avoiding profanity is the only thing that will satisfy some people.

      1. randon8154

        Re: "Torvalds abandoned his commitment to civil discourse.."

Personal opinion: I've never been shocked by, or found particularly insulting, his way of talking.

        "agreeing with everyone and avoiding profanity is the only thing that will satisfy some people."

This is sadly the expected behaviour today. Not long ago, I had the misfortune of triggering a disproportionate reaction for placing one "bullshit" in a two-hour argument with some devs on their IRC channel (a widely used distro). The reaction wasn't natural - too pathetic to be true.

But it is useful for getting rid of disturbing questions...

      2. Captain Scarlet Silver badge
        Coat

        the only thing that will satisfy some people

        I don't know why but I would like a Linus vs Bill Gates edition of Celebrity Death Match.

That would satisfy me!

    2. diodesign (Written by Reg staff) Silver badge

      "his post seemed quite civil"

      Fair point: happy to tweak that.

      C.

  2. Anonymous Coward
    Anonymous Coward

    These days a lot of us self employed programmers do our dev work on Raspberry Pis.

    1. Charles 9 Silver badge

But hobbyist programmers and coders "at the coalface," so to speak, are two different worlds. Torvalds has a point that Intel has tremendous amounts of institutional momentum to its advantage, especially in the server world. I too have developed a healthy skepticism about new technology announcements (like post-NAND nonvolatile storage/memory), given how much has ended up vaporware or stuck in the "slow time" zone. Like him, perhaps my sentiment is best stated as, "Call me when you have actual product."

      1. Doctor Syntax Silver badge

        "Call me when you have actual product."

I'm sure that DEC had a similar attitude. It's the kind of attitude that lets someone else sneak up on you without your noticing until it's too late.

        1. Charles 9 Silver badge

          OTOH, jumping at any announcement that comes along can have you wasting money tilting at windmills. You lose either way.

        2. Andrew Commons

          DEC HALs

DEC were certainly users of hardware abstraction layers. They had one in VAX/VMS, I think; it was always instructive to go through the BLISS header files, looking at the comments, to see next year's models emerging.

          1. Fred Goldstein

            Re: DEC HALs

            VAX/VMS did not have a HAL; it was written for the VAX architecture. The early VAXen did have writeable control store, the PDP-11 instruction mode, and some other obscure features, but eventually they settled on the MicroVAX native instruction set. Maybe the later "OpenVMS" had some HAL features, but I left DEC before then. It was the Alpha chip that did interesting abstractions in hardware, and could be optimized for VMS, Unix, or NT.

        3. Phil O'Sophical Silver badge

          I'm sure that DEC had a similar attitude.

          Sun, too. They, at least, should have learned since they did it to DEC.

      2. Anonymous Coward
        Anonymous Coward

        "Intel has tremendous amounts of institutional [money]"

        "Torvalds has a point that Intel has tremendous amounts of institutional momentum to its advantage, especially in the server world"

Historically, that may have been a reasonable point. Is there any evidence it's still true? Don't take my word for it, and don't take Linus' word for it - look at the record of former Intel CEO Brian Krzanich (see below).

Intel has historically had tremendous amounts of dosh swilling around, and little corporate clue as to what to do with it (at least in engineering terms; obviously loads of it ends up in co-op marketing funds and "contra revenue" to subsidise "mobile x86" for phones and so on, and on failed ventures and weird acquisitions - not just the obvious like McAfee, but also projects such as WiMAX, and products such as the VxWorks RTOS, SIMICS system-level simulations, and the KAP source-to-source optimiser, and so on). Intel HQ actually owned a set of Arm engineers and an Arm implementation for a while, but didn't do much with it (a few people may have heard of StrongARM and its Intel follow-on, e.g. the IXP4xx comms processor range). But I guess most people won't have heard of them, and that in itself should tell you something.

        Many people will have heard of IA64 aka Itanic, which in prehistoric times was supposed to be the future of 64bit computing. But it was a dream that turned into a nightmare, and that should also tell you something.

As it turned out, the future of 64bit computing wasn't Intel's IA64 (Itanium), it was AMD's AMD64 *product* (which was not slideware, as IA64 was at the time). IA64 inevitably ended up as a footnote in the history of 64bit technology, and one day someone will write a book on how even Intel's cash mountain couldn't make IA64 succeed against AMD64 and Intel's own rapidly-released AMD64-compatible equivalents (something Intel HQ had repeatedly said in public wasn't going to happen, because it couldn't be done).

        And now, IA64 is dead. Which is fine by most of us. And that should tell even Linus that Intel's webscale money can't turn a pig's ear into a silk purse.

Meanwhile, there are far more ARM/Arm products around the world, running far more diverse applications on far more diverse OSes, than Linux/x86 ever will. What keeps Intel alive is IT departments and the corporate fascination with Windows.

Intel. The x86 company. Sell. (A share that's not been good enough for Intel CEO Krzanich to hold on to might not be a wise medium-term investment for anyone else either):

        https://www.fool.com/investing/2018/06/15/revisiting-intel-ceo-brian-krzanichs-huge-stock-sa.aspx

        1. doublelayer Silver badge

          Re: "Intel has tremendous amounts of institutional [money]"

          For most people, including IT people and developers, the specific processor type they have is not particularly important, with the main question being whether the chip can provide the performance they need. This is probably different for those doing kernel work, but above that, things matter significantly less.

If you ask a member of the public which ISA the processor in their device runs, they have no clue. If you ask a person who deploys code onto a device, they know what ISA is involved, but they probably don't know which company made it or which specific version it is (did the code I ran yesterday run on an Intel or AMD box? I don't know, because it didn't matter).

          A product is certainly necessary to get into this business, but AMD managed it when they didn't have much market share, and someone else with a good enough product can as well.

          1. katrinab Silver badge

            Re: "Intel has tremendous amounts of institutional [money]"

            In my workplace, the software we use calls for an AMD64 compatible CPU, so that's what we use.

Why do our software suppliers choose to build for the AMD64 architecture? Mainly because that's what most of their customers use.

        2. Anonymous Coward
          Anonymous Coward

          Re: "Intel has tremendous amounts of institutional [money]"

          Intel has historically had tremendous amounts of dosh swilling around, and little corporate clue as to what to do with it [...] And that should tell even Linus that Intel's webscale money can't turn a pig's ear into a silk purse.

          There is a lot of momentum behind use of x86 - which just goes to show how lazy a lot of organisations are. There is a competitive advantage to be had from being able to switch architectures quickly and easily, it's just that for a long time there's been no better architecture to switch to. I'm convinced that this has been the case for so long that it's been beaten out of the industry's mindset, and only now are there glimmers of signs that people are interested in looking around for something new.

I share your thoughts on how Intel has gone about its business, but I would like to cut them a tiny amount of slack on IA64. There is so much cruft built up in the world of x86 that we really could do with a wholesale clear-out and a fresh start. IA64 would have been a good opportunity to force that through and make us all happier. It didn't pan out that way, of course...

          Personally speaking I dislike how Intel have kinda parked VxWorks - it's a fine RTOS, I did some nice work with it, and could have benefited from some more investment to make it sing strongly on Intel's chips. AFAIK that didn't happen - Intel put resources primarily into making Linux work well on their multi-core architectures. Makes one wonder what's going to happen to Altera...

I'm interested in the language Rust, from Mozilla. It's suitable as a systems language (there's even an OS written in it). What I don't know is whether or not Rust is better standardised than C or C++ - specifically, can variables be trusted to be exactly the same on all platforms? I'm referring to C allowing an int to be any size, etc. My thinking is that a systems language that gives more tightly defined behaviour across more platforms makes architecture swapping less problematic.

          1. Michael Wojcik Silver badge

            Re: "Intel has tremendous amounts of institutional [money]"

I'm referring to C allowing an int to be any size

            No it doesn't. ISO 9899 requires int to be able to hold at least the integer values from -32767 to +32767. See 9899-1990 5.2.4.2.1, and similar in later editions of the standard.

            C gives great freedom to the implementation, particularly freestanding implementations; but it does impose some requirements.

A systems language which fixes the size of base types could be problematic for performance and compatibility. That's why C doesn't give the integer types a fixed size; even C's byte is implementation-dependent (and not necessarily an octet).

            1. Anonymous Coward
              Anonymous Coward

              Re: "Intel has tremendous amounts of institutional [money]"

@Michael Wojcik,

Whilst that's all true of C, pretty much the first thing anyone does when writing portable C is start using things like int32_t, etc. There's clearly a desire amongst programmers for a portable systems language, and yet they have to use coding discipline to achieve that with C / C++.

A systems language that looked after that for you - even if that was problematic for the compiler writer and didn't necessarily run like greased lightning on every single CPU - could be extremely useful. It'd be trading the opportunity for outright maximum performance on some CPUs against the time taken for devs to get code running as intended without having to examine the intricacies of the specific CPU they're putting it on. I sense these days that the latter is more important than the former. I don't know if that's Rust, but I'm interested enough to find out whether it could be.

              Rust itself is quite appealing; the fact that it catches so many common memory abuse bugs at compile time sounds like a great time saver.

              1. elip

                Re: "Intel has tremendous amounts of institutional [money]"

You may want to look into Go for a 'systems programming' language - easy and relatively painless cross-compilation. Rust catches memory abuse bugs, so you can waste your time dealing with other issues you never had to deal with in compiled C programs.

        3. the spectacularly refined chap

          Re: "Intel has tremendous amounts of institutional [money]"

          As it turned out, the future of 64bit computing wasn't Intel's IA64 (Itanium), it was AMD's AMD64 *product* (which was not slideware like IA64 was at the time)

Intel released Merced in June 2001. The first AMD64 chips were released in April 2003. You make some good points but have chosen to base your whole argument on a factual inaccuracy.

          1. Anonymous Coward
            Anonymous Coward

            Re: "Intel has tremendous amounts of institutional [money]"

            "You make soem good points "

            That's what we're here for.

            "Intel released Merced in June 2001.The first AMD64 chips were released in April 2003. You make soem good points but chosen to base your whole argument on a factual inaccuracy."

Rather depends on how you want to define "released" vs "slideware/relevant/etc." (and arguably on my oversimplification, for which I apologise). More detailed info can be found via e.g.

            https://en.wikipedia.org/wiki/IA-64#Itanium_2:_2002%E2%80%932010

When did IA64 become relevant in usable, affordable, competitive offerings (in systems with applications, stuff that volume customers were willing to pay for with their own money), rather than Intel/HP slideware? Merced never really was; things did briefly improve later, but by then AMD64 was well established.

            AMD64 architecture was announced in 2001 and started shipping in relevant systems (which were fundamentally x86-compatible) around 2003; look it up. Once that happened, the writing was pretty much on the wall for IA64.

            Don't take my word for it, ask AMD's lead architect for K8, who left AMD to later work at Apple and elsewhere, and more recently has been recruited to sort out chip design and manufacture problems at Intel. In between times he also co-architected HyperTransport and similar stuff.

            https://venturebeat.com/2018/07/16/why-rock-star-chip-architect-jim-keller-finally-decided-to-work-for-intel/

    2. Daniel von Asmuth Bronze badge

      x86 is broken windows

      Is it time for an ARMy of Archimedes desktops?

      1. Korev Silver badge
        Joke

        Re: x86 is broken windows

        That was Acorny joke...

        1. Danny 14 Silver badge

          Re: x86 is broken windows

surely that's a RISCy proposal?

  3. Anonymous Coward
    Anonymous Coward

Currently I develop “at home” on ARM64 and deploy on x86 in the “cloud”.

    No problem at all being a different architecture.

    But now that AWS has ARM instances, I’ll be able to be ARM-everywhere.

    Personally I’ve had no respect for Torvalds since his comment that ARM SoC designers - of which I was once one - should die. I think he’s a bit of a git but I do not believe he should die for that. I just wish he’d shut up.

    1. Paul

      I do welcome the competition in the market, as I think there are many workloads where, say, an Intel Atom would have sufficient performance, and so an Arm would be too.

      But if you look at the price/performance ratio of an AWS instance running Arm, it's not really different from an x86 server.

I tried to deploy some of $WORK's requirements on an Arm/Graviton instance, but other than the simplest service, I got into dependency hell, with some packages simply not pre-built for Arm.

      1. Doctor Syntax Silver badge

        "Intel Atom would have sufficient performance, and so an Arm would be too."

        Intel have constraints there that ARM doesn't: they don't want Atom eating their other lines' lunch.

      2. Phil Endecott Silver badge

> I got into dependency hell, with some packages simply not pre-built for Arm.

        I’d be interested to hear what distribution you were using, and what languages.

    2. Gordan
      Flame

      @AC 02:49

I agree with you on the first part, but the point you make about Linus' comment on ARM SoC manufacturers is misguided. He was absolutely right in what he said, in the context it was in. The simple fact is that various SoC manufacturers produced appalling code to make the kernel work on their SoCs, hardly ever bothered to upstream it, and were responsible for a situation where there was a total of one kernel version that ever worked on a particular SoC, which was then immediately orphaned and would never receive any further fixes for issues later discovered in that branch.

You don't have to look far for evidence of this - just look at what version of the kernel various embedded devices are using, and look at the last triplet of the version number. It is never even the latest mainline version for that kernel series, let alone something that is supported for any length of time after the device is shipped.

      Fire icon because: "Worked in dev. Ops problem now."

      1. Dan 55 Silver badge

Perhaps if he hadn't said "portability is for people who cannot write new programs" and hadn't had a go at MINIX, he wouldn't be in this situation. It took a lot of work to convert Linux from an i386 OS to one that works on other CPUs; one day he might work out that more modular kernels aren't such a bad thing either.

        1. Gordan

          Again, different context. The problem with the ARM SoC manufacturers is that they produced terrible code that just about compiled, threw it over the fence at device manufacturers and walked away. If they stood by their product enough to get their code cleaned up and upstreamed, they would have been welcomed rather than berated.

It is only now, years later, that things have improved, because of pressure from the device manufacturers. They realized that their customers want security updates for their devices, and that meant they started using the best-supported SoCs rather than the most cost-effective ones. That finally kicked off competition among the SoC manufacturers and got them to own their code, at which point it was cheaper in the long term to actually do things right, clean up their code, and get it upstreamed.

          The original point stands, though - the SoC manufacturers at the time of Linus' infamous rant absolutely deserved his ire.

  4. M.V. Lipvig

There is something to be said for this. A company I used to work for would develop and test circuit test system upgrades on something other than Windows machines, slap the upgrades into the operational environment, and then we who had to use these systems on Windows machines spent about a week trying to do our jobs on crashing test platforms until they worked out the bugs. It took about two years to get them to start testing the software on Windows machines, after which rollouts became seamless. If you aren't developing the apps on the same kind of machine your users are on, test it on the same kind before rolling it out.

    1. oiseau Silver badge
      Pint

      If you aren't developing the apps on the same kind of machine your users are on, test it on the same kind before rolling it out.

      Indeed ...

      Really sounds like basic common sense to me.

      And I'm not a programmer.

Have a beer and a good weekend.

      A.

    2. Phil Endecott Silver badge

      > If you aren't developing the apps on the same kind of

      > machine your users are on, test it on the same kind

      > before rolling it out.

That’s obviously true, but in the context of the article the difference between Windows and Linux is much greater than the difference between Linux/x86 and Linux/ARM64.

      1. Anonymous Coward
        Anonymous Coward

“That’s obviously true, but in the context of the article the difference between Windows and Linux is much greater than the difference between Linux/x86 and Linux/ARM64.”

        Is that correct?

Linus was looking at things from a low-level perspective - how can he obtain stability, performance and security from the architecture to deliver an OS, or the tools to produce that OS - rather than from the application layers, although there is some overlap.

x86 has a large install base and significant business investment to drive it, particularly as complexity has increased and halted hardware development on other platforms (i.e. Itanium) or significantly slowed it (SPARC and MIPS). Intel invests a lot of money in compiler and library improvements and in documenting its chips to allow others to offer solutions in this space. In the ARM market, many of the performance issues are addressed by co-processors, which makes supporting the SoC that little bit more difficult, as many of the co-processor interfaces are not well supported, making them difficult to exploit at the OS/tool-chain level. It can be done, but it often adds 2-10 years to reach the maturity/stability of features delivered on x86 in a few years.

ARM has focussed on low power, and while that provides its own set of challenges, getting from ARM's current performance level to x86's performance in the server space will be challenging when we are very close to the end of the current process miniaturisation cycle (2-3 generations at each of 7nm and 5nm, with no clear way beyond that). Many of the advances ARM is making in the server space have been tried before (most notably SPARC's high core counts), and ARM has to improve on the cache coherency and interconnect issues that limited those solutions.

People are also forgetting Linus's history - his involvement in Transmeta, his publicly stated preference for POWER, and the fact that he has been directly or indirectly involved in cross-platform development for a significant part of his career.

There are niche markets for ARM in the server space - the question is more whether it can evolve beyond those niches and challenge the market share of SPARC or POWER in the general-purpose space. I suspect we will know, based on Apple's success moving from x86 to ARM, before we see other vendors produce competitive ARM server solutions, because of the resources required to achieve success.

ARM will continue to have considerable market success, even in data centres, where it is likely to replace solutions using non-x86 architectures, particularly MIPS. But it won’t replace x86 servers, and if something does replace x86, I suspect we haven’t seen it yet (i.e. quantum).

        1. ROC

          Intel server security/performance???

Security, performance, and Intel do not seem to go together so well these days, considering Spectre and Meltdown (one of which Google just advised probably can never be fully fixed - I don't remember which), and the fixes available so far are big performance hits, which is especially impactful for servers.

    3. damiandixon

      Any professional programmer will test on all the supported targets. If you have set up your build & test system correctly then the overhead should not be unmanageable.

When I did development for Java I also tested on the target JVM (developed on the Sun/Oracle JVM, deployed on the IBM z/OS JVM - with some interesting bugs, too).

      1. Anonymous Coward
        Anonymous Coward

        "When I did development for Java I also test on the target JVM (developed on the Sun/Oracle JVM, deployed on the IBM z/OS JVM with some interesting bugs too)."

        Out of interest, where did the bugs lie? OS level? Library level? Performance issues with platform X? Method changes to do things a different way on certain platforms or workaround library issues? Genuine bugs in your code?

My experience supporting older platforms is that as they age, you find a lot of performance quirks in the older code. For example, if you are running a production Unix platform with a Linux development platform, a version upgrade across the platforms often gives you a 5-10% performance boost on x86 that isn't matched on the non-x86 platform, even though the hardware remains similar (same CPU, just memory or storage changes) - e.g. by upgrading math libraries with a new application release. Speaking with the vendor about the increase, you realise they didn't do anything specifically to improve performance other than upgrade their tool chain.

        1. matjaggard

I did the same, and for us the bugs split 80/20 between the higher and lower layers at every level: 80% in the Java code, 20% below that. Of those, 80% in the JVM, 20% below that. Of those, 80% in the OS, 20% in the architecture.

      2. Michael Wojcik Silver badge

        Any professional programmer will test on all the supported targets

        That's infeasible, unless you endorse the Texas Sharpshooter Fallacy - that is, you have to define "supported targets" as "things I test on".

One product I work on runs on Windows, a number of Linux distributions, and a collection of UNIX platforms. We test on dozens of machine configurations. That's nowhere near the full list of possible "supported targets". Should we tell customers "yes, we have an AIX product, but it only runs on AIX VMs hosted on this particular hardware configuration"? "Yes, we support SUSE Linux for z, but you have to run it on a z machine of this class - nothing bigger, no coprocessor systems attached, no Sysplex..." Most of our Windows hardware is supplied by a handful of vendors - do we not support Windows systems running on hardware from other vendors?

        Most commercial software has to be tested on no more than a small subset of possible customer configurations - even software for single-vendor platforms like IBM i and z, or Apple's iOS. (iPhone app makers: how many different iPhone models do you test on?)

  5. Gene Cash Silver badge

    > until we actually see widely available hardware that people actually can use for development and deployment

    You mean like the half dozen Raspberry Pi I have doing various things around the house?

    I think multiple active different architectures makes the kernel more robust and avoids the monoculture that I think is hurting Microsoft and Apple.

    Thinking "Everything's a IBM 360/DEC VAX/PDP-11/x86/SPARC" is bad.

    1. Dabbb Bronze badge

      He means ARM workstations or laptops.

You can develop on a Z80 if you want to; that does not make Linus's argument any less valid. There's simply no rational reason to run ARM servers.

      1. -tim
        Coat

There's simply no rational reason to run ARM servers.

Unless you want an extra security layer that consists of "something that isn't x86". On the ARM architecture, some types of common x86 attack are much more difficult to implement; some return-oriented-programming hacks are extremely difficult. Some architectures use a hardware return stack, so return addresses aren't on the data stack and can't be tampered with that way at all - a very common x86 attack.

        1. Dabbb Bronze badge

Re: There's simply no rational reason to run ARM servers.

There's SPARC, MIPS and POWER if that's your reasoning; some of them are even cheaper than comparable x64 servers.

But none of them has been able to compete with x64, so what makes you think ARM will be any different?

          1. Gordan

Re: There's simply no rational reason to run ARM servers.

ARM is different because it has worked its way up from being the underdog: from embedded devices, where it is unstoppably ubiquitous, through laptop-grade hardware, and more recently up to server-grade hardware. I have an ATX form factor ARM board under my desk with 128GB of RAM and a couple of PCIe x16 slots, with clock-for-clock performance per watt on server loads that is about double that of a reasonably equivalent Xeon, at a very comparable cost, too.

Previous competitors like SPARC, POWER, MIPS and Alpha fought with very little underlying deployment base. Unlike those, ARM is _everywhere_, and is not going away. Intel tried and failed to compete in the market where ARM dominates. ARM is eating its way up toward server deployments like creeping doom.

          2. Down not across Silver badge

Re: There's simply no rational reason to run ARM servers.

But none of them has been able to compete with x64, so what makes you think ARM will be any different?

Compete by which metric? That's a pretty sweeping statement without knowing what someone's requirements might be.

        2. mark l 2 Silver badge

Re: There's simply no rational reason to run ARM servers.

Well, unless you're a big player who buys servers by the truckload - your Facebooks, Googles, etc. - I expect they have seriously looked at replacing Intel with ARM from a financial and supply perspective. Using ARM means you're not stuck relying on one manufacturer to supply CPUs, one which can increase prices as it likes or might have supply issues.

          1. Anonymous Coward
            Anonymous Coward

@mark l 2 - Re: There's simply no rational reason to run ARM servers.

The same argument might be made about the MS Windows + Office monoculture. We all agree it was, is and always will be bad, but we've adapted.

            1. Charles 9 Silver badge

Re: @mark l 2 - There's simply no rational reason to run ARM servers.

              The problem with Office is compatibility. Everyone uses it, and no one except Microsoft can really speak 100% Office correctly (read: bad format conversions and inability to port critical scripts). The Office problem is a lingua franca problem, meaning monoculture of some form is a necessary evil simply because it's required for effective communication; otherwise things get "lost in translation".

            2. Anonymous Coward
              Anonymous Coward

              Re: @mark l 2 - Tere's simply no rational reasons to run ARM servers.

              "The same arguments might be presented for MS Windows + Office mono culture. We all agree it was, is and always be bad but we've adapted."

              --------------------

              In a large part, we've adapted by moving away from Windows and Office. I haven't run MS Office on one of my own computers for a decade or more, and have no trouble moving files back and forth between my work computers (mostly with MS Office on the non-server machines) and my own computers.

              After reading the Windows 10 EULA, I did the sensible thing with the first computer I had to buy with Windows 10 installed and no rollback - I reformatted the hard drive with boot partitions for three versions of linux (current working, previous working, experimentation) and a data partition. This was inevitable, given the total mandated insecurity and total lack of privacy with Windows.

              I will never pay for a computer that is totally controlled by Microsoft.

              If I ever find something that I absolutely must run, and which absolutely needs Windows, it will be in a tightly secured VM behind a very restrictive virtual firewall, if it is allowed to talk to the outside at all... and it will have no access to my main data partition.

              If that cannot be done, for any reason, it will be a sacrificial machine used for only that software, and behind a separate hardware firewall on an isolated network. Any traffic will be funnelled safely past the local network on a VLAN, or in a VPN tunnel established between the firewall and the outside world.

              It's the only way to be sure.

              1. Philippe

                Re: @mark l 2 - Tere's simply no rational reasons to run ARM servers.

                I thought "nuking it from space" was the only way to be sure?

                1. Charles 9 Silver badge

                  Re: @mark l 2 - Tere's simply no rational reasons to run ARM servers.

                  Not unless you're dealing with an Andromeda Strain: nukes would only make it stronger.

          2. Pier Reviewer

            Re: Tere's simply no rational reasons to run ARM servers.

            “Using ARM means your not stuck with relying on one manufacturer to supply CPU's who can increase prices as they like or might have supply issues.”

            ARM don’t produce CPUs. They licence their tech to other people who usually modify the base design and get it fab’d. In other words, you can’t necessarily swap one ARM chip out for another. So yes, with a server grade CPU in reality you probably are stuck with one supplier.

            Linus is talking about *servers* here, not general purpose computing. It’s a very different world. Userland devs don’t care about architecture; it’s abstracted by the kernel. Ergo kernel devs do care about architecture, as they’re the boys and girls doing the abstracting. ARM’s relative addressing tends to make large blobs such as kernels even larger, for example. You need trampolines all over the shop, especially for read+write data (unless you think marking instruction pages as writable to keep the data close by is in any way sane...)

            Then there’s the issue with JIT. Incoherent instruction and data caches make it expensive.
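            To make the cache-coherence point concrete, here's a minimal sketch of the extra step a JIT must take on ARM (assuming POSIX mmap; `__builtin___clear_cache` is the GCC/Clang builtin for exactly this; the function name is mine, not from any particular JIT):

            ```c
            #include <stdint.h>
            #include <string.h>
            #include <sys/mman.h>

            /* Copy freshly generated machine code into an executable buffer.
             * On x86 the instruction cache snoops stores, so the flush is a
             * no-op; on ARM the I- and D-caches are not coherent, so the
             * explicit flush is mandatory before the code may be executed. */
            void *install_code(const uint8_t *code, size_t len) {
                void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (buf == MAP_FAILED)
                    return NULL;
                memcpy(buf, code, len);
                /* Roughly: D-cache clean + I-cache invalidate on ARM/AArch64. */
                __builtin___clear_cache((char *)buf, (char *)buf + len);
                return buf;
            }
            ```

            That maintenance sequence (plus the barriers it implies) on every code-generation event is the cost being referred to above.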

            I like ARM (I prefer PowerPC for RISC, but that’s a subjective opinion). I hope it grows in the desktop/laptop market in particular. However I can understand Linus’s opinion on *servers*. It’s a very specific market he’s talking about. He’s not talking about desktops/workstations etc.

      2. Doctor Syntax Silver badge

        "Tere's simply no rational reasons to run ARM servers."

        And are there any rational reasons not to?

      3. Anonymous Coward
        Anonymous Coward

        "You can develop on Z80 if you want to, that does not make Linus's argument any less valid. Tere's simply no rational reasons to run ARM servers."

        This is completely backwards.

        If we can get ARM servers that don't have invisible privileged cores that cannot be monitored or validated, it may become possible to run on machines that we can trust. The hidden service processors in Intel chips mean you can never be sure your hardware is not compromised.

        Moving to servers that are the hardware equivalent of 'open source' where everything can be inspected and validated is something far more likely on an ARM implementation than Intel.

      4. Steve Todd

        "Tere's simply no rational reasons to run ARM servers."

        Other than the fact that you can get a given amount of CPU horsepower for less money and using less power?

        For many workloads those are compelling reasons. That's how x86 got its foot through the door. It wasn't very good, but it was cheap and fast enough.

    2. vtcodger Silver badge

      DEC built their business on excellent machines that were substantially cheaper to deploy than mainframes. And the reason they aren't around any more is that PCs were substantially cheaper than DEC's offerings and eventually became as capable as VAX and PDP and DEC's other offerings. It was pretty obvious even in 1990 that, barring a miracle, DEC was doomed. If ARM eventually becomes as capable as X86 and offers substantial cost savings, it'll eventually come to dominate server space, and workstation space, and every other space. But it's far from obvious that ARM can/will be much cheaper/better than X86 in the very long run.

      And for this year, and next, and the year after, Torvalds is right. For the time being, all other things being equal, x86 platforms are a bit less risky from a business point of view than ARM.

      I'm no fan of x86, BTW. It'll please me no end if that shambles is eventually replaced by something less ugly.

      1. Warm Braw Silver badge

        DEC built their business on excellent machines

        And software development productivity: it was generally far faster to write code in a VMS environment than in an IBM shop, and networking and clustering not only facilitated deployment, but also compensated for the relatively small range of hardware. For networking and clustering these days read "cloud", but Microsoft's development tools are still generally very good and they'll probably keep x86 flying for some time yet, though the day will finally arrive when .NET developers, at least, aren't going to care much about the target architecture.

  6. Mikel

    Far from the roots of Unix have you fallen dear Linus

    The whole point of Unix was that monopolistic hardware makers would lock in their customers' data (and programs... programs are data), making them hostage to the vendors' sales team (and the vendors' inevitable collapse). The relevant decision was that the end user's commitment was to his own data; the data is the business case for the technology. The user owns his data, and can choose to manipulate it only with tools he can take anywhere. And so C was invented in such a way that a microscopic bit of compiler could be hand-built on whatever new architecture you wanted to adopt, and you could use that to build an optimizing compiler from the plain-text source. After that you could compile all of your common utilities from text, and if the system didn't offer the desired features you could port the kernel over and run on that.

    Migration. The end user's data and business logic belongs to them, and letting them maintain ownership and control of it is -the point-. Linux's raison d'être. Otherwise we could just use whatever. And lose our data over and over - as in the days of yore when we wore an onion on our belt as was the fashion at the time.

    /Wow it has been a long time since I've been here.

  7. JLV Silver badge

    interesting.

    Nowhere near clever enough about hardware to opine about it. But, from working with SQL at an ANSI-SQL level, being forced to think beyond your particular underlying flavor actually is beneficial in terms of abstraction and code quality. If you're going to do cross/multi-platform, it behooves you to unleash the full fury of your integration tests on whatever system your dev team is NOT using. I've seen this before: wandering off your dev RDBMS for your QA RDBMS means a very low rate of platform-specific bugs. Doing QA on the DEV platform => deadly.

  8. A Non e-mouse Silver badge

    First They Ignore You,

    Then They Laugh at You,

    Then They Attack You,

    Then You Win

    1. DavCrav Silver badge

      "First They Ignore You,

      Then They Laugh at You,

      Then They Attack You,

      Then You Win"

      You say this, but this is survivor bias. There are far more concepts and ideas that don't get past stages 1 and 2 than make it to stage 4. For example, Nike's new shoes that need an app to do them up. I don't need to attack them for it, because I'm still at the laughing stage. And I can confidently predict that I really, really hope they don't win.

      See also, IoT locks for your front door. That's a stage 3 idea. It's at the full-blown attack stage because enough people decided to buy one that it's now an issue. In the far future, they will reach stage 4, but not until they are much better.

      1. cat_mara

        Speaking of those Nike shoes...

        ... I bet "my left shoe won't even reboot" was a phrase you never expected to see outside of an Onion article, right?

        1. Fred Goldstein

          Re: Speaking of those Nike shoes...

          Let's destroy the metaphor completely. Nike's job will only be done when they put their self-lacing technology onto boots. Only then will the bootstrap process really involve bootstraps.

          1. Danny 14 Silver badge

            Re: Speaking of those Nike shoes...

            and also a time you dont want GRUBs anywhere near your bootstrap.

          2. jelabarre59 Silver badge

            Re: Speaking of those Nike shoes...

            Let's destroy the metaphor completely. Nike's job will only be done when they put their self-lacing technology onto boots. Only then will the bootstrap process really involve bootstraps.

            "Bootstraps" are the tabs ('straps') on the sides or back of boots to assist in pulling them on, not the laces (hence the Horatio Alger pre-meme meme of 'lifting yourself by your own bootstraps', a phrase likely never used in the actual books).

            Bootstrapping bootstraps would likely involve boots that put themselves on your feet. (And why does that put me in mind of Steve Martin's routine "The Cruel Shoes"?)

  9. StargateSg7 Bronze badge

    I've looked at over 100 different CPU/GPU/DSP architectures in 30+ years of programming, and what I DID was create one single CUSTOM VIRTUAL INSTRUCTION SET and compile all my C/C++, Pascal or Basic source code to that Virtual Instruction Set, which includes ONLY the most basic CPU/GPU/DSP instructions and Bit-Widths (i.e. a Virtual RISC machine), and then let my compiler's back-end cross-compile processor de-construct that virtual instruction set to whatever chip-specific or hardware-accelerated instruction set my make file and compiler flags are set to, be they x86, ARM, MIPS, SuperSPARC, A12, 68000-series, 6502, Z80, SHARC, IBM Power7/8/9/10, DSP, MCU, MPU, FPU, GPU, ASIC, even a low-level cheap PIC!

    This means by using ONE SINGLE make file, I can set my compiler output to ANY 8-bit, 16-bit, 32-bit, 64-bit and 128-bit CPU/GPU/DSP chip I want and my compilers AUTOMATICALLY optimize the Virtual RISC Instructions I output to very-processor-specific instruction sets. Since ALMOST ALL chips have the same TYPES of BASIC assembler instructions, I've been able to keep a library of simple, pre-defined and HIGHLY OPTIMIZED assembler sub-routines which substitute for my Virtual CPU/GPU/DSP instructions. Any larger bit-widths or media-specific instructions NOT available on the target system, get truncated/merged downwards to fit on an automated basis. I've been able to compile and run a TEXT-based "windowing" system on an 8-bit 6502 and a 16-bit 80286 using the SAME source code designed for our custom 128-bit GaAs chip because the cross-compiler automatically takes care of the limits of the destination processor and MAKES THINGS FIT into available RAM/DISK and CPU speed by changing graphics and windowing into lower resolutions, bit-depths and even text-mode if necessary! It even takes care of Flat mode and Segmented Memory, unusual file/disk access systems, real-mode vs Virtual mode switching, hard interrupt vs cooperative processing, IP V4/V6 local network and global communications, and other quirks of 1970's era to 2010's era common CPU/GPU/DSP chips!

    The earliest Pascal and Fortran compilers used to do this. They originally output p-Code which was a virtualized instruction set which was then de-constructed into chip-specific assembler by the back-end compiler on an at-runtime basis, or as a final executable file depending upon your compiler flags. I do the same thing BUT I have highly optimized back-end assembler for over a 100 CPU/GPU/DSP chips which means my source code is FULLY multi-platform capable with high-end real and virtual memory management, virtualized desktops, communications, file and disk handling, windowing, graphics, database, toolbars and back-office objects.

    As a new chipset comes along, I merely make a Textfile-based Lookup Table (LUT) which maps my virtual instruction set to the optimized and error-checked-and-error-trapped chip-specific assembler code. And since I only use a basic set of 150 virtual instructions specific to CLASSES OR TYPES of chips, such as a general-purpose single-to-multi-core CPU, a GPU or a DSP-only chip, it takes me barely a week or two to add the NEW processor to my cross-compiler, which handles C/C++, Pascal and Basic as the front-end source code.
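    For anyone curious what the p-code idea boils down to, a toy version fits in a few lines of C (the opcodes and names here are invented for illustration, not the poster's actual 150-instruction set):

    ```c
    #include <stdint.h>

    /* A tiny virtual stack machine: the front end targets only these
     * opcodes, and each real CPU needs only a small back end for them
     * (here, an interpreter stands in for a code generator). */
    enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

    int32_t run(const int32_t *prog) {
        int32_t stack[64];
        int sp = 0;                       /* stack pointer */
        for (;;) {
            switch (*prog++) {
            case OP_PUSH: stack[sp++] = *prog++;               break;
            case OP_ADD:  sp--; stack[sp - 1] += stack[sp];    break;
            case OP_MUL:  sp--; stack[sp - 1] *= stack[sp];    break;
            case OP_HALT: return stack[sp - 1];
            }
        }
    }
    ```

    The program {OP_PUSH,2, OP_PUSH,3, OP_PUSH,4, OP_MUL, OP_ADD, OP_HALT} evaluates 2 + 3 * 4; a real back end would map each opcode to native instructions instead of interpreting it.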

    .

    1. fnusnu

      You're Steve Gibson and I claim my £5

      1. TKW

        I up-voted you, but he'd never use anything other than each platform's native assembler

    2. -bat.

      well done

      you've invented llvm :-)

      (impressive achievement for one person though, has to be said!)

      1. StargateSg7 Bronze badge

        Re: well done

        I should note there are over 120 Hardware/Software Engineers working on the OPTIMIZED CPU, GPU, DSP, MCU, FPU, and yes even PIC (Programmable Interrupt Controller) back-ends, which have MAPPED the chip-specific instructions to EACH of my custom Virtual Instructions.

        I (and ONLY ME!) did the ENTIRE FRONT-END compiler which outputs the Virtual Instruction Set from C/C++, Pascal, Basic and YES even Fortran Source Code. The engineering teams created HIGHLY SPECIALIZED and COMPLETELY OPTIMIZED chip-specific instructions that MAP-OUT to and MATCH the intent of the virtual instructions. The text editor is a standardized Brief-like one, which many IDE systems tend to use, so I didn't have to worry about creating the text-editing part; I just needed to hook the Brief-like text editor into my middle-ware parser and compiler.

        The ONE KEY part of all our code is that EVERY sub-routine uses Try-Exception-based error trapping so at the very lowest of hardware levels, our code will ALWAYS output a proper signed integer error and/or status code and a multi-lingual error and/or status string to a SINGLE standardized token-handling library which is automatically trapped and error-handled in upper layers. This makes our output code almost bullet-proof!

        No weird crashes or pop-up windows, as we design EVERYTHING to be error-trapped and handled. There is a 3% to 5% speed penalty but we care more about runtime-safety than pure speed. We can always throw more processors and network nodes at a problem if it needs it. In fact, we have tested the compiler out to "Critical Systems" scales, which allows its use for Nuclear Systems, Aerospace Flight Control and Large-Scale Transport and Critical/Medical/Mass Machine Control applications. Multiple ISO and Mil-Spec standards are adhered to for the UTMOST in long-term uptime and CRITICAL SYSTEMS SAFETY, where WE DESIGN FOR a "Fail-Gracefully" mindset for ALL HARDWARE-level and upper-level Communications and Application Layers.

        This has worked for over 15 years and it now takes the parent company mere weeks to do large-scale development for use on multiple platforms versus BEFORE that 2005 period, where the average project took TWO to FIVE YEARS to complete!

        Again, the reasoning BEHIND THIS is creating long-uptime software (i.e. DECADES without a reboot!) that reacts GRACEFULLY to hardware and software failure to such a level that we can, if necessary FAIL-TO-SAFE-STOP and/or MITIGATE-AND-CONTINUE-AT-SAFER-LEVELS for EVERY part of our code library. Our code is so safe we can drive a car/truck at beyond Level-5 AND fly a space shuttle SAFELY AND FULLY AUTONOMOUSLY !!!

        AND because we compartmentalize everything, we can actually MATHEMATICALLY PROVE and RESTRICT sub-routine inputs and outputs to specific acceptable values and ranges on an automated basis for all the lawyers and safety engineers who would like to do their due diligence on our systems.

        So again, ONE PERSON did the front-end compiler, but MANY PERSONS did the optimized and HIGHLY ERROR-CHECKED chip specific code! The parent company has technology many people would find utterly STUNNING! How many companies can create 128-bit Gallium Arsenide Substrate combined-CPU/GPU/DSP superchips that run at 60 GHz up to TWO TERAHERTZ? I would say a very very FEW...less than ten worldwide organizations I would say can do that!

        .

        .

        1. StargateSg7 Bronze badge

          Re: well done

          Now of course, WHY this type of programming is such a big deal in the Intel vs ARM sphere is that it REALLY SHOULD NOT MATTER which CPU, GPU or DSP you develop for!

          Your KEY ISSUE always SHOULD BE --- What does my software and hardware actually do and WHY do I need to put my specific software on a specific chip?

          The questions to ask:

          1) Does my application require so much processing that I MUST USE a desktop class or server class CPU?

          2) Is my application sensitive to local power consumption level?

          - i.e. am I running this application on machines or in geographic locations where local power availability is an issue?

          3) Does my Application require uptime on a 24/7/365 basis (i.e. Six Sigma 99.9999% uptime performance)

          - this is typical for places like large industrial plants, Hospitals and other emergency services where downtime could cause injury, death or severe damage to connected systems OR cause untold financial losses if the application is unavailable for full-on use!

          4) Is there a physical limitation where my application MUST run on a device that is small in actual size and thus requires lower power usage, lower continuous uptime, intermittent operation or smaller amounts of data transfer/usage?

          5) Is my application considered a Critical System (i.e. life and death type of system) or is it more consumer or low-intensity use?

          When you answer THOSE points above, your source code SHOULD be able to be made flexible enough to SCALE UP or SCALE DOWN (i.e. in terms of using more or less of available cores and local processing speed) ......AND..... SCALE-OUT or SCALE-IN (i.e. in terms of using MORE OR LESS numbers of network nodes, entire single or multiple CPU's, GPU's and DSP's)

          This means YOU NEED TO BE AWARE OF whether or not your application can use or even NEEDS to use single or multiple threads AND/OR single or multiple cores, or multiple CPU's, GPU's, DSP's and networks thereof!

          If you're doing mobile gaming, I highly doubt you need to render graphics for 4K/8K video display when 2.7K or even 1080p is good enough! However, if you're doing 10,000 file renders of multi-resolution videos for a broadcaster, then your app probably needs to be multithread AND multi-core AND multi-network-node capable for its video file processing!

          THOSE application specifications WILL determine the final chips and processing sub-system you need to output for. Now it should be said that YOUR CODE ALWAYS should have two modes of ability!

          a) Be able to SCALE-UP and SCALE-DOWN available LOCALLY AVAILABLE processor speeds and threads.

          b) Be able to be SINGLE SYSTEM USE and MULTI-CORE, MULTI-CPU, MULTI-NETWORK NODE capable.

          This means the IMPORTANT parts of your applications, which almost ALWAYS include text or file search, data access speeds AND actual data processing (i.e. combine, reduce, render, add, subtract and other types of actual data manipulation), must FIT WITHIN (stretch out and shrink in to) the available computing horsepower.

          You need to then make your IMPORTANT FUNCTIONS be able to decide HOW to operate depending upon the available horsepower and connectivity that is detected.

          Example:

          Procedure Process_My_Data( Var My_Data : My_Array_Type );
          Begin
            Try
              Case GLOBAL_HORSEPOWER_FLAG Of
                LOW_HORSEPOWER_USAGE       : Process_Using_Local_Core_Only;
                SMALL_NUMBER_OF_USERS      : Use_Multi_Thread_System;
                MEDIUM_SIZE_GROUP          : Use_Multiple_Cores;
                LOTS_OF_AVAILABLE_POWER    : Use_Network_Node_Processing;
                TURN_KNOB_TO_11_HORSEPOWER : Give_It_Everything_You_Got_Captain;
              End;
            Except
              Show_Error_For_Cannot_Process_Data_At_This_Time;
            End;
          End;

          As noted above, ALL my important routines have PROCESSING HORSEPOWER FLAGS which specifically get used ONLY when the specified amount or range of processing ability is detected.

          So you NEED to talk to your end-users and DEFINE specific thresholds which form HARD AND FAST boundaries as to when, where and HOW your data is to be processed, depending upon how much processing power is available locally AND/OR globally!

          .

          The technical term for this is called "Systems Analysis":

          1) What do your customers and end-users NEED TO DO AT A MINIMUM in order to get their goods and/or services delivered quickly?

          2) How can we reduce the number of steps to get those goods and services delivered quickly?

          3) What EXTERNAL factors (i.e. government regulations and 3rd party intermediaries or suppliers) will cause the process to slow down and/or stop completely for that end goal of quick delivery?

          4) How can my application system INCLUDE those parties so I can speed up my final goal of quick delivery?

          5) What is the speediest and/or most cost-effective hardware system my application can run on which will HELP ME deliver my goods and services quickly?

          .

          Answer those five questions and your application will be written and deployed a heck of a lot quicker!

          .

        2. paulll Bronze badge

          Re: well done

          "OPTIMIZED CPU, GPU, DSP, MCU FPU, and yes even PIC (Programmable Interrupt Controller)"

          Er. I think you got the wrong wikipedia entry.

          1. HieronymusBloggs Silver badge

            Re: well done

            "Er. I think you got the wrong wikipedia entry."

            Or the wrong kind of mushrooms.

            1. paulll Bronze badge

              Re: well done

              High on something to be suggesting that he's written a compiler so clever it can target a simple state machine.

              1. StargateSg7 Bronze badge

                Re: well done

                That is EXACTLY what it does! Compile to well-thought-out Virtual Assembler Instructions that do COMMON tasks that have been part and parcel of almost ALL major processor chips since the 1970's. The key is to make a Simple State Machine that contains and represents instructions that are common to many modern CPU, GPU and DSP chips, AND you get the compiler to LIMIT its output to TYPES of instructions that are available on a given CLASS of processing chip. (i.e. Server or Desktop CPU, DSP, GPU or MCU, and even a PIC processor!)

                My compiler outputs up to 150 instructions which cover a wide variety of bit-wise, signed and unsigned integer, fixed point, floating point and binary coded decimal (BCD) manipulation and various boolean and character string handling tasks, in addition to basic RGBA, YCbCrA and HSLA pixel and bitmap handling tasks, plus simplified TCP/UDP/IP V4/V6 communications stack handling at various bit-widths from 8-bit, 16-bit, 32-bit and 64-bit up to 128-bit. That is ALL we need to worry about. Everything else can be done as higher-level algorithms.

                From that SIMPLIFIED set of 150 Virtual CPU/GPU/DSP instructions, our engineers have devised and created highly optimized assembler code for over 100 general purpose CISC and RISC CPU's, low-end and high-end DSP chips, Various GPU chips, General Microcontroller Units (MCU's), Networking Chips and even PIC's (Programmable Interrupt Controllers)!

                My compiler at the final step merely MAPS TO and outputs those optimized assembler routines giving me SAFE, SECURE and reasonably FAST code for many chip types! Computers in-themselves ARE finite state machines which means I can emulate ANY SPECIFIC computer system on another!

                .

                I've been doing this since 2005! I think we've got the process down pat!

                AND YES! The parent company DOES in fact have a 128-bits wide general purpose GaAs combined CPU/GPU/DSP server-class processor that is DECADES PAST any AMD EPYC, IBM POWER-10 and Intel Xeon processor! Ours START at 60 GHz and go up to TWO TERAHERTZ for the DSP chips! This company IS BIG (i.e. lotsa money!)! It has unparalleled software, technical and hardware engineering talent, and has incredible manufacturing capacity AND it's been in business a very long time. I just happen to know one of the owners personally so I get a bit more leeway than others!

                .

                .

              2. Roland6 Silver badge

                Re: well done

                @paulll - High on something to be suggesting that he's written a compiler so clever it can target a simple state machine.

                I take it you didn't study Compiler Techniques at University... I suggest you read up on the UCSD P-code system.

                The way we implemented a C compiler back in the 1980's was to create an intermediate tuple code from which I then wrote the various code generators (x86, 68k, etc.). The only part that required real understanding of the individual CPUs was the optimiser, where you had to really understand the specifics of individual CPU family instruction sets and the differences between say the 8086, 8088, and 80186.
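                A hypothetical illustration of that style of intermediate code (the record layout, names and the toy back end here are invented for the example, not the actual 1980s compiler):

                ```c
                #include <stdio.h>

                /* Tuple/triple intermediate code: the front end flattens each
                 * expression into (op, left, right) records, and each target
                 * CPU gets its own small code generator that walks the list. */
                typedef struct { const char *op, *left, *right; } Triple;

                /* a = b + c * d  flattened into two triples; tN names the
                 * result of the Nth triple. */
                static const Triple ir[] = {
                    { "mul", "c", "d"  },   /* t0 = c * d  */
                    { "add", "b", "t0" },   /* t1 = b + t0 */
                };

                /* A toy back end emitting a 68k-flavoured pseudo-assembly:
                 * one load plus one ALU op per triple. */
                static void emit_pseudo_68k(const Triple *t, int n) {
                    for (int i = 0; i < n; i++)
                        printf("move.l %s,d%d\n%s.l %s,d%d\n",
                               t[i].left, i, t[i].op, t[i].right, i);
                }
                ```

                Retargeting then means rewriting only the few lines of emit_pseudo_68k per CPU family; the tuple list itself never changes.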

                1. paulll Bronze badge

                  Re: well done

                  I take it you didn't study any courses other than,"Compiler Techniques."

                  I'm familiar enough with the p-machine and the concept of stack machines to know that they're of no relevance here. Stack machine is not the same thing as state machine, for starters.

                  OP did a websearch for a bunch of different ISAs that his fantastic compiler could target. One of them was PIC. His source no doubt meant,"Programmable Intelligent Computer," the vastly-popular microcontroller range. Plausible. But he had to go a step further, and his google-fu let him down, and the first techy-looking hit for PIC was Programmable Interrupt Controller. And there's the rub. Programmable Interrupt Controllers don't run p-code. They don't run p-code that has further been translated for them. They don't run machine code. They don't run *any* code, they're sequential logic arrays. And it's just not a mistake that anyone who knew what they were talking about would make. It's like saying,"Yeah my software's awesome, it runs on x86, 68k, ARM, z80, ay-3-8910 and 555." No, mate, it doesn't.

                  1. StargateSg7 Bronze badge

                    Re: well done

                    PIC's are mostly just counters. The one on my desk counts from ZERO to 255 continuously all day every day!

                    Sometimes that's all you need. AND YES my compiler does in fact output incrementing and decrementing opcodes which can turn your dryer on and off via said PIC! DigiKey keeps sending their postcards wanting me to order more of them....

        3. Fred Goldstein

          Re: well done

          Good sci-fi has plausibility. Generating intermediate code is quite plausible. 128-bit GaAs processors running at THz speeds is not. GaAs runs rather hot and extremely high speed semiconductors are not likely to use it. Millimeter wave radios today have migrated to SiGe.

          1. StargateSg7 Bronze badge

            Re: well done

            Why do you think the circuit line widths are 280 nm? GaAs uses ENORMOUS AMOUNTS of power at monolithic substrate sizes! Some of the DSP chips are 200mm+ across! YET they run at up to TWO terahertz and the more general-purpose CISC 128-bits wide processor is 60 GHz. I really couldn't care less what ya think whether it's Sci-Fi or not. The PROOF is in the PUDDING! THEY DO NOT HAVE 2 THz GaAs superchips! The parent company DOES!

          2. StargateSg7 Bronze badge

            Re: well done

            We're not creating MMCs (Monolithic Microwave Circuits and/or COMM chips) on GaAs; we are doing EXACTLY what Cray did BACK in the day (1970's) with vector processors (i.e. basically SIMD-like array processing), except we are directly etching 280 nm Vector-oriented circuits onto the substrate like any normal CMOS chip, except we're adding Beryllium, Sulphur, etc. to get our P and N-type structures.

            We use IBM licenced microchannel technology for cooling underneath and BETWEEN the line traces. We've done this before! The parent company has lots of eBeam etchers which can go down to 7nm on CMOS circuits and to 280 nm for the GaAs, so wicking heat away is not an issue in our technology. They run at 60 GHz and TWO THz JUST FINE --- Thank You Very Much!

            No Liquid Nitrogen cooling is needed as, unlike Cray, we merely use much cheaper and longer-lasting Silicone Oil for immersing our motherboards and very large heat exchangers. Again, this company is WELL EXPERIENCED in high-speed circuit production. The chips go into aerospace craft very well and are actually MUCH EASIER to make than Rad-Hardened Silicon-on-Sapphire technology.

            .

            Good Try! If Steve Chen of Cray can find us he would be VERY IMPRESSED by our 128-bits wide Vector CPU/GPU/DSP work!

            .

            We know what we're doing.

            .

            1. Anonymous Coward
              Anonymous Coward

              Re: well done

              OK, then, if your company really CAN do everything you describe, why don't you NAME the company AND cite a number of its publicly-verifiable accomplishments...?

              1. paulll Bronze badge

                Re: well done

                Because if he gave the game away at this early stage, Robert Vaughn's character would cut him off.

        4. Tom 38 Silver badge

          Re: well done

          I (and ONLY ME!) did the ENTIRE FRONT-END compiler

          The only thing you did the entirety of is the crack pipe.

    3. martinusher Silver badge

      Its easy enough to make portable code for a single execution processor -- like you say, you just flip around the makefile macros and you can compile for anything you've got a code generator for. I think the problems appear when you try to exploit features of the overall processor subsystem (multiple cores, cache manipulation and so on). You can end up with exquisitely optimized code for a specific architecture. This may be what Linus is talking about.

BTW -- I was under the impression that x86 processors were microcoded (from time to time the test/load instruction becomes exposed, which is a very Big Deal -- it's like the greatest trade secret inside Intel). Given this, I wonder what the actual processor architecture looks like?

    4. Anonymous Coward
      Anonymous Coward

      Needz moar CAPZ!

    5. eldakka Silver badge

      Sounds like Java to me.

    6. Androgynous Cupboard Silver badge

      Funny. I spent the last 30 years getting some work done instead.

  10. Christian Berger Silver badge

    Well currently the problem with ARM is not the CPU

The problem is that every SoC is completely different, so you cannot have one image for (nearly) all ARM servers. By contrast, on the PC platform everything is standardized well enough that you can just install any OS on (nearly) any server and it'll run.

    1. Richard 12 Silver badge

      Re: Well currently the problem with ARM is not the CPU

      This, a hundred times this.

      Right now, every SoC needs a customised bootloader and BSP, which means a huge amount of work before you can even start up a kernel and do your work.

      And worse, it means there's no standard way for an application to detect which features a given machine has!

      1. Anonymous Coward
        Anonymous Coward

        Re: Well currently the problem with ARM is not the CPU

        There are ways round this, like Yocto Project.

        I've got code that runs on various x86 and ARM platforms and I've not had to mess with any bootloaders with the BSP side being managed by board vendors.

A bit of configuration file work to adapt the kernel to the platforms, but that's more to do with removing the virtually infinite number of x86 drivers that aren't needed on the x86 platform, in order to reduce the image size and boot times.

      2. Will Godfrey Silver badge
        Unhappy

        Re: Well currently the problem with ARM is not the CPU

Absolutely right. It's a total nightmare at the moment, to the point where there is no such thing as an ARM processor, just lots of diverse, hardly compatible systems with some variant of ARM core.

    2. Charles 9 Silver badge

      Re: Well currently the problem with ARM is not the CPU

      ARM recognizes this, but any progress towards a standardized hardware specification is still in early days. You're right that most ARM systems are built as SoCs with hardwired memory maps (often kept under wraps as Trade Secrets--that market is also extremely competitive), and that will have to be addressed. Furthermore, to be able to handle say a PCI Express bus, ARM CPUs will need higher throughput than they're likely used to handling. Otherwise, something like a bank of M.2 SSDs is likely going to swamp them.

      1. druck Silver badge

        Re: Well currently the problem with ARM is not the CPU

What do you think the controllers on many of those M.2 SSDs are?

        1. Charles 9 Silver badge

          Re: Well currently the problem with ARM is not the CPU

          Each handles just ONE of them, though. Why isn't one used to control the bus that has to deal with ALL of them at once?

        2. Anonymous Coward
          Anonymous Coward

          Re: Well currently the problem with ARM is not the CPU

Multiple, specialized cores that can fully focus on the specific tasks given to them, with no spurious commands and interrupts to handle. Heck, maybe they just use a specialized memory bus. That stuff doesn't count, I'm afraid.

    3. steelpillow Silver badge

      Re: Well currently the problem with ARM is not the CPU

      The situation has in theory improved, with much standardization effort brewing among ARM system designers. But of course technology advances invariably carry the goal posts with them.

The problem is reflected in the poor availability of ARM workstations. They do exist, but their OSes are usually crippled by a cash-strapped lack of polish - be it ARM Linux or native RISC OS - so few professionals want to know. And you have to know where to look. Although, given Microsoft's continuing work on porting Windows to ARM, that could soon change.

      If you are a frustrated ARM developer out there, my suggestion to you would be to take time out from unfriendly penguinistas who bite your fingers and see whether the newly open-sourced RISC OS has an appetite for your particular herring.

      1. Anonymous Coward
        Anonymous Coward

        Re: Well currently the problem with ARM is not the CPU

        see whether the newly open-sourced RISC OS has an appetite for your particular herring.

        That's not a herring, it's a King Salmon, thank you very much...

    4. Phil Endecott Silver badge

      Re: Well currently the problem with ARM is not the CPU

      > every SoC is completely different

This is much less true of the “server” ARM processors than the SoCs that you find on “pi”-type boards. The “server” systems generally are self-describing via PCI descriptors and ACPI, like x86, and come with UEFI, so you should be able to boot them all from the same OS installation media.

      Of course there are plenty of rough edges. But if your experience is primarily with “pi”-like boards with chips repurposed from tablets etc., then be aware that the “arm server” space is better organised.

      1. overunder

        I think it's worse, SoC or not.

        "...then be aware that the “arm server” space is better organised."

It doesn't matter, and it kinda makes things worse, that the server generation is fragmenting before a true tight standard is finalized - I mean, what does that say about the future of that generation? At the very least, it reinforces what Torvalds stated, which reads as a case for having some standard pushed by ARM themselves (which ARM doesn't have) and _not_ by some 3rd party.

Nobody right now can go out and buy an ARM device and get sure-fire identical results across multiple ARM-based devices, SoC or not. They are all 3rd parties which appear to be competing by lock-out, and maybe rightfully so, because they have to stitch together everything else. Maybe if ARM is to survive they need to develop more than just a CPU. Maybe they need to officially adopt standards or create new ones beyond the realm of a CPU (no easy task).

        1. Charles 9 Silver badge

          Re: I think it's worse, SoC or not.

          You may wish to read up on things like the Server Base System Architecture and the Server Base Boot Requirements, both of which are being pushed by ARM themselves. But like I said earlier, while they exist, they are fledgling technologies that are still trying to build up the necessary momentum.

        2. Phil Endecott Silver badge

          Re: I think it's worse, SoC or not.

          > [standards] which ARM doesn't have

          But they do! (Huh?)

          1. jelabarre59 Silver badge

            Re: I think it's worse, SoC or not.

            [standards] which ARM doesn't have

            But they do! (Huh?)

            Heh, Standards??

    5. ATeal

      Re: Well currently the problem with ARM is not the CPU

Yeah, I was looking for this. I want to like Arm, but the problem is there are just so many versions of everything, with "hidden" or actually hidden features abound. At least on x86-64 we have (never thought I'd say this) http://www.acpi.info/DOWNLOADS/ACPIspec50.pdf <-- ACPI! Yes, there are a few things I'd tweak (so hardware developers didn't fuck them up so often), but given when it was made and the inexperience at the time you can forgive it for being what it is (and any big changes now would just make supporting it worse).

We also have the CPUID instruction which, while really complicated and tedious to use - requiring a manual, sticky notes, a pencil and a rubber (because although the legend says the other side of the rubber can erase pen, this is a legend with no basis in fact) for making notes about what was mentioned where -

      BUT it is there!
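To give a rough flavour of what CPUID hands you: leaf 1 returns feature flags packed into the EDX and ECX registers, and "using" it is mostly bit-twiddling against the manual. A minimal sketch, assuming nothing about real hardware (the bit positions below are the documented ones - SSE2 is EDX bit 26, AVX is ECX bit 28 - but the register values fed in are made up for illustration):

```python
# Decode a few CPUID leaf-1 feature bits. Bit positions are the documented
# ones from Intel's manual; the register values passed in are illustrative,
# not read from real hardware.

EDX_BITS = {25: "sse", 26: "sse2"}
ECX_BITS = {0: "sse3", 19: "sse4.1", 20: "sse4.2", 28: "avx"}

def decode_leaf1(edx: int, ecx: int) -> set:
    """Return the set of feature names whose bits are set."""
    feats = {name for bit, name in EDX_BITS.items() if edx & (1 << bit)}
    feats |= {name for bit, name in ECX_BITS.items() if ecx & (1 << bit)}
    return feats

# e.g. a CPU reporting SSE, SSE2 and AVX:
print(decode_leaf1((1 << 25) | (1 << 26), 1 << 28))
```

Hence the sticky notes: the real instruction spreads features across dozens of leaves and four registers, but the principle is exactly this.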

It's also been well designed. AMD64 fixed a lot: they basically said "okay, we're doing this new 64-bit thing, it's very much like what we had but as if you did a find-and-replace from 32 to 64 and renamed e.x to r.x (I'm simplifying but you get what I mean), and we mandate that you at least have SSE2 and a bunch of other stuff" - that made it easier for all sides. New features (like AVX for example) have to be turned on by a kernel aware of them for userspace to be able to access them, so any aware kernel knows to save the registers as needed (and that they're bigger now) - stuff like that.

I want to use ARM stuff. I really want to start looking at parallel systems that aren't quite as incestuous as current ones (and have way more cores, by an order of magnitude really) - by that I mean there's often an L3 cache that sits under all the cores, so they're far from independent really - and write up some ideas I had from learning to use the Cell chip in the PS3 (something else you couldn't use anywhere else!). I'd love to actually use it for work!

People here have brought up the Raspberry Pi. I got a B, I think it was. It had no DMA controller so reads from the SD card took forever (you could see the thing spent most of its time waiting for data), half of the exposed CPU features didn't work or couldn't be accessed, the GPU drivers were extremely bad and buggy (yes, I was using the legit one), and it was basically unusable as a computer. I hate software bloat (I remember Excel 97 on an 800MHz P3 really fondly - the speed, so little memory compared to now - and I work hard to buck the trend here; rant for another time) as much as the next guy, but I don't think it was just that.

objdump works great on x86-64: if I don't know an instruction (pah!) you can copy and paste it and find a link to one of Intel's biblically sized volumes on the matter, or (if you don't want to self-harm, or in my case be driven to the harm of others) find some other reference. Felix Cloutier(?) pops up a lot now; yes, it's mechanically generated, but it's handy.

I have no bloody idea how many or which arm instruction sets I'm using. They have this really opaque marketing number system and that doesn't work: "wait, I thought 11 was better than 9" - "oh, the Cortexes are crap when the number is a single digit" - "oh, except when it's big.LITTLE because that's really a double-digit one with a single-digit one thrown in" - "BUT a single-digit one with NEON and THUMB is good, right?"

Anyone who got used to perf's event counters, prepare to be sent back to the dark ages when debuggers had to modify the binary to set breakpoints and step! SORT OF SOMETIMES MAYBE

They also give out very little information on the chip's insides (if you can work out WTF it actually is), whereas both Intel and AMD publish schematics - for example Agner Fog's guides and Wikichip.org (put in SandyBridge, go through the well documented architectures from then until now, and get a sexy textwall) - and you can reason about what kind of throughput you can get, why it's not giving you what you thought, etc. A large part of my job is squeezing everything I can out of SandyBridge and Skylake series chips.

      Anyway I digress....

ARM keep their cards close, then burn them at the first opportunity. Then they eat the ash and keep their shit in a vault.

It's like trying to get the ring from Sméagol: "Trixy consumer, it's ours, the precious, you can't know about [NEON, THUMB, design, the one where it executes JVM bytecodes directly, information about buses, timing of instructions, ...]"

Which is weird, as I love reading old ARM documentation at bitsavers.org (it was before my time but they're very detailed).

      1. Phil Endecott Silver badge

        Re: Well currently the problem with ARM is not the CPU

        > They have this really opaque marketing number system and that doesn't work

        You prefer “Something Lake”?

        1. Anonymous Coward
          Anonymous Coward

          Re: You prefer “Something Lake”?

          IA64 was always going to be "Something Late". Even in the server market.

        2. ATeal

          Re: Well currently the problem with ARM is not the CPU

I get that familiarity helps, but there are so many distinct versions, sometimes under the same group name, that it's difficult.

As for Intel's, it's a word, so it is memorable. And the second word is a series, so SandyBridge and Ivy Bridge are both from the "Bridge" series. There's been Bridge, Well and Lake - not too bad! Order is a bit harder, I give you (I often get Haswell and Broadwell mixed up), but I know that SandyBridge is (until they change it again) 2xxx, Ivy 3xxx, then there's 4xxx ... Skylake 6xxx.

          There's plenty of documentation for these and they're not that different for all the things with the same name.

      2. Phil Endecott Silver badge

        Re: Well currently the problem with ARM is not the CPU

        > They [ARM] also give out very little information on the chip's insides

        > (if you can work out WTF it actually is), where as both Intel and AMD

        > publish schematics

Ha ha ha ha - Intel publishing schematics of the inside of their chips??? What planet are you on!!!!!!

        1. ATeal

          Re: Well currently the problem with ARM is not the CPU

          There are block diagrams openly available, check out the wikichip.org thing I mentioned. You can reason about the architecture quite easily - see Agner Fog's guides I also mentioned. You can get the most out of it pretty easily.

Yeah, there are no masks available, but the "high level" stuff is: the cache, its coherency mechanisms (the protocol it uses), how it handles read-after-write (if you read a 2-byte piece of a 16-byte value just written, it can satisfy the request from the write it has yet to do; on older arches it depends on alignment, and as of Nehalem it can do it all for an 8-byte or 16-byte write, but with an extra cycle penalty for reads straddling the 8-byte boundary).

          It's all out there. Compiler writers use this too. I don't see what the issue is.

Frankly I find it absurd that you thought I meant the *actual* schematics for their product line.

          1. Martin an gof Silver badge

            Re: Well currently the problem with ARM is not the CPU

            The thing I'm finding most fascinating about this thread is that Phil Endecott hasn't bitten anyone's head off yet :-)

            It might pay some here to go and find out exactly who he is before opining too strongly on what ARM does and doesn't do...

            M.

            1. ATeal

              Re: Well currently the problem with ARM is not the CPU

              The problem though is that it's not a question of whether such an ARM chip exists; at least not for me, but you epic prototypers out there - you guys rule!

It's whether or not it is crap, and whether it can be purchased reasonably or not (eg £499 for an equiv Raspberry Pi (pick your edition) is not something I'd pay). And by crap I mean "has no DMA controller", because seeing it spend almost all of its time waiting for IO killed my enthusiasm for it.

          2. Down not across Silver badge

            Re: Well currently the problem with ARM is not the CPU

            Frankly I find it absurd that you thought I meant that the *actual* schematics for their product line.

Frankly I find it absurd that you don't call things by their proper names. How are we supposed to know what you mean if you don't?

    6. paulll Bronze badge

      Re: Well currently the problem with ARM is not the CPU

For now that's a big limiting factor. But I've been thinking for quite a while that with all the consolidation going on in the semiconductor business, *and* with Intel dropping the ball repeatedly, *and* with MS already deploying stuff to ARM, it seems a matter of time before somebody with a lot of experience and dough (NXP/Freescale batting its eyelashes at Qualcomm springs to mind) starts shifting an ARM implementation that comes to the fore in the general-purpose CPU space, becoming a standard in and of itself and potentially eclipsing x86.

    7. Gordan

      Re: Well currently the problem with ARM is not the CPU

      "The problem is that every SoC is completely different. Therefore you cannot have one image for (nearly) all ARM-servers. To contrast this on the PC-platform everything is standardized well enough so you can just install any OS on (nearly) any server and it'll run."

      Not true any more. UEFI has come to ARM to bring the much needed standardisation. I have a CentOS 7 install DVD (with a custom mainline kernel to support more SoCs than the distro kernel) that will boot the installer on just about any UEFI aarch64 machine, including Gigabyte MP30-AR1 and Raspberry Pi 3 (you have to make it boot the UEFI boot loader from the SD card, but that's no big deal).

      Standards have in fact come to ARM. It happened a couple of years ago.

    8. Anonymous Coward
      Anonymous Coward

      Re: Well currently the problem with ARM is not the CPU

This happens to any technology that is successful. The trick is to standardize and possibly modularize the additions to the platform. Having those kinds of advances overseen by ARM would make a lot of sense. It should be done relatively early, or you will face competition from standards set up by others. Doing it too early will let you make mistakes and the standardization will fail. Tricky stuff; a few remarks here show that they may be a bit late, so some standards will have to fight it out between themselves.

  11. milliganp

    Apple PC

Did Linus miss the Intel announcement that Apple will move their PCs to ARM 'by 2020'?

Yes, ARM in the data center has had multiple false starts, but there are now companies with tens of billions of dollars investing in ARM everywhere.

Delays in Intel's migration to 10nm and 7nm have cut much of its lead and shown it is no longer invincible. It would be a fool who would say x86's days are numbered, but nothing in the history of technology lasts forever.

    1. joeldillon

      Re: Apple PC

      I think we all did, mate. Please link to the announcement on Intel's website.

      Alternatively, rumours aren't the same as facts.

      1. doublelayer Silver badge

        Re: Apple PC

        This doesn't solve his complaint. If the tradition of ARM continues, you won't be able to boot anything else on these ARM-based macs, and you won't be able to use ARM Mac OS on a server. Depending on whether Apple really hates multi-boot, you might see a port of Linux to the ARM mac. However, that won't happen immediately; it will take quite a bit of development to get right, and even then your code is running on a slightly different version of the ARM core.

        I don't see this as a problem, but I also don't see writing on x86 and running on ARM as a problem either. His complaint is that you can't run exactly the same system on the development machine as the production machine because the closest thing we have to an ARM desktop is a raspberry pi. If he is right that this is an important factor, an ARM apple machine does not change it.

    2. IanMoore33

      Re: Apple PC

Define "Apple PC"... you mean Mac?

    3. Phil O'Sophical Silver badge

      Re: Apple PC

      Apple will move their PCs to ARM 'by 2020'.

      Which, when you think about it, is less than 11 months away.

  12. Compuserve User
    Linux

    ARMed and Dangerous

I am sure Mr. Linux knows what he is talking about. It is just that the many skeletons in the closet with regard to ARM are more frequent and scary than in x86. The constant security issues in ARM are bewildering. It is better summed up as "Better the devil you know than the devil you don't." I could not trust an enterprise ARM server for anything really, except maybe home lab use...

    1. Anonymous Coward
      Anonymous Coward

      Re: ARMed and Dangerous

      As compared to x86 (Meltdown et al)? Explain.

      1. Anonymous Coward
        Anonymous Coward

        Re: ARMed and Dangerous

x86-64 has far more exciting stuff than just Meltdown. E.g. the decade-long denials that the x86 "management engine" was a security nightmare have finally been revealed as Intel HQ BS.

        Rather more recently, well-informed people are realising that whilst Arm SoCs largely still have TrustZone, Intel's allegedly-secure SGX is also falling apart at the seams:

        https://www.wired.com/story/foreshadow-intel-secure-enclave-vulnerability/

        https://www.theregister.co.uk/2018/08/15/foreshadow_sgx_software_attestations_collateral_damage/

        https://www.theregister.co.uk/2019/02/12/intel_sgx_hacked/

        But Intel still have piles of cash, and that still counts for something.

      2. eldakka Silver badge

        Re: ARMed and Dangerous

        Meltdown and Spectre aren't x86 issues.

They are IC engineering issues, where chip designers try and wring out extra performance in ways that are instruction-set agnostic. Meltdown is Intel specific; it doesn't affect AMD x86 CPUs, nor most other CPU manufacturers.

Spectre (and its variants) are also IC engineering issues, not instruction set issues. Spectre variants affected most of the x86 platform (some Atoms were not affected) from Intel and AMD, plus ARM, Power and SPARC.

  13. steelpillow Silver badge

    Wrong way round

It's not the lack of workstations holding back the development of servers, it's the lack of servers holding back the development of workstations. Why bother, if there are **** all servers to code for? Once the servers are out there, the workstations will follow.

    1. Anonymous Coward
      Anonymous Coward

      Re: Wrong way round

      The problem with eggs is the lack of chickens to lay them.

    2. teknopaul Bronze badge

      Re: Wrong way round

I still don't grok why any company would invest significant money porting their server hardware to Arm and all their developer laptops to Arm.

That's expensive in hardware.

      Probably a lot of software work even if you can "just recompile". There will be lots of code that needs rewriting/upgrading.

Significant packaging work for Arm servers, since it's not 100% ready yet for most distros.

      Years of supporting two architectures in the transition.

      Some stuff will probably never get ported.

      What is the money saving? Is it just electricity in the datacenter?

      Anyone got any numbers?

      1. dajames Silver badge

        Re: Wrong way round

        What is the money saving? Is it just electricity in the datacenter?

        What do you mean "just"?

      2. ATeal

        Re: Wrong way round

        As a very informal rule of thumb (but as with all rules of thumb, useful - I only put this here to start a pissing contest with some of the tools here :P)

x86-64 has around a "40% crap tax" - that is, you pay 40% of some measure to account for specific things. For example, 40% extra power for the nasty decoding problems, or 40% lower throughput if you skimp on those (which is why the Atoms sucked so badly - they still needed active cooling and were easily out-performed by a 2012 phone. But that's a long time ago; I don't remember when I got the netbook and joined the revolution of "so much battery life, so portable - tiny keyboard is unusable, and it's unusably crap").

        In that trade-off is 40% more power usage.

I stress it's a rule of thumb. SandyBridge (although the idea existed before as a trace cache) and possibly Nehalem (the one before) use an "LSD" - loop stream detector (it was botched for Skylake, discovered by the Haskell people (naturally), and de-activated in a patch). The idea is that it stores loops that are small enough in their entirety, allowing you to switch off the decoders (a huge saving!) so tight loops just run near perfectly. That sounds pretty weird, right? If the decoders could keep it fed, why bother? Power savings.

        Lastly, why this tax?

x86-64 instructions can be up to 15 bytes in length, and as short as 1 byte. So if I give you some quantity of bytes, you cannot mark out where instructions begin and end without decoding at least their lengths, but the use of prefix bytes and all kinds of other crap means that this "pre-decoding", if you will, is basically a full decode. Once you've done this step the register masks are there to be syphoned off and stuff; it's a really, really big penalty.

RISC architectures traditionally (they kinda have to, to be RISC, to be honest) have nice uniform instruction lengths, eg all 4 bytes. VLIW and EPIC are sub-types of RISC and have longer ones (like 32 bytes at the lower end for VLIW) - and alignment requirements, that is, the address of the first byte of the instruction must be divisible by, say, 4 in this example.

        So given any chunk of bytes, I can say "if an instruction is here it starts where the address ends in 00" - next is 8 bytes, that'd end in "000" - job done.

I *believe*, but it's not my area (see above), that ARM chips can switch modes: they have a short 2-byte instruction form that covers loads of common cases, and the chip must be switched between modes. Another I've heard of requires a fetch to be 4 bytes, either 2x 2-byte short instructions or one full-size 4-byte one (I've heard of something similar with 8-byte fetches accepting 2x 4 bytes for common cases).

But you get the gist: very easy. For x86(-64) this affects everything - branching, for example: "where's the start of the next instruction?" - nope, you can't just add 4. This needs to be known for branch histories too. Decoding is also an absolute nightmare. This is why RISC emulators run reasonably well (for pure emulation now) compared to emulating x86-64 (yes, brute force lets us run some stuff like this practically, but x86-64 is way, way, way more difficult).
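The fixed-vs-variable point is easy to make concrete with a toy sketch (this is not any real ISA's encoding): with a fixed 4-byte format you can compute any instruction start directly from an address, while with variable lengths you must decode sequentially from a known start just to learn where instructions begin - which is exactly what hurts branch handling and parallel decode.

```python
def fixed_starts(base: int, nbytes: int, width: int = 4) -> list:
    """Fixed-width ISA: instruction starts are simply every `width` bytes,
    so any start address can be computed with no decoding at all."""
    return list(range(base, base + nbytes, width))

def variable_starts(lengths: list, base: int = 0) -> list:
    """Variable-length ISA: each start is only known after decoding the
    length of the previous instruction -- no random access into the stream.
    `lengths` stands in for what a real length-decoder would produce."""
    starts, addr = [], base
    for n in lengths:
        starts.append(addr)
        addr += n
    return starts

print(fixed_starts(0x1000, 16))        # [4096, 4100, 4104, 4108]
print(variable_starts([1, 15, 3, 2]))  # [0, 1, 16, 19]
```

On x86-64 the `lengths` list doesn't exist up front - producing it is the expensive near-full decode described above.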

What happened to RISC, you might ask, if it's so good? Well, there were like 4 or 5 RISC arches and suddenly there were zero; they all thought Itanium would be a good idea (enjoy looking that up) - no one mentioned "but hey guys, doesn't that force an NP-hard problem onto the compiler?" or "doesn't that mean you can't be sure code you wrote ages ago will work even reasonably well on later versions, because of architecture changes?"

        But anyway.... It was a bit before my time, but only PowerPC was left standing sort of.

Itanium was supposed to be Intel's 64-bit thing; that's why it's called IA64 (Intel Architecture 64). IA32 is x86, and AMD64 is what we sometimes call "x86-64", because AMD realised "hey, backwards compatibility FTW!"

Now, you asked about power. Speaking purely of the CPU and not the Larrabee-derived Xeon Phi accelerators (they're like... gimped/Atom-esque cores with AVX-512 bolted onto them; crap CPUs but decent vectorisers. This sits in a rare niche where a GPU is even today too not-general-purpose to do it, so it needs the CPU parts):

40% savings on power - or you could have 40% extra of the "uncore" (weirdly this means "the core of the core", kinda) transistors to use for not paying the tax; you get the idea. This 40% of transistors doesn't include the cache, BTW - purely "uncore".

        That's a big deal.

Furthermore, the time of "wait 6 months, then it'll be faster" (hardware would get faster) is long over. We're now deep into the "scaling out" side of things, and some algorithms are probabilistic (bad term on my part, NOT "probabilistic algorithms" - something else, see next paragraph). Yes, there's a lot of work not geared for this, but there's a lot of this work too! 40% is nearly half; you could almost run another core with that.

For "algorithms that are probabilistic" I meant, for example, that a certain search engine beginning with "G" (at least, I imagine it's common - it's easy) actually sends out 3 copies of every web-query it gets, shows the results from the first one to come back, and ignores the rest. This hugely cuts down on latency.
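That fan-out trick (often called "request hedging") is easy to sketch with Python's concurrent.futures: fire the same query at several replicas, take whichever answers first, ignore the rest. The replica names and delays below are simulated stand-ins, not a real backend:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def query_replica(name: str, delay: float) -> str:
    """Stand-in for a real backend call; `delay` simulates network/server time."""
    time.sleep(delay)
    return f"result from {name}"

def hedged_query(replicas):
    """Send the same query to every replica and return the first response."""
    with ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        futures = [pool.submit(query_replica, n, d) for n, d in replicas]
        # as_completed yields futures in completion order; take the winner.
        return next(as_completed(futures)).result()

print(hedged_query([("a", 0.30), ("b", 0.01), ("c", 0.20)]))
# the fastest replica ("b") wins
```

The cost is the duplicated work on the losing replicas - which is exactly why shaving 40% off per-core power makes this kind of latency trade cheaper to run.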

        It's a hell of a saving and as I whined about above, I've wanted to see it for a long time.

  14. Paul

    One problem is that at the low end we have R-Pis and competitors, all sub $100 for the boards. At the higher end of that we have Rockchip RK3399 boards which tend to be in the $80 to $120 range depending on RAM, eMMC and wifi capabilities. All of these are built on older Arm cores - the RK3399 has A72 cores which are a good few years old now!

There's then a big jump to get boards with higher spec cores, and most of those are for "professional" users as "board support packages" for businesses developing phones or tablets, and cost many many hundreds of $$, far from affordable. You can look on 96boards and you'll find the Kirin 970 (Arm A73) at roughly $300. I couldn't find anything there with newer Cortex cores.

Then you have the problem that Arm don't have good Linux support for their GPUs - usually a binary blob, and little or no 3D acceleration. AFAICT people end up using kernels and drivers from Android builds and then bodging a Linux desktop on top of that.

Gigabyte have a Cavium ThunderX workstation, but for that price you can buy a pretty decent Intel laptop! The Socionext dev/workstation is over $1000.

    So, really, it seems to me that Arm don't care about anything other than Android or small embedded devices, maybe they care a bit about Windows (with the new replacement of Windows-RT) but I wonder who's doing all the work on the GPU side to make Windows run on it? If Arm cared, they would be actively supporting native development on Arm-based workstations.

Apple are really doing their own thing: their processors closely resemble Arm processors when seen as a black box, but AFAICT their Bionic processors (which are really good!) are a completely custom design. I hope that they do release affordable devices like Mac Minis with Bionic processors, and that they don't lock them down, so they can be re-purposed for other operating systems!

  15. Doctor Syntax Silver badge

Price is the determining factor in the long run. Back in the hey-day of VAX, HP, Sparc etc. we were likely to develop on the same architecture as the application ran on. We might even have developed on the deployment machine; a separate development box would have been a luxury not many could afford for in-house development. It was cheaper hardware that enabled the move to Intel, and the availability of server OSes that led to the migration from those platforms. Xenix and SCO enabled Unix applications to move onto Intel; in fact, if SCO hadn't attempted to follow a high-price model I doubt Linux would have come to dominate the Intel Unix-like market. There were plenty of small businesses in the '90s running on SCO and dumb terminals. There was a sub-industry producing multiple-serial boards to plug into PC motherboards just to support it. There were also bigger businesses running on multiple Intel processor boxes with Unix OSes such as Dynix.

The move to individual workstations for developers is a separate matter. The PC made it economical for developers to have their own workstations. For someone like Linus that made it possible to develop outside a corporate environment. For traditional architectures a shared server sufficed for many of us working on server/dumb-terminal-based applications. I suspect Netware might have prompted the adoption of individual workstations in corporate development. My limited encounter with that left me with the impression that development on a server, especially one that was also running production, would have been too fraught. A separate workstation would have been needed for graphical applications, even for Unix, as the X-terminal never really caught on. This, of course, is where Windows came in and changed things.

    As to the future ISTM that the determining factors over the next few years will be the ability to mitigate the likes of Meltdown in Intel/AMD architecture* in the next generation of products and the adoption of ARM in workstations in a configuration consistent with those of servers.

    * An interesting aspect is whether Intel and AMD diverge in the process.

    1. Roo
      Windows

      The cost of broken x86 is already significant and rising rapidly...

      "As to the future ISTM that the determining factors over the next few years will be the ability to mitigate the likes of Meltdown in Intel/AMD architecture* in the next generation of products and the adoption of ARM in workstations in a configuration consistent with those of servers."

      I think the deciding factor will be chip errata. Case in point: Xeon errata have been consuming a serious amount of man-power, money, and lost production. It's easy to point the finger at the validation processes, but I think the actual root cause is the ISA. It's too big, too complex, and in many areas too poorly defined. It is a money pit for validation and remediation.

      Cleaner ISAs with cheaper more reliable hardware will tell in the end.

      YMMV :)

      1. Doctor Syntax Silver badge

        Re: The cost of broken x86 is already significant and rising rapidly...

        "Cleaner ISAs with cheaper more reliable hardware will tell in the end."

        I agree. What I had in mind is whether Intel/AMD can survive attempts to mitigate without losing performance and whether, in the end, they have to get replaced with something else.

        1. Roo
          Windows

          Re: The cost of broken x86 is already significant and rising rapidly...

          What's interesting about whether Intel/AMD can adapt is that they've already done it once with x86-64, but in doing so they made a lot more cruft. They might be able to offer a stripped-back, more efficient, better-validated DC-focussed x86-64 variant, but it'll be tough for them to market it. Case in point: Xeon Phi's long, twisty genesis.

          1. Charles 9 Silver badge

            Re: The cost of broken x86 is already significant and rising rapidly...

            It's the same problem Microsoft has: having to support backward compatibility. How do you clean things up without getting complaints from people who actually use the cut stuff for their mission-critical stuff? Turn them away and you can end up with defections and bad word of mouth which can domino.

            1. jelabarre59 Silver badge

              Re: The cost of broken x86 is already significant and rising rapidly...

              It's the same problem Microsoft has: having to support backward compatibility. How do you clean things up without getting complaints from people who actually use the cut stuff for their mission-critical stuff?

              Simple enough from a technical standpoint, not so easy from an administrative/licensing one. MS would just need to turn their older legacy code over to the Wine project, and then introduce a Wine layer in the MSWin server to use that for older code. That would free them up to remove it from the core.

              As I say, simple from a technical standpoint. Getting such a thing approved would be near impossible unless they found it their only remaining option.

  16. joeldillon

    Hmm

    Given the number of web/backend developer types I've seen who develop on Macs and then deploy to Linux servers, I'm not sure Mr. Torvalds is in the right here. Different operating systems seem a much bigger stretch than different ISAs.

    1. TechnicalBen Silver badge

      Re: Hmm

      Perhaps he is just hoping/expecting Open SPARC/RISC projects explode and replace current ARM architectures? But that would be a long way off, and not really a reason to cut back on ARM development *now*. :/

    2. stiine Bronze badge

      Re: Hmm

      You must not have to support them.

      I do, and things are different enough without bringing up version differences.

  17. devTrail

    Phrases out of context

    It seems that they took some phrases out of context to fuel the x86 vs ARM contest.

    But whenever the target is powerful enough to support native development, there's a huge pressure to do it that way, because the cross-development model is so relatively painful.

    Linus Torvalds is fully aware that native development is a small part of all the development that is going on, so the phrase has a limited scope. If you develop on a framework, and the framework has been tested on different platforms, then unless you are shaving off milliseconds to get the highest performance you just deploy on whatever server you find available. If you really see a difference in performance or stability you have to ask your boss explicitly, who will also take the budget into account, so you have to justify your request.

    I think that in the near future the x86 vs ARM contest will somewhat overshadow other alternatives like RISC-V, PowerPC and so on.

  18. karlkarl Bronze badge

    This is good. This idea might actually push for the development of a proper ARM workstation rather than hobbyist / consumer stuff.

    If we can just break through the idea of having a unique image per arm device and develop a standard "BIOS" for arm, then we might finally get away from Intel.

    However, all this will only happen from the "open" developer market first, servers will follow.

    1. -tim
      Boffin

      The way the standard BIOS was done was a huge mistake for x86 systems. Microware OS9 (for the 6809/68000) used to have device modules. They were small files, tweaked for the machine, that could be on disk or in ROM. A serial driver device module would say something like "use chip driver mc68681.drv, interrupt 4 and memory i/o of 0x80008". There was a second name module that gave names to com1 and com2 for the device module. The chip driver would be loaded off disk, and the ROM-based name module or device module could be replaced from one on the disk. The main processing loop of the OS knew how to share interrupts, as the modules contained enough info to figure out which chip caused the interrupt. It could reload and reinitialize modules. Add a few fields for PCI-style IDs and device UUIDs and it could be used on all modern hardware.
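
      The descriptor described above could be sketched roughly as follows in C. This is purely illustrative: the names, field sizes and layout here are invented for the sketch, not actual OS-9 structures, with the `pci_id`/`uuid` fields standing in for the modern additions the comment proposes.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical device-module record, loadable from disk or ROM, binding a
   port name to a chip driver plus its interrupt and memory-mapped I/O
   address. All names and sizes are invented for illustration. */
struct dev_module {
    char     name[8];        /* port name, e.g. "com1" */
    char     driver[16];     /* chip driver file, e.g. "mc68681.drv" */
    uint8_t  irq;            /* interrupt line, e.g. 4 */
    uint32_t mmio_base;      /* memory-mapped I/O base, e.g. 0x80008 */
    uint32_t pci_id;         /* vendor:device pair, 0 if none (modern addition) */
    uint8_t  uuid[16];       /* stable device identity (modern addition) */
};

/* Build the exact example from the comment: com1 driven by mc68681.drv
   on interrupt 4 with memory i/o at 0x80008. */
struct dev_module example_com1(void) {
    struct dev_module m;
    memset(&m, 0, sizeof m);
    strcpy(m.name, "com1");
    strcpy(m.driver, "mc68681.drv");
    m.irq = 4;
    m.mmio_base = 0x80008;
    return m;
}
```

      With records like this, the OS's interrupt loop can walk the module list to find which chip raised an interrupt, which is the sharing mechanism the comment describes.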

      1. jelabarre59 Silver badge

        Microware OS9 (for the 6809/68000) used to have device modules. They were small files that were tweaked for the machine that could be on disk on in ROM. A serial driver device module would say something like "use chip driver mc68681.drv, interrupt 4 and memory i/o of 0x80008".

        That's how I expect Microchannel could have evolved had IBM been smarter on the licensing front. Those "reference diskettes" would have eventually evolved into EPROMs on the adapter cards, with the device configuration and binaries for drivers on the device itself. At this point it's too late to implement such a thing, even in PCIe.

        1. Peter Gathercole Silver badge

          @jelabarre59

          It is easy to forget that Microchannel was not just an x86 technology.

          Both RISC System/6000 (early Power platforms) and AS/400 used Microchannel, and neither of these needed reference disks. I can't get away without mentioning the baby mainframe 9370 as well, but it used a PS/2 model 80 as its I/O controller, so does not really count in this context.

          In many ways, Microchannel worked a lot like PCI. Each card had a baked-in ID string that was readable during the configuration stage of booting, to allow the OS to configure the support.

          RS/6000 and AS/400 systems did have an advantage, though, because both OSes were controlled by IBM, so in much the same way that Apple does now, IBM controlled both the hardware and software layers of the systems. Before using a new card, you had to upgrade or patch the OS to include support for it, which provided the configuration method for the card's ID string.

          For PS/2 systems, the reference disk included an ADF (Adapter Description File), which was a text file with a description of the adapter, and which actually sounds a lot like the OS9 text description file that the previous poster referred to. I think that the BIOS in PS/2 systems would load the information for the installed cards from the reference diskette to set up the IRQs and memory, and store that in NVRAM, so that the BIOS would do the basic device configuration before the OS bootstrap.

    2. dajames Silver badge

      If we can just break through the idea of having a unique image per arm device and develop a standard "BIOS" for arm, then we might finally get away from Intel.

      We have UEFI, which is supposed to be a 'standard "BIOS"' for everything. One of the big drivers behind it was (ironically enough) that Intel wanted a single "BIOS" (and single adapter board ROM images using interpreted code) for both x86 and Itanium, and it can certainly support ARM as well.

      UEFI is not known for being clean or simple, and it's certainly not everyone's cup of tea, but it does exist and it is increasingly widely used. It's an off-the-shelf solution that can, in principle, support any processor family.

      ... so let's not pretend that there's no applicable solution for ARM.

      1. stiine Bronze badge

        Won't Microsoft have to agree to sign any solution?

  19. Giles Jones Gold badge

    Depends on the application; you won't find many people developing online applications in C++. Most are Java, .NET, Scala etc. All compiled but interpreted by a VM.

  20. cdegroot
    Linux

    Torvalds is wrong. Film at 11

    Sometimes I think he's a bit of an idiot savant - he's an ok-ish coder (I was around a lot in the early kernel sources, nothing to write home about) and he seems to be doing a decent job of corralling lots of developers into building something that, frankly, should not have been built (I'm with Tanenbaum here ;-)). I get it though that El Reg needs to run an article every time he opens up his pie hole.

    Anyway, I think he's wrong. I'm a "cloud" developer and for the large part it's a "don't care about the processor" kind of business. Cloudy stuff happens in Java, Ruby, Python, Golang, ..., and in our case Elixir - modulo the odd native dependency, which I find all compile fine for ARM. Things just run on both platforms. If ARM servers can save money at scale, I would be more than happy to spend the one or two days to figure out cross-compilation for our build systems; economics dictate it's a good investment.

    I think that that's pretty much the biggest uncertainty - can ARM deliver more transactions per dollar than Intel? Everything else follows from there. I'm doubtful, to be honest, even though I'd love to see some competition, and ARM and RISC-V are pretty much the only games out there next to x86.

  21. Paul Hovnanian Silver badge

    He's right, but ...

    ... what put Intel into the catbird seat may knock it back out.

    As Torvalds points out, Intel won the server space because development platforms supporting it became much cheaper than the big name platforms at the time. And it's what gave Linux a leg up over NT in this space as well. x86 platforms being equal, someone just starting out on a shoestring could slap together a GNU tool chain for a lot cheaper than the requisite Microsoft licenses were going to cost.

    But now, Intel is having trouble in the home/small office market. The preferred platform is moving off of beige boxes and over to laptops and tablets. With batteries. Something that Intel continues to have troubles with. ARM is moving up from the bottom, so to speak. It is the go-to platform for SoCs, embedded apps, phones and tablets that aren't manacled to a power cord. And it's an easy sell for clients where the consumer doesn't give a damn about processor technology. If ARM manages to wiggle its way into more client platforms, it will become the development platform for server space.

    Once ARM gets into server racks, the power issue may raise its head again. If Intel can't get its act together on power consumption, the lower power cost of the ARM platform, while something a consumer may not consider, will be a part of the data center's TCO.

    I'd go on about this, but my ThinkPad's battery icon is winking at me and I've got to find a plug .....

  22. emullinsabq
    Linux

    server market...

    The reason I don't care about cloudy things is my ARMv5 (Seagate Dockstar, Kirkwood) home server. I've had it almost 10 years running exim4, dovecot, apache2, inadyn, tftpd, samba and cups. It's been solid as a rock despite only having 128MB RAM. It draws 3 watts (8 watts because of powering some USB devices) and just chugs away. A year ago I tried to add Pi-hole, which was just too much for the 128MB, so I moved Pi-hole to an actual Pi. Best $17 I ever spent, and I have a spare in case it ever actually goes kaput.

  23. bobajob12

    Misses the point?

    I think The Great Torvaldo is making one good point and missing the larger one. It is a scary thought to not be writing on the same platform that you are deploying on. Absolutely right. But it does not follow that you should stay on one, nor that choosing not to is futile.

    For one, we can all tell stories of how Production turned out not to be like Dev because the latter had some dll or package that the former didn't. So you already have to have very strict processes for avoiding these types of snafus, and once you have those, you've broken the major objection to cross platform dev.

    Secondly, the implication that building on x86 is the only way to establish quality is demonstrably false. When you think about it, there are many, many more non-x86 (e.g. ARM) devices out in the field than x86, and they work well. Phones, routers, etc etc. So clearly the cross-building folks know what they are doing.

    I do agree with ARM that having some rocket fast reference hardware would be a Good Thing though. One of the sad changes in Linux over the years has been the gradual dying off of non x86 Linux distros. It would be good to have some decent hw at a lower price point than x86 to resurrect that.

  24. CheesyTheClown

    There has been progress

    I do almost all my ARM development on Raspberry Pi. This is a bit of a disaster.

    First of all, the Pi 3B+ is not a reliable development platform. I've tried Banana Pi and others as well, but only Raspberry Pi has a maintained Linux distro.

    The Linux vendors (especially Redhat) refuse to support ARM for development on any widely available SBC. This is because even though Raspberry PI is possibly the most sold SBC ever (except maybe Arduino), they don’t invest in building a meaningful development platform on the device.

    Cloud platforms are a waste because... well, they’re in the cloud.

    Until ARM takes developers seriously, they will be a second class citizen. At Microsoft Build 2018, there were booths demonstrating Qualcomm ARM based laptops. They weren’t available for sale and they weren’t even attempting to seed them. As a result, 5,000 developers with budgets to spend left without even trying them.

    This was probably the biggest failure I’ve ever seen by a company hoping to create a new market. They passed up the chance to get their product in front of massive numbers of developers who would make software that would make them look good.

    Now, thanks to no real support from ARM, Qualcomm, Redhat, and others, I’ve made all ARM development an afterthought.

    1. JBowler

      Re: There has been progress

      Use gentoo with openrc. If you are doing dev you don't want a GUI, waste of space; the target devices don't have GUIs. The gentoo ARM guys are pretty damn good.

      John Bowler <jbowler@acm.org>

    2. jelabarre59 Silver badge

      Re: There has been progress

      The Linux vendors (especially Redhat) refuse to support ARM for development on any widely available SBC. This is because even though Raspberry PI is possibly the most sold SBC ever (except maybe Arduino), they don’t invest in building a meaningful development platform on the device.

      I think part of the problem with RHEL on rPi is underlying functionality needed by enterprise-level systems. Don't ask me the specifics, it's a bit beyond me, but if Fedora can work, maybe Arm SBC development and RHEL could eventually meet.

      1. Anonymous Coward
        Anonymous Coward

        Re: There has been progress

        "I think part of the problem with RHEL on rPi is underlying functionality needed by the enterprise-level systems"

        Is that an assumption, or has anyone here actually read (and maybe even tried)

        https://www.suse.com/products/arm/

        (yes it's SUSE and not RHEL)

        "SUSE Linux Enterprise Server for Arm is an enterprise-grade Linux distribution that is optimized for unique 64-bit Arm chip capabilities. It enables solution providers and enterprise early adopters to gain faster time to market for innovative server and Internet of Things (IoT) device solutions. SUSE Support services options include up to 1-hour response times and 24 x 7 access to reduce problem resolution time for mission-critical operations, as well as access to updates and fixes. High-performance computing (HPC) features for Arm processor-based systems are available with SUSE Linux Enterprise High Performance Computing."

        and/or read (and maybe even tried)

        https://www.suse.com/documentation/suse-best-practices/singlehtml/sles-for-arm-raspberry-pi/sles-for-arm-raspberry-pi.html

        "Introduction to SUSE Linux Enterprise Server for ARM on the Raspberry Pi"

        or, not far from here, even read (and maybe even followed up on)

        https://www.theregister.co.uk/2018/03/29/suse_raspberry_pi_linux_enterprise_server/

        or (last one for now, I promise):

        https://pimylifeup.com/news-suse-linux-enterprise-server-for-raspberry-pi/

        I haven't, yet, because other, non-geeky, stuff has taken priority. But I hope to, when the opportunity arises.

  25. LateAgain

    Will we ever go back to processors that do sensible machine code?

    Just asking since the 286/386/486.... series was horrible enough to put me off machine code.

    The Motorola stuff was better.

    1. kirk_augustin@yahoo.com

      Re: Will we ever go back to processors that do sensible machine code?

      Exactly. The x86 is the worst register and instruction set I have ever seen. Almost anything would be better, and Motorola certainly always was much better.

  26. Anonymous Coward
    Anonymous Coward

    We need architecture diversity to guard against vulnerabilities

    We need at least two processor architectures to guard against future architecture-based vulnerabilities. I know Spectre and Meltdown affected both prominent platforms (x86 and ARM). OK, the solution may not be ARM but a CPU significantly different enough to guard against this happening again. Hopefully Power, MIPS or SPARC may see a resurgence. I am hopeful of Power.

    1. doublelayer Silver badge

      Re: We need architecture diversity to guard against vulnerabilities

      The theory being that when one is found to have a vulnerability, we switch over to another one and wait for that one to have a vulnerability? I want multiple architectures to prevent the monoculture problem, but I don't see how this helps us in the event of a security problem. Whenever there is a problem in security, the issue is rarely the processors being purchased now, because they can hold back on them until they've fixed it, but all the old ones that are running vulnerable in the field.

  27. J.G.Harston Silver badge

    I have four Arm-based computers here on which I toil away on producing, testing and documenting code.

    1. Anonymous Coward
      Anonymous Coward

      Re: "producing, testing and documenting code."

      "I toil away on producing, testing and documenting code."

      There's the problem then (or two of them).

      See, testing in the 21st century is a job for customers, and documenting is a rather 20th-century concept.

      On x86 boxes, unexpected system behaviour can typically be blamed on Windows defects, or hardware defects, or both, in a marvellous game where everyone blames someone else and no one is ever held accountable.

      It's harder to blame Windows on a box that doesn't (can't) run Windows.

  28. david 12 Bronze badge

    Transmeta

    Linus has personal experience with an alternative hardware platform that wasn't a success because it didn't attract a market and wasn't a good model for "replacing the Intel family". I do not doubt he is calling it as he sees it, but I have no idea if he's calling it correctly.

  29. John Savard Silver badge

    Development Platform Availability

    Can ARM be trusted for deployment on servers, when there aren't even any ARM development platforms around?

    That's a good question to ask. That is a gap in the ecosystem.

    But if you've got an ARM cpu that's powerful enough to put in a server, then clearly non-rack-mount motherboards can also be developed for it.

    Also, while code written in C is at risk of biting you if run on different hardware, code written in FORTRAN or Pascal... not so much.
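
    A concrete example of the C risk (a hedged sketch, not taken from the comment): the signedness of plain `char` is implementation-defined, and the standard ABIs differ between x86 (signed) and Arm Linux (unsigned), so code that stores `getchar()`'s `int` result in a plain `char` and compares it with `EOF` can work on the development box and loop forever on the deployment target. The width of `long` is another such quiet assumption.

```c
#include <limits.h>
#include <stddef.h>

/* Returns 1 if plain `char` is signed on this target (as on x86), 0 if
   unsigned (as on Arm Linux). A loop like
       char c;
       while ((c = getchar()) != EOF) ...
   terminates on the former but never on the latter, because an unsigned
   char promoted to int can never equal EOF (-1). */
int char_is_signed(void) {
    return (char)-1 < 0;
}

/* Width of a long in bytes: 8 on LP64 64-bit Linux, but 4 on 32-bit Arm
   (and on 64-bit Windows, which is LLP64) -- another assumption C code
   quietly bakes in that FORTRAN or Pascal programs rarely depend on. */
size_t long_width(void) {
    return sizeof(long);
}
```

    These are the kinds of latent bugs that only surface when the build architecture and the deployment architecture diverge, which is exactly the scenario under discussion.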

    So I don't see any problems with making the transition gradually. While x86 has a big lead, and some of its competitors have fallen by the wayside - no one is expecting SPARC or PowerPC or Alpha to come and take the world by storm any longer - I think it's too early to say that the x86 lead is forever.

  30. Lomax
    Flame

    Master and Server

    I'm not even sure that "server" and "client" is a useful distinction anymore; I've got dozens of "computers" to look after and they are all "servers" in some respect. They all run some flavour of Linux, and between thumb & finger 75% are ARM based. Torvalds may be something of a god in my household, but on this he is very, very, very, wrong. Beginning to show his age, perhaps?

  31. fredesmite
    Mushroom

    ARM "servers" are a dream.

    Intel has killed off scores of other server-class processors - MIPS, PowerPC, SPARC at HP, IBM, Oracle - simply due to the incredible hardware costs of maintaining and developing new platforms. The Big Iron companies hate having to pay Intel for server chips, but they hate creating new platforms even more, so they are test-driving ARM to keep some sort of R&D spirit alive. But in the end, when the final $$ tally is done, they will all resort to mass-produced, sweatshop-manufactured garbage from China built on Intel reference schematics. Just keep adding NUMA sockets!

  32. JBowler

    I agree, I've always developed on ARM, well, since 1993

    1993, when I first got my hands on one.

    Now it is true I was cross developing then because I was writing for one OS on a different OS (like for ARX on something cobbled together but still on an ARM). That was a disaster area.

    These days I just use gentoo. Three or four years back there was a big problem because of several enormous piles of doo-doo all called *webkit*; way too much memory for an RPi. Then again, a couple of months ago my attempt to build Mr Torvalds' nut terminally crashed my x86 gentoo machine several times; the "kernel" whacked out with a simple sequence of reproducible steps:

    1) make oldconfig

    2) make --jobs

    3) PSOD

    I was running it under KDE of course, and it was (and is) booting via OpenRC, so his majesty might feel I was slightly disloyal (Off With His Head!)

    VHS won the battle, and I guess x86 has too.

  33. ysth

    "follow-up"?

    The purported follow-up post is earlier in the thread.

  34. kirk_augustin@yahoo.com

    Torvalds obviously wrong

    If Torvalds was right, then we should still all be programming for CP/M on a Z80 processor.

    Clearly the x86 register and instruction set really, really sucks.

    It is the worst I have ever seen, and it should never have lasted nearly this long.

    There have always been better register and instruction sets, along with better and faster processors.

    And the whole point of programming is that you write in an abstract language, like C or C++, and it is the compiler that makes up for the actual hardware differences.

    There is not a single reason to stick with x86 hardware at all.

  35. MemRegister

    Moore's law is done.

    Cool stuff will only happen at warehouse-scale machines. By the way, an interesting analogy with regard to that: if my mobile phone were made with 90's technology it would be a machine the size of the Empire State Building. How cool is it to have a machine like that and nobody to call?

  36. James Anderson

    Sitting on the fence.

    Linus does have a valid point as far as kernel development goes. For x86 you configure and compile a kernel once and it will run on any similarly specced x86 hardware anywhere. For ARM you need to configure and build not just for every chipset, but for every board. The flexibility of the ARM design becomes a handicap when deploying an OS.

    On the other hand, most application-level code these days is written in PHP, Python or Java and it really does not matter which platform you deploy to. If you are unfortunate enough to develop in Enterprise Java you will be stuck deploying on the exact same release as you developed and tested on, but you could easily deploy on different hardware if the specific J2EE version was available.

    1. Roland6 Silver badge

      Re: Sitting on the fence.

      >Linus does have a valid point as far as kernel development goes.

      But only for those elements of the kernel that are best written in assembler; if you can write it in C then Linus's argument starts to fall apart. I suspect part of the problem is that much of the Linux kernel is now written in assembler, or in C with data structures that replicate many of those used by the Intel x86 family... But then this problem was solved by Unix, which was also written in C and readily ported to many different processor architectures...

      So I think what Linus is actually saying is: I'm not going to work on anything other than Linux x86; if someone else wants Linux on ARM (or another architecture) then they are free to do it. Which, given the effort necessary to keep just the x86 variant maintained and moving forward, is probably a reasonable decision.

      However, the problem with this approach is that the underlying assumptions subtly change, and soon developers assume Linux is only x86, so code that doesn't need to know what processor it is running on starts to include processor dependencies, making any port to a new processor architecture (or a revised x86 architecture) more difficult.

  37. TechnicalBen Silver badge

    False flag waving?

    Is he putting up the "I surrender" flag, knowing someone else will fight the battle later?

    Say, someone with "pple" in their name, and an A at the beginning?

    ;)

    If Apple go 100% ARM next round, they will have their own ARM kernels. I would assume that, while wanting to help Linux in general run on as much as possible, Linus does not want to be doing Apple's work, putting billions in their pocket, and risking them turning it back on him (though if Linux has no round corners, that's less likely).

    Not a conspiracy suggestion, just a concentration of goals and efforts into what is most needed and provides the low hanging fruit (Apples :P ).

    1. Dan 55 Silver badge

      Re: False flag waving?

      Apple's OSes are BSD based and are already working on ARM.

  38. Børge Nøst

    We're looking into cloud services at the moment, but our conclusion is that if you are only looking at it as a server somewhere else, you're doing it wrong.

    Unless you dig deep and use each and every service available you're probably just paying more for servers compared to what you would do in-house. And services can run on any architecture if you're talking to a network API.

    So I'm not sure Linus is barking up the right tree...

  39. jelabarre59 Silver badge

    Other than Arm

    Might surprise people to know that Arm isn't the only non-x86_64 architecture with active Linux kernel development out there. Power8/9 is actively being worked on too, especially in its 'little-endian' form. Granted, you're not likely to see many 'hobbyist hackers' on ppc64le; it's more the opposite end, with x86_64 being the middle platform.

  40. Ima Ballsy
    FAIL

    What he really said was:

    We spoke about these issues afterwards via e-mail and Torvalds doubled down on the need for ARM PCs. Torvalds said: "my argument wasn't that 'ARM cannot make it in the server space' like some people seem to have read it. My argument was that 'in order for ARM to make it in the server space, I think they need to have development machines.'

    See:

    https://www.zdnet.com/article/what-linus-torvalds-really-thinks-about-arm-processors/

    Sorry El Reg ....

  41. Anonymous Coward
    Anonymous Coward

    One day Linus will get a job in a large computer company and understand what commercial software and system engineering is. Until then, it's just Linus in his bedroom being king of his little patch of turf, making royal decrees - all based on no real world experience.

    1. Anonymous Coward
      Anonymous Coward

      Yet the world runs Linux. I assume you think you don't run any Linux boxes; chances are you have more Linux boxes at home than Windows ones. Granted they are embedded, but Linux nevertheless.

  42. Anonymous Coward
    Anonymous Coward

    Has anybody thought of running code directly on the RISC instruction set that AMD x86s are based on?

    I don't know if this holds true, or still holds true now. Years ago I read that AMD x86s are a RISC architecture that translates x86 (and I assume x86_64) machine code to its own internal format. Now I don't know if I misunderstood (or the article writer misunderstood) the microcode running on the AMD. Which is of course not unique to AMDs.

    If AMDs run some sort of intermediate machine code, can you use that directly? In other words, can you compile to AMD's RISC instruction set directly rather than to x86_64 ASM?


Biting the hand that feeds IT © 1998–2019