Top Microsoft bod: ARM servers right now smell like Intel's (doomed) Itanic

Microsoft is unlikely to use ARM-compatible processors in a meaningful way in its data centers – unless there is a huge change in the software ecosystem around the non-x86 chips, a top Redmond bod told The Reg. Though Microsoft is closely watching developments in the ARM world, it's unlikely that the Windows giant will be one …

COMMENTS

  1. Anonymous Coward
    Anonymous Coward

    ARM is not Itanium; it is a proven architecture. A mature CPU family.

    Itanium was just a crap architecture; as Linus said, they threw away all the good parts of x86.

    1. DainB Bronze badge

      I do remember what crap the Sun Niagara CPUs were in the first years after they were released; half of the time they could not even finish a Solaris install without hanging. So no, ARM is not a proven architecture anywhere outside your mobile, certainly not in servers, and it won't be there for years.

      1. ThomH

        @DainB

        On the contrary, ARM is a proven architecture in a way Itanium definitely wasn't. Itanium bet on very long instruction words (VLIW): each instruction was a compound, bundling an operation for every individual unit in the CPU.

        Famously the Pentium has separate integer and floating point units. If it had used VLIW then each instruction would have specified both what the integer unit should do and what the floating point unit should do to satisfy that operation.

        So the bet was that compilers are better placed to decide scheduling between multiple units in advance than a traditional superscalar scheduler is as the program runs. The job of organising superscalar dispatches has become quite complicated as CPUs have gained extra execution pipes so why not do it in advance?

        The idea of VLIW had been popular in academia but had never succeeded in the market. So: the Itanium architecture was unproven in a way that ARM most definitely isn't.

        In the real world the solution to the problem VLIW tackles has become to cap single cores at a certain amount of complexity and just put multiple cores onto each die.
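
        To make the bundle idea concrete, here is a minimal sketch in C of what a two-slot VLIW "instruction" conceptually carries. The field names and widths are invented for illustration; the real IA-64 encoding packed three 41-bit slots plus a template field into a 128-bit bundle.

            #include <stdint.h>

            /* Toy model of a two-slot VLIW bundle: one operation per
               execution unit, chosen by the compiler ahead of time.  */
            struct vliw_bundle {
                uint32_t int_op; /* what the integer unit does this cycle */
                uint32_t fp_op;  /* what the FP unit does this cycle      */
            };

            /* The compiler must fill every slot each issue cycle; when no
               independent work exists for a unit it emits an explicit
               no-op, wasting issue width that a superscalar scheduler
               could have reclaimed at run time.                         */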

        1. DainB Bronze badge

          Re: @DainB

          "The idea of VLIW had been popular in academia but had never succeeded in the market. So: the Itanium architecture was unproven in a way that ARM most definitely isn't."

          You're completely wrong here. The internals of Itanium have nothing to do with it; the point is that they decided to develop a CPU that did not have any OS or application support and was not compatible with the de-facto commodity server standard that x86 was turning into. HP tried to position it in the enterprise-grade (read: expensive) segment and compete with IBM and Sun, which had the ecosystem, OS and applications, and all three failed to recognise that this segment of the market was rapidly shrinking. AMD, being the second mouse, decided to do the same but keep backward compatibility with 32-bit x86, and this is exactly why we now have CPUs that can run both 64- and 32-bit apps seamlessly.

          If you think that ARM can bite off a piece of Intel's server market, I can easily predict that, as an answer, Intel will simply package a gazillion Atom cores into one CPU, and the market will have to choose between an equally power-efficient CPU with no software whatsoever and a CPU with all the software you can ever imagine. I even know who will win.

          Also, correct me if I'm wrong, but ARM for now has only one advantage over Intel, and that is power consumption (see above about just-as-efficient Atoms, but let's imagine they do not exist). While it's a critical parameter for mobile, in servers it is one of the last things people consider.

          1. Richard Plinston

            Re: @DainB

            > Also, correct me if I'm wrong, but ARM for now has only one advantage over Intel, and that is power consumption

            You are wrong.

            Intel builds chips to suit its own marketing needs. ARM chips can be designed and built by companies to suit _their_ needs. For example chips can be designed to suit a specific workload on servers.

            ARM also has an advantage on price. A few years ago, before significant numbers of smartphones were being sold, ARM said that a billion ARM chips were being made every year and that the cheapest were being sold in quantity for 50 cents each. That quantity has since more than doubled. The Raspberry Pi can be purchased for $25 and that is a complete system.

            1. DainB Bronze badge

              Re: @DainB

              "ARM chips can be designed and built by companies to suit _their_ needs. For example chips can be designed to suit a specific workload on servers."

              You know the difference between the closed architecture of mobile and the open architecture of servers, don't you?

              "ARM also has an advantage on price."

              Over what? Xeon? How about comparing something that can be compared - ARM and Atom. The price difference will be measured in dollars, if at all.

              "The Raspberry Pi can be purchased for $25 and that is a complete system."

              For that money it does not even come with a case and power supply; what complete system are you talking about?

              1. oldcoder

                Re: @DainB

                Doesn't really need a case...

                It comes with everything in a PC - USB, video, memory, CPU, disk...

                It is designed to be an educational tool and, it turns out, can also be used for laptops, entertainment systems, home security systems...

                And even small servers.

              2. Vic

                Re: @DainB

                > The price difference will be measured in dollars, if at all

                On a 50c part?

                Yes, I fear you might be right...

                Vic.

            2. itzman

              Re: @DainB

              I think this is, in the end, the key USP: that ARM can be integrated with other chipsets on die, rather than on-board.

              In the case of servers that means integrated with communications to other chips, machines and to other devices like storage.

              The uniqueness of such a solution doesn't percolate up to the application - just as far as device drivers in the OS and the core OS multitasker.

              So 'generic ARM/linux' apps should run on such hardware once the actual device drivers and OS port have been done for that platform.

              And servers tend to run fewer, more heavyweight apps, where the cost of porting/testing is small compared with the overall cashflow of what is being run on them.

          2. Anonymous Coward
            Anonymous Coward

            Re: @DainB

            "they decided develop CPU that did not have any OS or application support and was not compatible with de-facto commodity server standard which x86 was turning into" (and the rest).

            Very poor analysis. Go to the back of the class and re-review the history.

            The only reason that Itanium sold anything worthwhile AT ALL was that it DID have OS support, largely from HP (and, after a very short while, ONLY from HP - why would a sensible Linux or Windows customer not choose x86-64 over IA64?).

            If existing customers of HPUX and VMS and Tandem NSK (all of which ended up being owned by HP) hadn't been forced to buy Itanium simply to preserve their existing software investment on new boxes, nobody would have cared two hoots about Itanium. Most people didn't care anyway. RIP IA64. RIP HP?

            1. Anonymous Coward
              Anonymous Coward

              Re: @DainB

              The poor analysis is yours. Itanium couldn't run anything but applications recompiled for it; everything else ran in emulation mode. Itanium was available well before x86-64 was, because Intel wanted 64-bit to be Itanium, not x86. But only a handful of applications were available for Itanium. Sure, if you wanted to run Oracle you could, but not much more on that box. Many custom applications should have been ported to Itanium, but it was an investment only a few made.

              Because of this, and because the hardware was expensive, it was appealing only for high-end setups with dedicated large boxes, not for those using x86 servers for a plethora of different tasks and maybe moving older servers to less important ones. So most just stuck with x86, even though it was 32-bit only (which was often adequate except for large workloads), and when AMD was able to force Intel to adopt its x86-64, well, Itanium was doomed.

              In very many ways Itanium followed the fate of the Alpha chip which was one of its main ancestors.

              1. Anonymous Coward
                Anonymous Coward

                Re: @DainB

                "Itanium followed the fate of the Alpha chip which was one of its main ancestors."

                Alpha initially had a compelling architectural differentiation (primarily, 64bit address space, with others less obvious too).

                Alpha initially had a compelling performance advantage (especially for FP-intensive applications).

                Alpha eventually had multiple sources of chips, multiple designers of chips (Samsung were a design and manufacture licensee) and multiple system builders independent of the chip vendor.

                Alpha had multiple operating systems.

                Alpha was in principle as appropriate for a desktop (or a VME/PICMG card) as it was for a high end server.

                Alpha had a technical future; it was canned for political reasons.

                How many of those did Itanium have? (ARM's got all the relevant ones)

                Alpha had an idiot running the company in charge of chip (and most system) development, an idiot who mistakenly chose to trust Intel ("IA64 will be technically far superior to Alpha") and Microsoft ("yes we'll do NT/Alpha properly") rather than trust his own company's people.

                When Itanium eventually arrived it had no compelling architectural difference, no performance advantage, no 2nd source, no ecosystem of chip and system builders, no realistic chance of being a desktop chip (too big and expensive). It had initial backing from HP's VLIW people and Intel's VLIW people. That worked well for them didn't it, once the Itanium cash pile inevitably ran out.

                "Many custom applications should have been ported to Itanium, but it was an investment just a few did."

                Where was the motivation? Many VMS and Tru64 and NSK customers were happy with their existing software and hardware. What did IA64 bring most of them, even if the port was trivial? (can't speak for HPUX).

                IBM still manage to make chips of their own in systems of their own. Not HP though. Even HP's Project Moonshot seems to have gone quiet for now. HP = clueless.

                "when AMD was able to force Intel to adopt its x86-64, well, Itanium was doomed."

                Certainly was. Who could ever have seen that one coming?

                1. Alan Brown Silver badge

                  Re: @DainB

                  Where was the motivation? Many VMS and Tru64 and NSK customers were happy with their existing software and hardware. What did IA64 bring most of them, even if the port was trivial? (can't speak for HPUX).

                  There was a positive disincentive. We decided not to update our Alpha VMS systems to Itanium because it was so fricking expensive. The Alpha boxes are still going, but their disks are now dying, and once they're gone, they'll be gone forever. So much for "VMS until 2036".

                  1. Matt Bryant Silver badge
                    Boffin

                    Re: Alan Brown Re: @DainB

                    "Where was the motivation? Many VMS and Tru64 and NSK customers were happy with their existing software and hardware...." The big incentive for us moving our Alpha-VMS GS1280 Oracle platforms to Itanium-VMS was the ability to use new interconnects like PCI-e and therefore faster SAN and networking. Our production bottlenecks weren't in processing power but in disk and LAN, though the development of the Itaniums meant they also out-performed the Alphas, especially as the later generations upped the core count. I guess from hp's point of view it saved on development costs by having one platform for VMS, hp-ux and NonStop (and RHEL and Windows IA64 versions for a while), and then saved even more by having it in the same blade chassis with the same software tools as their x64 blades.

                    1. Anonymous Coward
                      Anonymous Coward

                      Re: Alan Brown @DainB

                      "The big incentive for us moving off our Alpha-VMS GS1280 ... the ability to use new interconnects like PCI-e and therefore faster SAN and networking."

                      Right. So if VMS had run on AMD64 on Proliant rather than solely Itanium, that would have suited you just fine? Or if Alpha systems had had faster IO (how hard could that be)? But you had no needs that actually architecturally required IA64?

                      "Our production bottlenecks weren't in processing power but in disk and LAN"

                      Right. So if VMS had run on AMD64 on Proliant rather than on Itanium (or even on an Alpha with updated IO), that would have suited you just fine?

                      "though the development of the Itaniums meant they also out-performed the Alphas, especially as the later generations upped the core count."

                      Eventually after spending who knows how much time and money, the IA64s outperformed a set of systems that had had no significant development funding for years. Gee whoda thunk it.

                      If the same amount of money as was eventually wasted on IA64 had been invested either on Alpha (chips and systems) or on porting VMS (and Tru64? and NonStop) to selected AMD64 Proliants, where would HP be today? HP themselves would probably still be in the sh1t, so let's think about where ex DEC and ex Tandem customers would be :)

                      As it happens, HP have had to finally admit that NonStop will end up on AMD64 [1].

                      "saved even more by having it in the same blade chassis with the same software tools as their x64 blades."

                      Saved earlier too. But no, IA64 was going to be the industry standard 64bit architecture.

                      It had to be true, Intel said it. Oh wait. As any long term observer knows, that's Intel, the x86 company.

                      So, they wait till IA64 inevitably collapses. Then they do what should have been done a long time previously, effectively admit that IA64 was nothing more than an extended field test paid for by customers. And move on.

                      HP = clueless.

                      [1] NonStop on x86

                      Here: http://www.theregister.co.uk/2013/11/04/hp_to_port_nonstop_to_x86/

                      HP press release: http://www8.hp.com/us/en/hp-news/press-release.html?id=1519347#.UtK81PtNwSk

                      1. Matt Bryant Silver badge
                        Facepalm

                        Re: AC Re: Alan Brown @DainB

                        "....So if VMS had run on AMD64 on Proliant rather than solely Itanium, that would have suited you just fine?...." Yes, and it would probably have meant cheaper hardware, though the software support and application costs would still remain the same. Scaling x64 socket-wise may have been an issue for hp (or Compaq), whereas Itanium was already 16-socket-ready with Merced and soon 32-socket-capable. I pushed the original AMD64 chips for Linux hard when they came out but, TBH, AMD seem to have lost their way a bit recently and Intel's Xeon currently walks all over them for the majority of uses.

                        "....Or if Alpha systems had had faster IO (how hard could that be)?...." With the declining Alpha market it would have meant the cost of Alpha development and product could only increase. That shrinking market meant the motherboard development for Alpha to include new memory and interface technologies was simply going to be giving a smaller return. Hp Integrity was not only faster but also cheaper (though nowhere near as cheap as x64-based servers), and the wider market meant it spread the development cost for hp. SImple economies of scale.

                        "....Eventually after spending who knows how much time and money, the IA64s outperformed a set of systems that had had no significant development funding for years....." Not true. Itanium2 absolutely walked all over even EV7 in every test we did. Even from an early stage there were performance cases where hp-ux on Merced was faster than VMS on Alpha. Alpha's early big commercial advantage was in clustering, but Serviceguard (and Oracle RAC) largely removed any advantage long before Alpha was killed. Sorry if you are one of those fanatical Alpha dinosaurs that missed the news but the hp-ux market was bigger than the VMS and Tru-64 markets combined, and since it would have been far easier to port VMS to Itanium than hp-ux to Alpha (even if Alpha hadn't been killed by Compaq before the merger), hp simply looked at the market figures and went with hp-ux which meant Itanium.

                        ".....HP have had to finally admit that NonStop will end up on AMD64...." There is very little point in hp trying to put either VMS or future hp-ux up against commercial Linux on x64. I see it is unlikely any of the UNIX vendors (even IBM) will try and push their commercial UNIX variants on x64, not after they have seen Sun choke trying the same. But NonStop still offers an unique capability that commercial Linux cannot match, and does so into a cash-rich niche that will allow the resultant deals to carry a higher margin than standard x64 offerings, so hp is probably happier to let that go up against Linux in direct competition. You do understand what NonStop does, right?

                        "....HP = clueless...." LOL, they seem to be alot more alive than old MIPS competitors, or Digital or Sun. Try not to be so bitter. You may have noticed that hp are developing ARM kit too....? Oh, sorry, you were probably too busy wallowing in memories of glories past.

                        /SP&L

                        1. Roo

                          Re: AC Alan Brown @DainB

                          ""....Eventually after spending who knows how much time and money, the IA64s outperformed a set of systems that had had no significant development funding for years....." Not true. Itanium2 absolutely walked all over even EV7 in every test we did."

                          I can believe that, but it also proves the man's point that the EV7 had suffered from a lack of development. EV7 taped out in 2001 (it was 2 years late under HPaq's tender loving care, the core was the same as the 1996 vintage EV6), HP shipped boxes in 2003 (fabbed at 180nm).

                          McKinley (Itanic 2.0) taped out in 2002 @ 180nm, so I would expect it to stomp all over a 1999 design with less than half the cache.

                          Madison (Itanic 2.1) shipped in 2003 @ 130nm. If Intel couldn't beat a 1996-vintage core fabbed on a bulk process at that point, they may as well have quit the business.

                          I think people forget just how heavily delayed EV6 and its successors were (legal action, corporate take-overs). Given the difficulties and constraints faced by the Alpha's engineers, I think it is a miracle that they remained competitive for as long as they did.

                2. JEDIDIAH
                  Linux

                  Re: @DainB

                  The question isn't one of portability. If you build your platform tools on a particular architecture, then porting apps is a relatively trivial affair. This is how different Unix platforms feed off each other: they are all very similar, and a relatively easy port compared with something else entirely.

                  The real problem is one of commitment. Microsoft had an Alpha port. Sun had an early x86 port. Neither of those was really supported as well as they should have been, so they kind of languished and deteriorated.

                  That's why Windows ARM tablets are so tricky. There's no software for them. The 30 years of Win/DOS legacy apps aren't there.

                  3rd parties have to be willing to port stuff. Microsoft saying "make it so" won't change anything. Although there isn't any good reason that Microsoft's (or Oracle's) flagship apps couldn't be ported to whatever-on-ARM.

                3. This post has been deleted by its author

                4. Destroy All Monsters Silver badge
                  Headmaster

                  Murder most foul: A game of ... processor architectures

                  AC says: Alpha had an idiot running the company in charge of chip (and most system) development, an idiot who mistakenly chose to trust Intel ("IA64 will be technically far superior to Alpha") and Microsoft ("yes we'll do NT/Alpha properly") rather than trust his own company's people.

                  Let's be precise here. This is a rich and deep history, filled with tragedy:

                  0) 1989: HP investigates VLIW instruction sets, called EPIC

                  1) February 25, 1992: Alpha architecture mentioned for the first time; Alpha is on the front page of CACM, February 1993 (the test system pulled 1kW as I remember);

                  2) Serial Big Mistakes by DEC under Kenneth Olsen in marketing Alpha (and anything else);

                  3) DEC is increasingly going pear-shaped, sheds divisions;

                  4) HP partners with Intel to build the IA-64 as it judges that proprietary microprocessors are not the future; goal is to deliver "Merced" by 1998;

                  5) May 18, 1998: After a hard patent battle with Intel, DEC agrees to support the IA-64 architecture and Intel agrees to manufacture DEC's Alpha processors. Sour Alpha engineers leave for AMD and Sun. Intel has acquired StrongARM from DEC, but those engineers also leave;

                  6) The rump of DEC gets acquired by Compaq, which has only a modicum of interest in Alpha. They are into growing their PC market.

                  7) Compaq's PC shit deflates bigtime, leaving unsold inventory rotting at the docks; Eckhard Pfeiffer is ousted. CEO Michael Capellas is uninterested in Alpha, stops development of NT on Alpha on August 23, 1999. Samsung and IBM sign Alpha manufacturing deals with Compaq.

                  8) March 2000: Easy-money fuelled dotcom bubble bursts, Greenspan unrepentant, gives a f*ck.

                  9) June 25, 2001: Compaq announces complete shift of their server solutions from Alpha to IA-64 architecture by 2004. The Alpha Microprocessor Division is disbanded and moved to Intel. Samsung and IBM stop producing Alphas. Andrew Orlowski had one or two good ones on Don Capellas' Alphacide back in the day.

                  10) June 2001: "Itanium" (i.e. Merced) released by Intel; where are the compilers?

                  11) September 3, 2001: Hewlett-Packard announces its intention to acquire Compaq (because HP's Carly Fiorina couldn't acquire PricewaterhouseCoopers for $18 billion USD, so this must be a displacement activity). A long battle to convince the HP shareholders that this is actually a good idea begins.

                  12) September 11, 2001: Bush is minding his own business cutting shrubberies in Crawford, Texas, when suddenly a great call for Glorious Presidency is heard across the soon-to-called-thus "Homeland". God bless!

                  13) October 21, 2001: API (the workstation manufacturer) throws in the towel on Alpha systems and transfers all rights to support Alpha systems (including warranty service) to Microway.

                  14) May 2002: Merger of HP and Compaq is given the go-ahead. Don Capellas becomes president of the post-merger HP, but soon moves on to the burning crater of MCI Worldcom (USD 11 billion of fraud, a record back then) on November 12, 2002, to lead its acquisition by Verizon.

                  15) In August 2004, the last Alpha processor was announced.

                  THE END!

              2. Anonymous Coward
                Anonymous Coward

                re: Itanium failure

                IIRC Itanium platforms were very expensive, so limited market.

                That's what doomed it to failure IMO.

                1. Anonymous Coward
                  Anonymous Coward

                  Re: re: Itanium failure

                  Cooling and power consumption too, plus crap slow x86 emulation.

            2. Joe Montana

              Re: @DainB

              IA64 had pretty good Linux support, and if your workload was entirely based on open source software then there was no technical reason you couldn't run it on IA64... If you depended on any closed source software then IA64 was typically not an option, as most closed-source vendors would not port their stuff to IA64.

              The problem boiled down to price, all of the IA64 hardware that was available cost more and consumed more power than comparably performing x86 and x86-64. I would have seriously considered IA64 for my workloads had it been price competitive with x86.

              For ARM this doesn't need to be a problem, if they can make servers which are competitively priced then they should sell just fine.

              1. Anonymous Coward
                Anonymous Coward

                Re: @DainB

                "if your workload was entirely based on open source software then there was no technical reason you couldn't run it on IA64"

                AND your workload and organisation were the kind where in-house support was considered appropriate. So by that logic it might fit some scientific stuff, but maybe less so with mainstream commercial stuff, Orrible, etc, where the PHBs are more comfortable having someone they can call, someone with an SLA, etc (even if it's a useless SLA).

                "For ARM this doesn't need to be a problem, if they can make servers which are competitively priced then they should sell just fine."

                Absolutely. High-volume server users (the likely interested ones in the early days) aren't going to be at all fazed by having in-house support for their business-critical software. And as time goes by, we'll get the same kind of momentum building up around ARM Linux servers as Microsoft used to have with their network of Certified Microsoft Dependent Systems Engineers and such.

                Interesting times.

        2. Gordan

          @ThomH

          "So the bet was that compilers are better placed to decide scheduling between multiple units in advance than a traditional superscalar scheduler is as the program runs. The job of organising superscalar dispatches has become quite complicated as CPUs have gained extra execution pipes so why not do it in advance?"

          The problem is that most compilers are really crap, and even when they aren't, most developers aren't educated enough to take advantage of them.

          See here for an example of just how much difference a decent compiler makes:

          http://www.altechnative.net/2010/12/31/choice-of-compilers-part-1-x86/

          A similar problem was faced back in the day of the Pentium 4, which had a reputation as a very poor performer when in fact, with a decent compiler, it outperformed the Pentium 3 by a considerable margin.

          Or to put it differently - it wasn't a crap processor; it was the software developers who were too incompetent to use it properly.
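
          As a concrete (if small) illustration of the sort of code where the compiler makes the difference: a loop like the one below compiles to scalar instructions at low optimisation levels, while something like "gcc -O3 -march=native" may auto-vectorise it with SSE/AVX. The function is a generic textbook example, not taken from the linked article.

              /* saxpy: a textbook auto-vectorisation candidate.  Compare the
                 assembly from "gcc -O2 -S saxpy.c" with that from
                 "gcc -O3 -march=native -S saxpy.c".                       */
              void saxpy(int n, float a, const float *x, float *y)
              {
                  for (int i = 0; i < n; i++)
                      y[i] = a * x[i] + y[i];
              }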

          1. Lennart Sorensen

            Re: @ThomH

            The P4 was a bad design. It was obviously designed to aim for the biggest clock speed number, because that's what intel marketing wanted. The fact that it was lousy at running existing x86 code that had been optimized following intel's own recommendations didn't matter to intel. As long as consumers were buying the machine with the biggest GHz number, intel was happy with the P4. Only when they ran into leakage and overheating problems, and discovered they wouldn't be able to scale to 10GHz like they had planned, did they throw the design away and start over from the Pentium-M/Pentium 3 design, creating the Core 2 by improving on the older design. Only the new instructions from the P4 were carried over; netburst was dead, and deservedly so. The Opteron/Athlon64 destroyed the P4 in performance on typical code at a much lower clock speed and power consumption. intel has made a number of stupid blunders over the years, but they do eventually admit when things don't work and recover quite well by changing course, and they have the resources to pull it off. x86 will be around from intel long after the itanium is gone.

      2. Dan 55 Silver badge
        Coffee/keyboard

        @DainB

        Leaving the Itanic distraction aside (when MS can't do something, they badmouth it) and going back to "ARM is not a proven architecture anywhere outside your mobile"...

        http://www.theregister.co.uk/2012/06/01/acorn_archimedes_is_25_years_old/

        "it felt like the fastest computer I have ever used, by a considerable margin"

      3. Anonymous Coward
        Anonymous Coward

        Acorn Archimedes?

        If there's any problem with ARM it is the fact that every SOC is different.

        1. Richard Plinston

          > If there's any problem with ARM it is the fact that every SOC is different.

          And when you look at x86 _systems_, which include GPU, RAM controllers, network, USB and other controllers, you find that every _system_ is different.

          That is why there are hundreds of device drivers.

        2. Charles Manning

          "If there's any problem with ARM it is the fact that every SOC is different."

          If there is a problem with Intel, it is the fact that Intel SOCs don't exist.

          The whole reason all those ARM SOCs do exist is because ARM is easy to design around and is worth designing around.

          The reason Intel SOCs don't exist is that Intel keeps all its designs private and plays "Father knows best".

          Market forces determine the rest...

      4. Voland's right hand Silver badge

        It is "outside mobile"

        Quote "outside your mobile" - it is actually used more outside the mobile. It is nearly everywhere around us. Arm MCs are now so cheap that nobody bothers with the "smaller, more embdded and more efficient" varieties as they are more expensive. Your hard disk is ARM, your car ECU if it is made in the last 5 years is probably arm too (it used to be PPC). Your fridge MC, your washing machine, your dishwasher controller - you name it. The bloody thing is everywhere. The only place where it is still not dominant is home routers and wifi APs - that is the sole MIPS holdout.

        It is a proven architecture provided that you are happy to move your workload around as _SOURCE_. That is what Microsoft does not like here - you cannot move a binary workload efficiently from ARM to ARM. You either need an optimised runtime per hardware version (as in Android) or you need to recompile. While the basic instruction set may be the same (or maybe not - the Razzie being an example of a violation, in its FPU aspects), all offloads and all accelerations are very hardware-specific. Just look at the various /proc files and the irq list on an ARM device on Linux and weep (I do it regularly when my Debianized chromebook pisses me off and I need to debug something on it).

        As far as ARM 32 vs 64 goes, the difference is not that staggering - it is an evolutionary step, same as amd64 vs i386. Considering that 64-bit is already going into high-volume devices, I would not expect that to be an issue with regard to ARM acceptance and overall architecture stability anyway.
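
        To illustrate the "move it as source" point (the cross-compiler below is Debian's stock toolchain package; other distros name theirs differently):

            /* portable.c - the same source builds for either architecture:
                   gcc -O2 portable.c -o portable          (x86-64 host)
                   arm-linux-gnueabihf-gcc -O2 portable.c -o portable.arm
               The binaries are per-architecture; the source is not.     */
            #include <stdio.h>

            int main(void)
            {
                printf("same source, any architecture\n");
                return 0;
            }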

      5. Lennart Sorensen

        The Sun Niagara is a Sparc, not ARM. Sparc itself is fine, the Niagara design, not so much.

        All the ARM chips so far have been perfectly sane. Itanium was not a sane design for a general purpose CPU: it assumed compilers would become able to do compile-time scheduling of parallel instructions, and that didn't happen. I vaguely recall seeing a paper a few years ago that actually proved it can't be done, so what intel hoped for is actually impossible, if I recall correctly. And if I recall incorrectly, it is still a very hard problem that has not been solved. So as it stands, the itanium is a terrible CPU design and rightly deserved to die. It is an enormous shame that it caused MIPS to give up designing high end chips, made the Alpha go away, and certainly hurt sparc (powerpc seems to be doing OK). I don't personally miss PA-RISC.

    2. justincormack

      Most of these people are talking about ARM64, which is actually a brand new RISC architecture with little connection to ARM as we know it; it is even a separate Linux port, not part of the ARM tree, although Linus did want them to merge it.

      1. Lennart Sorensen

        ARM64 (well, aarch64) is very much a 64bit extension to the existing ARM 32bit design, a lot like AMD extended x86 to 64bit. All 64bit ARM chips are perfectly able to run existing 32bit ARM code, and a 64bit ARM linux system will also run 32bit ARM applications with no changes or recompile needed. This is not a new thing. Sparc went from 32 to 64bit, PowerPC did it, Mips did it, x86 did it, PA-Risc did it, and now ARM is doing it. Nothing complicated about extending an architecture from 32 to 64bit while maintaining backwards compatibility. Only a few architectures were 64bit from the start (Itanium and Alpha are the ones I can think of).

    3. Anonymous Coward
      Anonymous Coward

      > It's not Itanium, it is a proven architecture. A mature CPU family.

      PowerPC was, too. Apple had to shift to Intel CPUs for its PCs...

      The problem is not whether ARM is a good CPU or not (Itanium was too); it is whether it can gain enough support, and then market share, in the data center.

      x86 is not the best CPU design around - someone remembers when it was common to say "RISC CPUs are far better than Intel CISC ones! They are the future!", and we're still on x86 - but it gained so much support that it's the most widely used architecture around.

      Can ARM succeed where MIPS, Alpha, PowerPC, Itanium and others failed? It has nothing to do with "superior technology" only.

      1. Hi Wreck
        Thumb Up

        Re: LDS

        Intel machines translate most instructions into a more RISC-like internal architecture. Note that the L1 instruction cache stores pre-decoded instructions. This is a hint. It is truly amazing that a register starved architecture can perform so well. Kudos to the Intel engineers.

        1. Alan Brown Silver badge

          Re: LDS

          "Intel machines translate most instructions into a more RISC-like internal architecture. "

          Which does make you wonder what could be achieved if the internal architecture were exposed directly.

        2. Anonymous Coward
          Anonymous Coward

          Re: LDS

          Yes, and you have additional circuitry to perform that translation. Compilers still have to optimize code taking into account the standard registers and available instructions - and even if the CPU can work with shadow registers to perform some "magic", compilers can't, because they don't know how many are available or how the translation works. And you can optimize for a given processor model only if you are 100% sure your application will run on it alone, and that you can recompile it when it is upgraded.

          Sure, Intel has been able to fend off competition by improving CPU performance anyway, but it led to big, power-hungry, hot processors.

        3. Anonymous Coward
          Anonymous Coward

          Re: LDS

          "It is truly amazing that a register starved architecture can perform so well."

          It is indeed.

          AMD64 and its Intel clone have twice as many registers as legacy x86. A decent x86-64 Linux will use them. Don't know about Windows.

          The trendy new(ish) ARM 64bit architecture also has twice as many registers as previous ARMs (which already had more than legacy x86).

          1. Anonymous Coward
            Anonymous Coward

            Re: LDS

            Maybe you don't know that RISC architectures always had lots of registers, more than the sixteen x86-64 has. Even 1990s RISC processors had at least 32 registers.

            Any Intel 64-bit compiler - Windows, Linux, Apple, BSD, etc. - uses all the available *documented* registers; what you can't rely on are undocumented features. Even if the processor has more registers on silicon and uses them for internal optimizations, still no compiler can access them. And a compiler can perform a far deeper analysis of code than a CPU could while executing it.

            One of the reasons is the instruction format. You need a way to encode into an instruction which register(s) it refers to. Intel x86 uses an instruction encoding where some bits in one of the bytes encoding the instruction identify the registers. Also, due to compatibility needs, there's not much freedom to add many registers, even if the hw design allows for it. RISC processors use a different (and simpler, thus Reduced) instruction set and encoding. The drawback is that they may need more instructions to execute a given task.

            But today processors are so complex the number of general purpose registers (there are many specialized others) is not the only performance metric.
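
            To put numbers on the encoding point: most x86 instructions name their registers in the ModRM byte, which has only two 3-bit register fields - hence the original eight GPRs. AMD64 had to bolt a REX prefix onto the encoding to supply a fourth bit for each field, which is how it reached sixteen. A minimal sketch of pulling those fields apart:

                #include <stdint.h>
                #include <stdio.h>

                int main(void)
                {
                    uint8_t modrm = 0xC3;            /* 11 000 011 in binary         */
                    unsigned mod = modrm >> 6;       /* 2 bits: addressing mode       */
                    unsigned reg = (modrm >> 3) & 7; /* 3 bits: register no. 0..7     */
                    unsigned rm  = modrm & 7;        /* 3 bits: register/memory 0..7  */
                    /* on AMD64, REX.R and REX.B each extend reg/rm by one bit */
                    printf("mod=%u reg=%u rm=%u\n", mod, reg, rm);
                    return 0;
                }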

        4. Adrian 4

          Re: LDS

          Makes you wonder how good a job they could do if they weren't carrying legacy that goes back to the 8080. X86 should have died long ago.

        5. Charles Manning

          Re: LDS

          It is truly amazing that this on-the-fly translation works so well.

          It also uses tons of transistors which have to be toggled at great speed. That's one of the reasons Intel sucks so much power.

        6. Roo

          Re: LDS

          "This is a hint. It is truly amazing that a register starved architecture can perform so well. Kudos to the Intel engineers."

          There are some very bright people working over at Intel, but I think you should really be giving the credit for the micro-op approach to AMD's Opteron. The register starvation problem is tackled by register renaming - which you can thank IBM's System/360 for... Interestingly, some of the folks working on the Opteron were refugees from the Alpha; I guess they had their revenge on Itanic in the end. Cornering Intel into making the Itanic obsolete with Xeons is quite a trick. ;)
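
          For anyone wondering what renaming actually buys: two back-to-back writes to the same architectural register are only a false (write-after-write) dependency, and pointing each write at a fresh physical register dissolves it. A toy model in C (invented names, nothing like real hardware tables):

              #include <stdio.h>

              int main(void)
              {
                  int phys[4]; /* tiny physical register file                */
                  int map_r1;  /* which physical reg currently holds "r1"    */

                  map_r1 = 0;  /* "r1 = 2 + 3" is given a fresh physical reg */
                  phys[map_r1] = 2 + 3;

                  map_r1 = 1;  /* "r1 = 4 + 5" would be a WAW hazard on r1;
                                  a different physical reg removes it, so
                                  both sums could have run in parallel       */
                  phys[map_r1] = 4 + 5;

                  printf("r1 -> phys[%d] = %d\n", map_r1, phys[map_r1]);
                  return 0;
              }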

      2. Alan Brown Silver badge

        someone remembers when it was common to say "RISC CPUs are far better than Intel CISC ones! They are the future!",

        x86 were and are amongst the least efficient chips out there, in terms of clocks per instruction and in terms of power consumption.

        However, they were _far_ cheaper than the competition and as such became ubiquitous. In the end, that's what counted more than anything else.

      3. Lennart Sorensen

        Apple changed to x86 because no one was making powerpc chips that fit their needs. IBM was making high end server chips which used too much power for a desktop, and freescale was making embedded chips which were too slow for what the desktop needed. Nothing wrong with PowerPC itself; the server and embedded markets were simply vastly more interesting (and vastly larger) than tiny little Apple's measly desktop market. It is the same reason Apple moved from m68k to PowerPC in the first place: m68k wasn't getting faster any more.

      4. Lennart Sorensen

        Also, MIPS never tried; SGI bought into the Itanium idea and killed development. It still does well in embedded markets, where almost all wireless routers are MIPS based although a few are ARM. Alpha (owned by compaq, owned by HP, then sold to intel) was killed off, and had failed because digital had priced it out of the market to protect the VAX market, which eventually was killed off by competition from everyone else instead. PowerPC hasn't failed; it does great in the markets that use it (lots of engine computers in cars are powerpc, as are lots of other embedded systems, and IBM has rather nice servers). Itanium failed because it was slow and stupid.

    4. Anonymous Coward
      Anonymous Coward

      There are good parts to x86? :-O Having done assembler in Z80, 6502, 68K, DEC, MIPS and x86, I'd say x86 was a bit of a nightmare, closely followed by Z80.

      The 68K assembler was an architecture designed for programmers, x86 wasn't. RISC is beautifully simple, x86 isn't...

    5. Dave Lawton
      Holmes

      X86 architecture

      If there are good bits to the X86 architecture, where are they hiding, please?

  2. Anonymous Coward
    Anonymous Coward

    Dinosaur MS

    "It's a new technology"

    Whut? Has this dimwit been living under a rock for about the last 25 years?

    "If ARM chips do come to the data center, then it's likely Microsoft will fiddle with and perhaps embrace them,"

    Lock 'em down, restrict and ruin 'em. A la Windows RT and SecureBoot.

    The one thing MS cannot stand is the concept of competition.

    1. pierce
      Paris Hilton

      Re: Dinosaur MS

      The basic ARM instruction set has been around a long time, but there is a lot more to a server than just the instruction set; the total infrastructure for a full-blown ARM server is quite new. Windows needs a whole new HAL to support its memory management, DMA, IO bus enumeration and bootstrap sequence.

      1. Lennart Sorensen

        Re: Dinosaur MS

        In the server area, most people really don't care about Windows. Serious server users run linux and ARM will run that just fine already.

        1. Matt Bryant Silver badge
          FAIL

          Re: Lennart Sorensen Re: Dinosaur MS

          In the server area, most people really don't care about anything but Windows. Serious business users run Windows on x86 just fine already.

          There, fixed it for you by adding a bit more Worldly perspective and a lot less geekboi invective. You may choose to believe otherwise, but maybe you should try reading a few IDC or Gartner reports before you pretend "real server users" are not using Windows. Or you could just peruse a few El Reg articles on the market, such as TPM's review of the analysts figures, which show how M$ is still the business platform of choice and growing in share. Enjoy!

    2. Anonymous Coward
      Anonymous Coward

      Re: Dinosaur MS

      >" "It's a new technology"

      > Whut? Has this dimwit been living under a rock for about the last 25 years?

      ARM servers are a "new" technology - not the chip itself.

      SecureBoot is a good technology. Look at the ATMs hacked by booting them from a rogue USB stick. If they could boot only signed code, it could not have happened. If you want a secure machine, you need a trust chain from the boot process onwards.
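
      A toy sketch of that trust chain (a checksum stands in for a real RSA/ECDSA signature check, and in real firmware the trusted digest is baked into ROM rather than computed on the spot as here):

          #include <stdio.h>
          #include <stddef.h>

          /* stand-in for a cryptographic signature check */
          static unsigned digest(const unsigned char *p, size_t n)
          {
              unsigned h = 0;
              while (n--) h = h * 31 + *p++;
              return h;
          }

          int main(void)
          {
              const unsigned char next_stage[] = "bootloader-image";
              unsigned trusted = digest(next_stage, sizeof next_stage);

              /* each boot stage verifies the next before jumping to it;
                 the next stage repeats the check for the stage after it */
              if (digest(next_stage, sizeof next_stage) != trusted) {
                  puts("image not trusted - refusing to boot");
                  return 1;
              }
              puts("image trusted - handing over control");
              return 0;
          }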

      1. oldcoder

        Re: Dinosaur MS

        It would have been simpler to just disable USB booting...

        And that capability WAS already in the BIOS.

        1. Anonymous Coward
          Anonymous Coward

          Re: Dinosaur MS

          No, because if you can reboot the machine you may be able to turn USB booting back on. Even BIOS passwords may be reset. In many situations where security is of paramount importance, being able to ensure that, no matter what, only allowed code is executed is essential.

          I understand that for someone who wants to boot whatever he likes and try different OSes it is something very bad (and they should be able to buy devices allowing it), but in many environments where this is not allowed anyway, and is a real danger, being able to enforce what code is allowed to run is a basic, needed feature.

          Frankly, if for my company I can buy devices users can't tamper with from boot onwards, I'll buy them. With your home PC you should be able to do whatever you like, but with the company one you can't, and I'll do my best to ensure you really can't.

          This kind of technology can be used to lock in customers, true, but that should be a matter for anti-trust rules, because the technology itself is both good and bad - it can be used to enhance security or to lock you out. Otherwise it's like saying the Internet is bad because there are pedophiles.

        2. Roo

          Re: Dinosaur MS

          I would have been happier with something a lot more 'old school', ie: a special slot (eg: SD card, USB stick) which, when populated, will boot no matter what - overriding whatever the boot order in the BIOS is. You can secure that with a glue gun and be done with it, without introducing a relatively *complex*, patent-encrusted layer that will most likely defy independent audit.

      2. JEDIDIAH
        Devil

        Re: Dinosaur MS

        > SecureBoot is a good technology. Look at the ATMs

        ...which is the perfect reason to force this on consumers buying ARM based tablets.

        1. Anonymous Coward
          Anonymous Coward

          Re: Dinosaur MS

          Sure - if tablets want to be professional devices and not consumer only, they need it as well.

    3. Anonymous Coward
      Anonymous Coward

      Re: Dinosaur MS

      ARM is a new technology in the server space; there have, up until very recently, been no ARM servers. ARM64 is also very new indeed. ARM SOCs are all very different from each other, meaning OS code for one ARM does not always run on another.

      So, as an ARM fan (I've used it since the Archimedes), I'm only going to be specifying ARM servers for a datacentre when an equivalent of the Proliant is running ARM. This is not looking like it's going to be too soon, what with the only dedicated ARM server startup that I know of just folding.

  3. Anonymous Coward
    Anonymous Coward

    ""It's a new technology, but where is it going to be disruptive? A big challenge ARM has is what workloads are you going to run on it," Neil told us."

    Microsoft has such a great track record of jumping on what will be popular and giving the customer what they want. WP, Surface, Windows 8.x, etc. are all examples of where they missed the mark. There are plenty of others. Maybe Microsoft should be disruptive and, while they are looking for a new CEO, find new CxOs and upper management across the board.

    1. Anonymous Coward
      Anonymous Coward

      Zune, Tablet PC, Vista, ME, Win CE, Bob, Clippy, MSN, IE (all pre-10 versions), XBox (all post-original), Games for Windows Live, that monopoly abuse case....

  4. Charles Manning

    Translation from MS speak

    "We don't have a Microsoft server product for ARM and it would be a huge deal trying to build one. Anyone running ARM servers is going to run *nix anyway. When this catches on we're screwed. So let's just do what we always do: throw FUD around and hope some sticks."

    If MS product developers were half as good as their FUDders, MS would still be a great company.

    1. Matt Bryant Silver badge
      Facepalm

      Re: Laughie Charlie Re: Translation from MS speak

      Yeah, the only reason all the datacenters in the World haven't kicked their old Wintel kit to the curb is because there are no Linux ports for ARM or commercial ARM server manufacturers, right? Oh, hold on a sec - there are both! They just don't offer as good options for applications, performance or functionality as the current Wintel (or Lintel) offerings. Having quite comfortably survived the onslaught of Linux (haven't the prophets of doom been saying Linux was supposed to have killed Windoze every year for like the last ten years?), M$ are probably quite comfortable in the server market right now. I suspect their main worry regarding ARM is more the client/desktop market.

      1. Spasticus Autisticus
        Facepalm

        MS don't know their arse from their elbow

        A top MS guy once said he couldn't see why anyone would need more than 1MB (one megaf*ck*ngbyte!) of RAM. Also, MS didn't think this new Internet thing would take off, so let's copy AOL & CompuServe and have our own connected service - that went well. Idiots.

        1. heyrick Silver badge

          Re: MS don't know their arse from their elbow

          "A top MS guy once said he couldn't see why anyone would need more that 1Mb (one megaf*ck*ngbyte!) of RAM."

          Times change.

          ARM itself - the original ARM had the PC and PSR combined, which gave a maximum addressing range of 64MB, and the MEMCs were each able to address a maximum of 4MB (more would require multiple MEMCs - quite a rare setup). Why? Because 4MB was a luxury back then, with a price to match.

          Shame - I thought LDMFD R13!, {Rx-Ry, PC}^ was a beautiful instruction.
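
          The arithmetic behind that limit, for anyone counting along - a 26-bit program counter can only address 2^26 bytes:

              #include <stdio.h>

              int main(void)
              {
                  unsigned long range = 1UL << 26;  /* 26-bit PC in R15 */
                  printf("%lu bytes = %lu MB\n", range, range >> 20); /* 64 MB */
                  return 0;
              }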

      2. JEDIDIAH
        Mushroom

        Re: Laughie Charlie Translation from MS speak

        Linux killing Wintel? I think you have that backwards. It was NT that was supposed to kill Unix in the server room. People were saying that back in the 90s. So how did that go for you?

        Linux doesn't need world domination. The fact that Linux makes the world safe for other Unixen is an incredibly good thing. Those hacked ATMs are the perfect example why. The world needs to be kept safe from totally crapulent monopolies.

        1. Matt Bryant Silver badge
          WTF?

          Re: JEDIDUH Re: Laughie Charlie Translation from MS speak

          "....It was NT that was supposed to kill Unix in the server room. People were saying that back in the 90s. So how did that go for you?...." So you slept through the bit where the UNIX market has been in continual decline over the past twenty years due to the inroads of Wintel/Lintel? I use to work in datacenters in the '90s with a strict policy of "no Windows in our UNIX datacenter" - they're all running majority Wintel/Lintel now.

          "....The fact that Linux makes the world safe for other Unixen is an incredibly good thing....." Don't be silly, Linux was one of the prime killers of SPARC-Slowaris. When presented with the choice of having to learn new skills and go with Wintel, most UNIX sysadmins chose the more similar Lintel option if they could. It was self-preservation on their part.

        2. Anonymous Coward
          Anonymous Coward

          Re: Laughie Charlie Translation from MS speak

          Because if you can boot from a different device you can't hack a Linux machine? No OS is safe if you can boot something else... and if your encryption keys are not saved in an antitamper device which can perform validation on-chip.

        3. Anonymous Coward
          Anonymous Coward

          Re: Laughie Charlie Translation from MS speak

          @Jeddiah:

          Windows did kill the stranglehold of big iron unix in the datacentre; from the early 90s onwards practically nobody ran file and print from UNIX, it just wasn't worth it. Linux then came along and has continued to kill big iron UNIX; very few people run UNIX for new workloads now, if they can possibly avoid it - the bang for buck simply isn't there. There are a large number of FTSE 100 companies whose strategic platforms are virtualised Linux or virtualised Windows; anything else is tactical.

          Also - those hacked ATMs are nothing to do with the software and everything to do with piss-poor hardware design and implementation. You fall into a classic trap if you think they only got hacked because they run Windows; I've seen many a Linux enthusiast not properly secure his or her machine because they believe it to be inherently secure.

          1. Anonymous Coward
            Anonymous Coward

            it is the bugs in Windows

            Actually, it is the Windows-based ATMs that can be hacked via vulnerabilities in the Windows software, not the hardware.

            1. heyrick Silver badge

              Re: it is the bugs in Windows

              I beg to differ. Windows is lacking in some forms of security, which makes it an easy target in stories like this; however, a few years back my previous web host was compromised and a little zero-size iframe was added to the bottom of every HTML file. The server? NetBSD, and I guess it wasn't kept up to date. Anything can be pwned if there is incentive; to think otherwise is just dumb.

      3. Joe Montana

        Re: Laughie Charlie Translation from MS speak

        There isn't much availability of ARM in the server market, believe me i've been looking...

        I can buy a proper 1U x86 box with a quad-core CPU and lights-out management for a few hundred; for ARM I have a choice between phones, dev boards and expensive boxes with lots of CPUs from the likes of Calxeda. Where are the sub-£1000 1U ARM rackmount servers?

    2. csumpi
      Paris Hilton

      Re: Translation from MS speak

      Yeah, those idiotic MS developers. All they worry about is stupid things like backwards compatibility. Making sure that you can still run your programs after a major OS update. Who would need to run a piece of software from 5 years ago?

      1. Eeep !
        Coat

        Re: Translation from MS speak

        Still use an application called "Unique Filer" from 2000 (well, the About info says 2000) and that has run happily on Win98, WinXP, Win7 and Win8.1. The installer gets a bit unhappy, but the core application still runs no problem.

        That's from the binary of 2000, not an upgrade created to support a new OS release; and the Win98 machine was 32-bit while the Win8.1 one is 64-bit. And the number of machines it's worked on during that time requires at least three hands' worth of digits.

        If the developers want me to pay them more money I'd be happy to, because it seems to work better than the applications I try that might replace it.

        1. Joe Montana

          Re: Translation from MS speak

          I still use 'xv', which was written in 1994... Because it comes with source code I've been able to compile it on everything from ARM- or SPARC-based Linux to x86-64-based MacOS...

          It does what it's supposed to do, and is fast and stable. The only patches I have on it are patches to support newer image formats which didn't exist in 1994.

      2. oldcoder

        Re: Translation from MS speak

        I have no problem with that... And I run Linux rather than Windows.

        And I know of a few Windows applications from 5 years ago that won't run on the current Windows.

        Things for MRI scanners, X-ray scanners....

  5. Zola
    Thumb Up

    So, no ARM in Microsoft data centres

    Coming from the software giant that has utterly failed at producing a single worthwhile software product that runs on ARM, it's hardly surprising as Windows Server RT would be even more of a turd than regular RT.

    This news is really just an acceptance from Microsoft that ARM in data centres will be running Linux and Microsoft is unlikely to get a look-in, so why even bother. And nobody will shed a tear.

  6. DainB Bronze badge

    Considering that it'll take years to get an ARM chip capable of running server loads, and much longer for OS and application support, Microsoft is absolutely right - no point wasting money just yet; no one except G and FB really wants or needs ARM servers or has anything to run on them.

    1. Destroy All Monsters Silver badge
      Paris Hilton

      > years to get an ARM chip capable of running server loads

      Why?

    2. Richard 12 Silver badge
      WTF?

      I can buy such an ARM chip right now.

      Actually, I can buy low to mid-range 32bit ARM servers off-the-shelf right now. Top-end are custom of course.

      In fact, I just did and it's in my hands right now. Unfortunately the hard disks didn't arrive on the same shipment so I can't start it up until tomorrow.

      That said, 64bit ARM is relatively new and there aren't many 64bit ARM SoCs yet.

      For IO bound tasks many of the options are already ARM, and a lot of them go faster and use less power than the equivalent x86 - by going massively-parallel on a scale that is uneconomic in x86.

      You can buy and run a 1024-core ARM server much cheaper than a 1024-core x86 cluster.

      Which made me think - as Microsoft seem to like charging per-core, they've effectively ruled themselves out of the market before it even existed...

      1. Matt Bryant Silver badge
        Pirate

        Re: Richard 12 Re: I can buy such an ARM chip right now.

        "....Which made me think - as Microsoft seem to like charging per-core, they've effectively ruled themselves out of the market before it even existed..." I think you'll find that is changing, a key indicator being "free" Hyper-V.

      2. Dave Lawton

        Re: I can buy such an ARM chip right now.

        Would you like to enlighten everybody on what you've bought please ?

        A hyperlink or two would be nice as well.

  7. SVV

    The smell of fear

    'A big challenge ARM has is what workloads are you going to run on it'

    A bigger challenge for MS is probably the answer: "Well, I doubt that it'll be that bloated Windows Server 2013, Active Directory stuff you're peddling - maybe some proven Linux-on-ARM technologies, optimised for specifically needed server tasks, that will perform as well as or better than your high-temperature CISC Intel-based stuff whilst requiring vastly less power for both running the chips and cooling, which will reduce data centre costs significantly".

    Should be food for thought rather than what sounds like an over-the-top and possibly scared blanket dismissal before the technology has had the chance to succeed or fail in the real-world marketplace. Not a good image to be presenting right now for a "top Microsoft bod"; indeed, the arrogance may well be interpreted as a little desperate and only increase interest in the potential benefits of switching to ARM.

  8. CanadianMacFan

    Bloat

    Of course they aren't going to run Windows on ARM in the data centre. They need all the horsepower in the CPU just to run the OS, GUI, anti-virus, etc before even putting on the application.

    1. Ken Hagan Gold badge

      Re: Bloat

      In the data centre, you don't need the GUI and you don't need the AV (coz you wrote every last line of the code that runs on the server), so funnily enough the data centre is probably the best place for Microsoft to run Windows on ARM.

      Public statements like this notwithstanding, I'd be surprised if MS didn't actually have a team making sure that the latest builds of Windows run happily on a selection of ARM-based servers, even if they have to build the servers themselves. Then again, MS has been so badly run these last ten years, maybe nothing would surprise me anymore. (Upvote for the earlier comment suggesting that MS "disruptively" sack their entire top management.)

    2. Anonymous Coward
      Anonymous Coward

      Re: Bloat

      "They need all the horsepower in the CPU just to run the OS, GUI, anti-virus, etc before even putting on the application."

      The very fact that you don't know the latest Windows Server releases have a GUI-less mode, that many administrative tasks happen remotely using management utilities, and that you don't usually install AV on servers (except maybe file servers users have write access to, or mail servers, and then just to check mail) says a lot about your knowledge of how a datacenter is set up and run.

      1. Anonymous Coward
        Anonymous Coward

        Re: Bloat

        "the latest Windows Server releases has a GUI-less mode, many administrative tasks happens remotely using applications utilities, and you don't usually install AV on servers (but maybe some file servers where user have write access to, or mail servers but just to check mails) tells a lot about your knowledge of how a datacenter is setup and run."

        Surely it would be equally possible to take that statement (and the arrival of PowerShell a little while ago) as an acknowledgement that a subset of MS products have finally got rid of two decades of unnecessary GUI-dependence?

        1. Destroy All Monsters Silver badge
          Holmes

          Re: Bloat

          "Windows Server releases has a GUI-less mode"

          Leaving admins stranded in front of screens with blinking prompts.

          Seriously, does anyone use it? There is Console-over-IP these days, you know.

          1. Anonymous Coward
            Anonymous Coward

            Re: Bloat

            Leaving incompetent admins stranded. Good admins were already working a lot with scripts and command line tools even before. When you have to administer a large number of servers in a complex environment, you have to automate tasks, and that often means scripts and consoles.

            One of my tests when hiring new sysadmins is to have them detect and repair issues after booting into recovery mode with only the console available...
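
            For illustration, a minimal Python sketch of the sort of scripted fleet check I mean - the hostnames and the command are made-up placeholders, not anyone's real inventory:

                #!/usr/bin/env python3
                # Run a disk-usage check across a fleet of servers over SSH.
                # Hostnames and the check command are hypothetical examples.
                import subprocess

                SERVERS = ["web01", "web02", "db01"]        # placeholder inventory
                CHECK = "df -P / | awk 'NR==2 {print $5}'"  # root filesystem usage

                for host in SERVERS:
                    try:
                        out = subprocess.run(
                            ["ssh", host, CHECK],
                            capture_output=True, text=True, timeout=30, check=True,
                        ).stdout.strip()
                        print(f"{host}: root fs at {out}")
                    except subprocess.SubprocessError as err:
                        print(f"{host}: check failed ({err})")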

            1. JEDIDIAH
              Linux

              Re: Bloat

              > Leaving incompetent admins stranded. Good admins were already

              That's great except you're talking about a platform sold on the idea that you don't need competent people managing it.

              1. Anonymous Coward
                Anonymous Coward

                Re: Bloat

                That's what some people like you believe, but it was never sold that way. Windows Server is a fairly complex platform to run, with many advanced features Linux wholly lacks, and it requires competent and knowledgeable admins to set up and run.

                Usually it's much easier for a good Windows admin to learn Linux than vice versa, because there is much less to learn.

        2. Anonymous Coward
          Anonymous Coward

          Re: Bloat

          What's wrong with that? Now we should complain that Microsoft did the right thing?

          However, GUIs are not bad per se. But they may not be the right tool when you need to automate tasks or work on a large set of data. You could do a lot via the command line even before 2008 and PowerShell - if you knew which tools to use and how. The fact that too many Windows users didn't know that is their fault, not Microsoft's. If one doesn't want to learn continuously, IT is not the right job for them.

      2. oldcoder

        Re: Bloat

        No wonder they get broken into regularly.

        All of the Windows servers I've had contact with have had anti-virus programs running on them. If they didn't, they weren't allowed a network connection.

        And Windows has finally caught up with the UNIX of 1990. No need for a GUI on a server... <sarcasm on>big advance.<sarcasm off>

        No government/DoD Windows system is allowed to go without an anti-virus application. Too dangerous. Even so, they get into severe problems (like the drone control systems... infected... even though they were servers - so now they use Linux instead).

        1. Anonymous Coward
          Anonymous Coward

          Re: Bloat

          LOL! You don't install AVs on many kinds of servers because they may kill performance - for example, a DB server. You just don't allow any network connections but the minimum required ones, and viruses can't use any port to propagate. And of course you run security devices in front of them. And you may have a wholly separate network for management. Ah, and you do that for Linux machines as well, because that is the proper way to do it.

          Again, the lack of knowledge about how a modern, secure datacenter is designed and run is very interesting...

          Anyway, Windows implemented a GUI-less server... we're still waiting for a Linux GUI that can match the Windows one...

  9. Herby

    Microsoft change CPUs???

    I kinda doubt it. They thought they could do NT on Alpha, but that is long gone.

    They just don't have a good track record of migrating to a "new-different" ecosystem. It isn't in their blood. Apple has done it a couple of times, from 68k to PowerPC and then to x86, so they understand the tasks necessary to get the job done.

    Yes, ARM has "low power" going for it, and its acceptance is gaining, but for "really big" data center things, it is a bit behind.

    Of course, someone could do a "back to the future" operation and get a 68k chip all zoomed up, and we could end up using that. An interesting concept.

    Reminds me of "metric" in the USA. A very slow and painful process, if it happens at all.

    1. PhilBuk

      Re: Microsoft change CPUs???

      Not a relevant comment. NT on Alpha worked fine - we used to run a massive SAP system on that platform. They did get cold feet and pull the OS variant, which more or less killed the hardware platform as far as we were concerned. NT was quite hardware-agnostic for a while.

      Phil.

      1. Anonymous Coward
        Anonymous Coward

        Re: Microsoft change CPUs???

        We got our fingers burnt with NT on Alpha. At the time it looked a great idea and we bought into it for a big project, as the Alpha CPU was, back then, miles better than the x86 in speed (in particular for single-precision floating point).

        Then MS dropped Alpha support from NT, and Compaq (later HP) took over DEC and canned Alpha in favour of Itanium.

        We also used Solaris but got a bit tired of Sun's will-we/won't-we dance over supporting it fully on x86, and now, as Oracle hostages, we are doing our best to migrate off it and free ourselves of Oracle "support".

        So for now it is Linux & x86 but ARM is an option for a lot of our in-house stuff.

    2. pierce
      Boffin

      Re: Microsoft change CPUs???

      NT was originally developed on MIPS. It has run on MIPS, x86, Alpha and Itanium, and even SPARC, although that port was never released.

      1. Anonymous Coward
        Anonymous Coward

        And Power...

        I actually had the lack of wisdom to buy a PowerPC system running NT... didn't take long before we switched to AIX...

    3. Anonymous Coward
      Anonymous Coward

      Re: Microsoft change CPUs???

      MS has been developing on ARM chips for a long time; Windows Mobile predates both iOS and Android, and it ran on ARM as well. MS had the compiler and the knowledge about ARM - but there's a lot beyond pure technology in making a platform viable for a given task.

      And MS has long experience in server operating systems and their applications, while, for example, Apple has none.

      1. JEDIDIAH
        Devil

        Re: Microsoft change CPUs???

        What's to bet on? You take some of your billions and a few interns and you start porting your product to the other CPU. You don't have to bet the farm on it. Just make it a viable option. At least port your own stuff.

        The whole point of Microsoft's core product is to isolate applications from the details of the underlying hardware. That's what an operating system is for.

        If they weren't the incompetents and PHBs that we all know they are, they would be able to target a different hardware platform at the drop of a hat.

        1. Matt Bryant Silver badge
          FAIL

          Re: JEDIDUH Re: Microsoft change CPUs???

          "....You take some of your billions and a few interns and you start porting your product to the other CPU....." WTF? That would be an incredibly stupid idea as the real coders would have to come back later and clean up the mess made by the inexperienced interns, which makes the whole process MORE expensive and time-consuming in the long-run. When you port anything you want your tob brains on it from the word "go!"

  10. Peter 39

    making a bet

    "When you're a company the size of Microsoft, you don't want to make a chip architecture bet and get it wrong."

    That's true but it would be a much less grievous error than several they've made in the last few years. What's one more going to do? Kick out Steve?? Oh, wait...

  11. Christian Berger

    The problem is the business model

    If you compare x86 and ARM, you will notice a fairly different business model: Intel sold chips, while ARM sells cores to be integrated onto chips.

    On both you can connect arbitrary hardware, if you are a large company at least. You could buy x86 cores on a chip and use some TTL or CMOS logic chips to build the rest of your system around them. On ARM that's different: your "peripheral" hardware is already integrated on-chip. So while there can be diversity, that diversity is controlled by the SoC manufacturers.

    SoC manufacturers compete with each other. One of their biggest worries is becoming a second source, where a customer could simply switch to another manufacturer without any effort. That's why ARM SoCs are very incompatible with each other. For example, there are lots of different serial port designs, sometimes even different ones on a single SoC. These range from standard 16550-compatible ones to primitive ones which are just a shift register. Everything is different just to make it impossible to switch SoCs without major porting.

    Now while it may be acceptable for a mobile phone company to port an OS from one SoC to another, this certainly isn't possible in the data centre. There you want one install image which installs on thousands of different models.

    So what would it take to make ARM succeed in this market? You need a flexible but well-defined interface to the hardware. One solution might be virtualisation, but a far better one would be first to add a way for an operating system to discover its hardware (e.g. a little ROM listing where peripherals are and how they are used), and then to design royalty-free IP blocks to put onto SoCs and promote them. That way we could have an "IBM PC" of the ARM world.
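
    For illustration, here is a minimal Python sketch of that discovery idea. The table format is entirely invented - the point is only that one kernel image could walk a peripheral list instead of being ported per SoC:

        # Hypothetical on-SoC peripheral table: fixed-size records of
        # (name, base address, compatible-driver string). Format invented.
        import struct

        RECORD = struct.Struct("<16sI16s")  # 16-byte name, 32-bit base, 16-byte kind

        rom = b"".join(
            RECORD.pack(name.encode(), base, kind.encode())
            for name, base, kind in [
                ("uart0", 0x10000000, "ns16550"),
                ("eth0",  0x10010000, "gmac"),
                ("sata0", 0x10020000, "ahci"),
            ]
        )

        # At boot the OS walks the table and binds a driver by 'kind'.
        for offset in range(0, len(rom), RECORD.size):
            raw_name, base, raw_kind = RECORD.unpack_from(rom, offset)
            name = raw_name.rstrip(b"\x00").decode()
            kind = raw_kind.rstrip(b"\x00").decode()
            print(f"{name} @ {base:#010x} -> driver '{kind}'")

    (In practice this niche came to be filled by device trees and, on ARM servers, by UEFI/ACPI.)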

    1. Stephen Booth

      Re: The problem is the business model

      Except that in a few years Intel is also going to be a SoC vendor. The unavoidable economics of the semiconductor industry drive towards greater integration in devices. Even things that can't be integrated onto the same wafer are better when integrated into the same package. Recently Intel has been buying interconnect IP like there is no tomorrow, and it would be a good business move for Intel to produce Intel-defined servers in a single package in order to take a higher fraction of the overall product value.

      If I'm right about this, then in the x86 space Intel will pretty much build full servers in single packages; traditional server vendors will only be able to compete on form factors, not technical specs. In this case I expect a lot of them to experiment with their own SoCs based on ARM. However, these would be future 64-bit ARM cores designed as single-package servers, with all the hardware differences hidden in device drivers produced by the server manufacturer; you won't be changing your code any more than you change x86 code when switching from Dell to HP.

      Certain segments of the server market (Apache, MySQL, Java) run on ARM fine. In particular, most Java-based services will port trivially.

      A root-and-branch replacement of x86 hardware is unrealistic. In the short term I see ARM starting to make some inroads in the areas where it already does well, but with a lot of vendors taking ARM very seriously in development, just in case.

      1. Christian Berger

        Re: The problem is the business model

        Well, the point is that every x86 SoC will probably be very much PC-like. So they will boot in a standard way, and you will probably have a PCI bus which gives you a neat list of all the hardware you have.

        So x86 has had a great head start in the server area ever since the IBM PC came around.

        1. Anonymous Coward
          Anonymous Coward

          Re: The problem is the business model

          Christian,

          You're at risk of mixing up instruction set/chip architecture (x86 vs ARM) with IO bus (eg PCI, available on some x86 and some ARM) and hardware abstraction technologies.

          We're agreed that the instruction sets are different. Linux runs on both, Windows Classic currently doesn't. So what (in principle).

          Hopefully we can agree that both ARM and x86 can in principle support IO buses like PCI (where the device enumeration protocols are well defined and well understood). OK many ARM SoCs don't have PCI and the associated systems don't have PCI sockets, but there's no reason they couldn't, IF it was necessary and appropriate (which usually it isn't - what's PCI for when every routine interface is already on the SoC).

          For example, some members of the decade-old IXP4xx ARM SoC family (eg the IXP435) are SoCs with PCI on the chip. Who made them? Intel, would you believe, based on technology inherited from DEC's StrongARM business, and subsequently sold off to Marvell in 2006. Readers might want to have a look at some of Marvell's more recent SoC lines and see what *isn't* available on the SoC (eg you can get USB, PCIe, SATA, LAN, etc, all on the SoC, not to mention direct LVDS drive to LCD screens). Other ARMs are available. Many of them, from many suppliers, suited to many different applications.

          So as you will hopefully now see, it is not *x86* per se that makes this hardware uniformity happen. It is just that the hardware chipset and the firmware (BIOS) traditionally associated with an x86 PC make it look uniform as far as the PC OS is concerned.

          You could in principle have an x86 chip with no PCI(e), for example. It might be of little practical interest, given that no one uses x86 much other than in PC-like systems, but it would be entirely technically feasible. (The handful of x86-based phones may well be examples of PCI-less x86 systems; anyone know?).

          When ARM datacenter vendors want to have a common standard for *bus* enumeration on a chip (ie what bus interfaces etc has the CPU got, where are they, etc), how hard can it be? Once the cpu-to-bus interfaces are identified, the enumeration for devices on each bus is already well understood in Windows, Linux, and presumably many others (QNX? redboot? etc?).
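
          As a hint of how well-trodden this already is, a minimal Python sketch (Linux only, using the standard sysfs layout) that lists every enumerated PCI device - the same few lines work on x86 or ARM, provided the bus is there:

              # List enumerated PCI devices via the standard Linux sysfs layout.
              # Works the same on x86 and ARM systems that expose a PCI(e) bus.
              from pathlib import Path

              for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
                  vendor = (dev / "vendor").read_text().strip()  # eg 0x8086 (Intel)
                  device = (dev / "device").read_text().strip()
                  print(f"{dev.name}: vendor={vendor} device={device}")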

          There's no x86-specific magick that makes this easy on x86 and impossible elsewhere, otherwise the Advanced RISC Computing folks (Alpha, MIPS, PowerPC) wouldn't have got as far as they did till MS decided Windows was for x86 and x86 only. It's just a nice example of the benefits of multi-vendor standardisation. Choice is often good. Monopoly is often not good.

          1. Destroy All Monsters Silver badge

            Re: The problem is the business model

            Hell yeah. License HyperTransport and proceed.

            "Let's go!! Engage, Engage!" (Captain America, Generation Kill)

  12. bazza Silver badge

    Power Consumption

    When you've got massive data centres, power consumption is a major cost. If you've got the money to invest (like Google has) you'd do almost anything to reduce that cost, like develop your own chips.

    Intel ain't going to let Google develop its own x64 chips. ARM will license their design to anyone, and have indeed licensed it to Google. Guess where the future lies there...

    If you're in the business of shipping data from storage to networks (which is pretty much all Google does), you don't need a honking great x64 core to do that. All you need is a tiny core, such as an ARM. And, just like in mobile phones, you'd bolt on co-processors for specific computational loads rather than concentrate on making the core fast enough to do them.

    Microsoft too have an ARM license. If they don't use that to develop and define an ARM server architecture that runs Windows, MS could lose out. Someone else will do an open one that runs Linux instead, and that might not suit Windows at all.

    1. Anonymous Coward
      Anonymous Coward

      Re: Power Consumption

      It's not just the cost of the power, of course. The waste heat needs to be disposed of (which itself has power costs), there must be space for cooling, backup power systems must be bigger.

      Sadly, slower low-power processors don't work well for lowest-common-denominator web-based applications where the server has to run a large interpreted program for every interaction (PHP, gaah).

      Now that the clock-speed escalator has ground to a halt, we software types need to kick the sloppy mindset of "there will always be more resources" and insist on scalable software architectures as a starting point rather than last-resort. (gets off soapbox).
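
      A toy back-of-envelope calculation - every number below is an invented assumption, not a measurement - shows why even a modest per-server saving matters once the cooling overhead is folded in:

          # What a modest per-server power saving is worth at scale.
          # All figures are illustrative assumptions, not measurements.
          servers = 10_000
          watts_saved_each = 50    # assumed saving per server
          pue = 1.5                # power usage effectiveness: cooling etc overhead
          price_per_kwh = 0.10     # assumed tariff, in dollars

          kwh_per_year = servers * watts_saved_each * pue * 24 * 365 / 1000
          print(f"~${kwh_per_year * price_per_kwh:,.0f} saved per year")
          # ~$657,000 per year with these made-up numbers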

      1. Richard 12 Silver badge

        Re: Power Consumption

        Definitely! The clock speed escalator basically stopped five years ago or more - ~3GHz is it.

        Over the last few years the only way to make your software go faster has been to utilise more cores (be they CPU or GPU, in one or many boxes). If you can't, then your software is basically never going to go faster no matter what hardware is thrown at it.

        The only notable difference between my 4-year old Intel desktop and the latest desktop CPU from Intel is that the new one has 2/3 of the power consumption and double the number of virtual cores. The per-core performance is completely unchanged.

        This is the time of the multiprocessor.
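
        The point in miniature - a Python sketch with an invented CPU-bound workload, run serially and then spread across cores:

            # Same CPU-bound work, serial vs parallel across all cores.
            import multiprocessing as mp
            import time

            def burn(n: int) -> int:
                return sum(i * i for i in range(n))  # toy workload

            if __name__ == "__main__":
                jobs = [2_000_000] * 8

                t0 = time.perf_counter()
                serial = [burn(n) for n in jobs]
                t1 = time.perf_counter()

                with mp.Pool() as pool:  # one worker per core by default
                    parallel = pool.map(burn, jobs)
                t2 = time.perf_counter()

                assert serial == parallel
                print(f"serial {t1 - t0:.2f}s vs parallel {t2 - t1:.2f}s")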

  13. Mikel

    Ballmer dissing iPhone and Android comes to mind

    You can already install full Ubuntu or Fedora on a number of ARM platforms, including mobile. There are server ports for most common services, and most of the business logic for these things is in scripting languages anyway, which generally don't care what the architecture is.
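
    To illustrate, this trivial Python sketch runs unchanged on either architecture and can report which one it landed on:

        # Scripting code is architecture-agnostic: the same file runs
        # unchanged on x86 or ARM and can report which it landed on.
        import platform

        print(platform.machine())  # eg 'x86_64' on a PC, 'aarch64' or 'armv7l' on ARM
        print(platform.system())   # eg 'Linux' on either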

    ARM on servers has what it needs to take off. So now it's time for Microsoft to announce they fear it, in their own unique way.

  14. T. F. M. Reader

    Wasn't the same question asked about x86 in enterprise circles in the early '90s?

    1. Anonymous Coward
      Anonymous Coward

      Ain't that the sad and sorry truth. Approaching five decades in this biz and the engineering and marketing permutations look all the same: as if it were Toynbee's cycles on crack.

    2. Ken Hagan Gold badge

      In the *early* 90s, yes, but out-of-order CPUs made ISA irrelevant and Intel had more and better fabs than everyone else, so it was x86 all the way.

      Very little is different now. The jury is still out on whether ARM actually has an ISA advantage over x86 when the chip designers aim at the same target market. Even if it does, that advantage is probably only worth a few months' difference in fab technology, where Intel is always ahead, so it may never amount to anything as far as end-users are concerned.

      The truly disruptive CPU architectures are the ones running on GPUs, and these aren't ready for non-embarrassingly-parallel workloads, so perhaps the MS guy has a point. If/when they are, x86 will rapidly become a boot-time-only phenomenon, like 16-bit real mode. Then again, the thing that finally makes them ready might look a lot like Haswell: a traditional CPU combined with a (fairly) wide general-purpose vector unit.

      1. tojb
        Boffin

        >>The truly disruptive CPU architectures are the ones running on GPUs, and these aren't ready for non-embarrassingly-parallel workloads

        Sorry, but I use CUDA quite a bit; sure, comms can be the bottleneck, but it is by no means limited to trivial or embarrassing parallelisation. Embarrassing parallelism is exactly what I'm hoping for from 64-bit ARM: cores that are big enough to take care of something all by themselves, but cheap enough to have a lot of them. If these run, for instance, molecular dynamics half as fast as an Intel core but cost considerably less than half the price, then I want some.
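
        The arithmetic behind that last sentence, with invented numbers:

            # Throughput per pound with made-up figures: half the speed at
            # well under half the price still wins for parallel workloads.
            x86_speed, x86_price = 1.0, 400.0  # normalised speed, assumed £ per core
            arm_speed, arm_price = 0.5, 150.0  # half the speed, under half the price

            print(f"x86: {x86_speed / x86_price:.4f} work units per £")
            print(f"ARM: {arm_speed / arm_price:.4f} work units per £")
            # ARM comes out about a third ahead per £ with these assumptions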

  15. Tom 7

    Microsoft?

    So really they are hoping you won't use ARM and find out they haven't a clue how to get VBScript running on it.

    1. oldcoder

      Re: Microsoft?

      :)... Reminds me of a support call that occurred at a supercomputer center running several Cray YMP/C90 systems.. some nut wanted to run VB on the Cray so he could get his answers fast enough... Staff had a bit of trouble getting him to realize not all systems belong to Microsoft...

  16. Anonymous Coward
    Anonymous Coward

    Watch Nvidia

    And their 64-bit ARMs.

    MS would say this as they are nowhere to be seen in ARM.

  17. talk_is_cheap

    I read that more as

    We don't have any form of server OS for ARM, so we need to kill the idea of it being a platform until we do, or in the hope that it will just go away.

    ARM needs 64-bit, ECC and some general interface standards to become a central server CPU, but to say that it has no place in a data centre due to other issues is a bit misguided. Rather a lot of the devices and controllers found in a data centre are ARM-based.

    It reminds me of 10-odd years ago, when an MS rep was telling me that there was no real place for Linux in a data centre.

    1. Destroy All Monsters Silver badge

      And OpenBoot, please.

  18. DrXym

    Microsoft's problem

    It's not that Microsoft can't build Windows for ARM, because they can, but that few third parties can be bothered to follow them. That's why Itanium flopped. It's why RT flopped. It's why NT for MIPS, Alpha and PowerPC flopped. It's great to be able to run Windows, not so great that nothing else runs.

    I expect Linux would fare a lot better on ARM because support is so good already.

  19. grs

    Great for non Microsoft workloads

    For the sane folks who don't run Microsoft servers and do "the internets" properly, they're actually pretty perfect. I'm running mail servers, DNS servers, Git hosting, OpenVPN, my management solution, a Django development server and LDAP on ARM machines - and they aren't the rack servers that HP are pushing, just smaller boards. In the nine months since I moved them off x86_64 machines I've saved a crapload of money on power consumption alone, and there has been no performance drop-off at all from the load they carried previously.

    This is all probably just typical FUD from Microsoft because they can't get SQL Server or IIS running on ARM. Remember when they said similar things about Linux?

  20. PaulInSiliValley

    Submit post: Top Microsoft bod: ARM servers right now smell like Intel's (doomed) Itanic

    It would be nice if the SW world were more friendly to options and didn't take such a lazy posture towards different ISAs.

    1. Matt Bryant Silver badge
      Boffin

      Re: PaulInSiliVallay Re: Submit post: Top Microsoft bod: ARM servers right now smell....

      "It would be nice if the SW world would be more friendly to options and not take such a lazy posture to different ISA's" It's not a matter of being lazy. Porting takes up a lot of time, requires a lot of gear, and the kind of people that like to charge a lot for their time. Whilst the FOSS community likes to make out they can run on their own, the truth is major vendors do a lot of the heavy lifting for them (especially true in the case of the Linux ecosystem). So it is not just laziness, it is finding someone with the money to throw at the problem.

  21. Anonymous Coward
    Anonymous Coward

    Top El Reg reader bod: Microsoft's ARM protestations right now smell like Microsoft's (doomed) Windows/RT

  22. P. Lee

    MS are probably right

    ARM isn't the architecture for them at the moment.

    They need more power in a desktop than ARM can provide, and server-side is difficult to license too, due to differences in compute power.

    ARM works best with large vertically integrated apps - Google, FriendFace, Samsung and Apple - where the customisation costs are easily offset by production volumes and where licensing is a non-issue.

    MS' problem is trying to stay out while x86 isn't yet being undercut, but being ready to jump in to prevent the switch to another OS when the hardware performance evens out. Software licensing is a problem. If HP push FLOSS on ARM, where core counts don't matter and HP just provide a large slice-and-dice compute fabric, both MS and Intel may find themselves in the wrong place. Licensing is what keeps people pushing for a little bit more performance, trying to get a better ROI out of their software. The recent rush to appliances and subscriptions may undermine that and make lower-power systems more attractive.

    We may laugh at HP's execution, but real Unix does multicore rather well, and HP might decide vertical integration is the thing, stick HP-UX on ARM and flog it cheap as a loss-leader for HP-UX on bigger hardware. Let's face it, x86 isn't doing much for them at the moment, but they probably wouldn't want to give away HP-UX on x86. Cutting out Intel leaves a larger slice of the pie for themselves and would probably make them smile after Itanic. It wouldn't be an Oracle platform, but it might be nice for Postgres, or Asterisk, Samba and lots of other things too. With FLOSS servers you don't always need the highest performance, because you could spin up another instance and load-balance.

  23. Anonymous Coward
    Anonymous Coward

    Who cares about M$ anyway

    These are ARM-based servers; why would they bother with a crappy M$ OS port, with its associated bloat and lack of security, when they can have Unix/Linux optimized for their own hardware?

    This argument about not being able to optimize the compilers forgets that the people who are making the software are also defining the hardware specification, and that allows for code optimization at levels Intel and M$ have never seen.

    It must be said Intel have done very well with x86: they managed to get a CPU "design" that was inherently flawed to be the fastest on the planet, but they finally hit the barrier implicit in that "design".

    ARM designs, on the other hand, can only get faster and more powerful, and at a fraction of the cost of x86.

    Who wants x86 emulation when you can recompile and optimize your OS and applications to meet your own designs and requirements, without having to include unnecessary bloat or marketing rubbish?

  24. Anonymous Coward
    Anonymous Coward

    I don't know if anyone here has used RT (I've only used RT 8.1) but if you go to the desktop, it adequately demonstrates that MS have already written practically a full version of Windows for ARM. Pretty much everything is there: all the services you'd expect, server/workstation, scheduler, PowerShell, cmd.exe, robocopy, etc. I daresay all that's missing is the server apps like DNS, DHCP, etc. These are the apps where an ARM server can excel; you don't need much processing power or memory.

    I would be very surprised if MS developers don't have a Windows Server running on some hacked together ARM servers, just as a proof of concept, so that they can understand the problems involved should they need to release a full product.

    The question is: which major server manufacturer is going to release a credible, datacentre-quality server? It must have PCIe, dual PSUs, dual GigE, lights-out management, environmental monitoring and a rack-mount case, and preferably come from a good pedigree, such as ProLiant. It'll also need good driver support from the hardware manufacturers.

    1. Richard Plinston

      > I don't know if anyone here has used RT ... I daresay all that's missing is the server apps like DNS, DHCP,

      You seem to have no idea what a server may be used for. It's not just for storing a few files.

      Where are the ARM versions of IIS, SQL Server, SharePoint, Backup manager, Terminal Services and hundreds of other _server_ applications? Does RT even have SMB server capability?

  25. Anonymous Coward
    Anonymous Coward

    Reductio ad absurdum

    It's NOT about the processor but the software platform - as numerous *nix ports to a huge variety of hardware platforms have demonstrated.

    MS should ask themselves about Azure: "It's a new technology, but where is it going to be disruptive? A big challenge it has is what workloads are you going to run on it?"
