Top Microsoft bod: ARM servers right now smell like Intel's (doomed) Itanic

Microsoft is unlikely to use ARM-compatible processors in a meaningful way in its data centers – unless there is a huge change in the software ecosystem around the non-x86 chips, a top Redmond bod told The Reg. Though Microsoft is closely watching developments in the ARM world, it's unlikely that the Windows giant will be one …

COMMENTS

This topic is closed for new posts.


  1. Anonymous Coward
    Anonymous Coward

    ARM is not Itanium; it is a proven architecture. A mature CPU family.

    Itanium was just a crap architecture; as Linus said, they threw away all the good parts of x86.

    1. DainB Bronze badge

      I do remember what crap the Sun Niagara CPUs were in the first years after they were released; half of the time they could not even finish a Solaris install without hanging. So no, ARM is not a proven architecture anywhere outside your mobile, certainly not in servers, and it won't be there for years.

      1. ThomH

        @DainB

        On the contrary, ARM is a proven architecture in a way Itanium definitely wasn't. Itanium bet on very long instruction words (VLIW): each instruction was a compound built from the operations for every individual unit in the CPU.

        Famously the Pentium has separate integer and floating point units. If it had used VLIW then each instruction would have specified both what the integer unit should do and what the floating point unit should do to satisfy that operation.

        So the bet was that compilers are better placed to decide scheduling between multiple units in advance than a traditional superscalar scheduler is as the program runs. The job of organising superscalar dispatches has become quite complicated as CPUs have gained extra execution pipes, so why not do it in advance?

        The idea of VLIW had been popular in academia but had never succeeded in the market. So: the Itanium architecture was unproven in a way that ARM most definitely isn't.

        In the real world the solution to the problem VLIW tackles has become to cap single cores at a certain amount of complexity and just put multiple cores onto each die.
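        The compile-time scheduling bet described above can be sketched as a toy model. This is a hypothetical illustration only: the instruction names, the two-unit machine (mirroring the Pentium's integer/FP split), and the `schedule` function are all made up for the sketch, not any real ISA or toolchain.

```python
# Toy model of VLIW bundling: the compiler, not a hardware scheduler,
# packs one operation per execution unit into each wide instruction word.
# Units and op names are illustrative, not any real ISA.

UNITS = ("int", "fp")  # one slot per unit in every bundle


def schedule(ops):
    """Greedy static scheduler.

    ops is a list of (name, unit, deps) tuples. An op may only be
    placed once all of its dependencies sit in an *earlier* bundle,
    which is exactly the decision Itanium pushed onto the compiler.
    Slots the compiler cannot fill stay empty - a wasted issue slot.
    """
    bundles, done, pending = [], set(), list(ops)
    while pending:
        bundle = {u: None for u in UNITS}
        placed = []
        for name, unit, deps in pending:
            if bundle[unit] is None and all(d in done for d in deps):
                bundle[unit] = name
                placed.append((name, unit, deps))
        if not placed:
            raise ValueError("cyclic dependencies")
        for op in placed:
            pending.remove(op)
        done.update(name for name, _, _ in placed)
        bundles.append(bundle)
    return bundles


if __name__ == "__main__":
    # a = b + c (int); x = y * z (fp); d = a + 1 (int, depends on the add)
    ops = [("add1", "int", []), ("fmul", "fp", []), ("add2", "int", ["add1"])]
    for i, bundle in enumerate(schedule(ops)):
        print(i, bundle)
    # add1 and fmul issue together; add2 must wait, leaving the fp slot empty
```

        A superscalar core makes the same placement decision in hardware as the program runs; Itanium's wager was that doing it once, at compile time, would be cheaper, at the cost of empty slots whenever the compiler could not prove independence.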

        1. DainB Bronze badge

          Re: @DainB

          "The idea of VLIW had been popular in academia but had never succeeded in the market. So: the Itanium architecture was unproven in a way that ARM most definitely isn't."

          You're completely wrong here. The internals of Itanium have nothing to do with the fact that they decided to develop a CPU that did not have any OS or application support and was not compatible with the de facto commodity server standard that x86 was turning into. HP tried to position it in the enterprise-grade (read: expensive) segment and compete with IBM and Sun, which had the ecosystem, OS and applications, and all three failed to recognise that this segment of the market was rapidly shrinking. AMD, being the second mouse, decided to do the same but kept backward compatibility with 32-bit x86, and this is exactly why we now have CPUs that can run both 64- and 32-bit apps seamlessly.

          If you think that ARM can bite off a piece of Intel's server market, I can easily predict that, as an answer, Intel can just package a gazillion Atom cores into one CPU, and the market will have to choose between an equally power-efficient CPU with no software whatsoever and a CPU with all the software you can ever imagine. I even know who will win.

          Also, correct me if I'm wrong, but ARM for now has only one advantage over Intel, and that is power consumption (see above about just-as-efficient Atoms, but let's imagine they do not exist). While it's a critical parameter for mobile, in servers it is one people consider among the last.

          1. Richard Plinston

            Re: @DainB

            > Also correct me if I'm wrong but ARM for now has only one advantage over Intel and this is power consumption

            You are wrong.

            Intel builds chips to suit its own marketing needs. ARM chips can be designed and built by companies to suit _their_ needs. For example chips can be designed to suit a specific workload on servers.

            ARM also has an advantage on price. A few years ago, before significant numbers of smartphones were being sold, ARM said that a billion ARM chips were being made every year and the cheapest were being sold in quantity for 50 cents each. Now that quantity has more than doubled. The Raspberry Pi can be purchased for $25, and that is a complete system.

            1. DainB Bronze badge

              Re: @DainB

              "ARM chips can be designed and built by companies to suit _their_ needs. For example chips can be designed to suit a specific workload on servers."

              You know the difference between the closed architecture of mobile and the open architecture of servers, don't you?

              "ARM also has an advantage on price."

              Over what? Xeon? How about comparing something that can be compared: ARM and Atom. The price difference will be measured in dollars, if anything.

              "The Raspberry Pi can be purchased for $25 and that is a complete system."

              For that money it does not even come with a case and power supply, so what complete system are you talking about?

              1. oldcoder

                Re: @DainB

                Doesn't really need a case...

                It comes with everything in a PC - USB, video, memory, CPU, disk...

                It is designed to be an educational tool and, it turns out, can also be used for laptops, entertainment systems, home security systems ...

                And even small servers.

              2. Vic

                Re: @DainB

                > Price difference will be measured in dollars if it all

                On a 50c part?

                Yes, I fear you might be right...

                Vic.

            2. itzman

              Re: @DainB

              I think this is, in the end, the key USP: that ARM can be integrated with other chipsets on-die, rather than on-board.

              In the case of servers that means integrated with communications to other chips, machines and to other devices like storage.

              The uniqueness of such a solution doesn't percolate up to the application - just as far as device drivers in the OS and the core OS multitasker.

              So 'generic ARM/linux' apps should run on such hardware once the actual device drivers and OS port has been done to that platform.

              And servers tend to run fewer apps, and more heavyweight apps, where the cost of porting/testing is small compared with the overall cashflow of what is being run on them.

          2. Anonymous Coward
            Anonymous Coward

            Re: @DainB

            "they decided develop CPU that did not have any OS or application support and was not compatible with de-facto commodity server standard which x86 was turning into" (and the rest).

            Very poor analysis. Go to the back of the class and re-review the history.

            The only reason that Itanium sold anything worthwhile AT ALL was that it DID have OS support, largely from HP (and, after a very short while, ONLY from HP - why would a sensible Linux or Windows customer not choose x86-64 over IA64?).

            If existing customers of HPUX and VMS and Tandem NSK (all of which ended up being owned by HP) hadn't been forced to buy Itanium simply to preserve their existing software investment on new boxes, nobody would have cared two hoots about Itanium. Most people didn't care anyway. RIP IA64. RIP HP?

            1. Anonymous Coward
              Anonymous Coward

              Re: @DainB

              The poor analysis is yours. Itanium couldn't run anything but applications recompiled for it. Everything else ran in emulation mode. Itanium was available well before x86-64, because Intel wanted 64-bit to be Itanium, not x86. But only a handful of applications were available for Itanium. Sure, if you wanted to run Oracle you could, but not much more on that box. Many custom applications would have had to be ported to Itanium, but it was an investment only a few made.

              Because of this, and because the hardware was expensive, it was appealing only for high-end setups with a dedicated large box, not for those using x86 servers for a plethora of different tasks, and maybe moving older servers to less important tasks. So most just stuck with x86, even though it was 32-bit only (which was often adequate except for large workloads), and when AMD was able to force Intel to adopt its x86-64, well, Itanium was doomed.

              In very many ways Itanium followed the fate of the Alpha chip which was one of its main ancestors.

              1. Anonymous Coward
                Anonymous Coward

                Re: @DainB

                "Itanium followed the fate of the Alpha chip which was one of its main ancestors."

                Alpha initially had a compelling architectural differentiation (primarily, 64bit address space, with others less obvious too).

                Alpha initially had a compelling performance advantage (especially for FP-intensive applications).

                Alpha eventually had multiple sources of chips, multiple designers of chips (Samsung were a design and manufacture licensee) and multiple system builders independent of the chip vendor.

                Alpha had multiple operating systems.

                Alpha was in principle as appropriate for a desktop (or a VME/PICMG card) as it was for a high end server.

                Alpha had a technical future; it was canned for political reasons.

                How many of those did Itanium have? (ARM's got all the relevant ones)

                Alpha had an idiot running the company in charge of chip (and most system) development, an idiot who mistakenly chose to trust Intel ("IA64 will be technically far superior to Alpha") and Microsoft ("yes we'll do NT/Alpha properly") rather than trust his own company's people.

                When Itanium eventually arrived it had no compelling architectural difference, no performance advantage, no second source, no ecosystem of chip and system builders, and no realistic chance of being a desktop chip (too big and expensive). It had initial backing from HP's VLIW people and Intel's VLIW people. That worked well for them, didn't it, once the Itanium cash pile inevitably ran out.

                "Many custom applications should have been ported to Itanium, but it was an investment just a few did."

                Where was the motivation? Many VMS and Tru64 and NSK customers were happy with their existing software and hardware. What did IA64 bring most of them, even if the port was trivial? (can't speak for HPUX).

                IBM still manage to make chips of their own in systems of their own. Not HP though. Even HP's Project Moonshot seems to have gone quiet for now. HP = clueless.

                "when AMD was able to force Intel to adopt its x86-64, well, Itanium was doomed."

                Certainly was. Who could ever have seen that one coming?

                1. Alan Brown Silver badge

                  Re: @DainB

                  Where was the motivation? Many VMS and Tru64 and NSK customers were happy with their existing software and hardware. What did IA64 bring most of them, even if the port was trivial? (can't speak for HPUX).

                  There was a positive disincentive. We decided not to update our Alpha VMS systems to Itanium because it was so fricking expensive. The Alpha boxes are still going, but their disks are now dying, and once they're gone, they'll be gone forever. So much for "VMS until 2036"

                  1. Matt Bryant Silver badge
                    Boffin

                    Re: Alan Brown Re: @DainB

                    "Where was the motivation? Many VMS and Tru64 and NSK customers were happy with their existing software and hardware...." The big incentive for us moving our Alpha-VMS GS1280 Oracle platforms to Itanium-VMS was the ability to use new interconnects like PCI-e and therefore faster SAN and networking. Our production bottlenecks weren't in processing power but in disk and LAN, though the development of the Itaniums meant they also out-performed the Alphas, especially as the later generations upped the core count. I guess from hp's point of view it saved on development costs by having one platform for VMS, hp-ux and NonStop (and RHEL and Windows IA64 versions for a while), and then saved even more by having it in the same blade chassis with the same software tools as their x64 blades.

                    1. Anonymous Coward
                      Anonymous Coward

                      Re: Alan Brown @DainB

                      "The big incentive for us moving off our Alpha-VMS GS1280 ... the ability to use new interconnects like PCI-e and therefore faster SAN and networking."

                      Right. So if VMS had run on AMD64 on Proliant rather than solely Itanium, that would have suited you just fine? Or if Alpha systems had had faster IO (how hard could that be)? But you had no needs that actually architecturally required IA64?

                      "Our production bottlenecks weren't in processing power but in disk and LAN"

                      Right. So if VMS had run on AMD64 on Proliant rather than on Itanium (or even on an Alpha with updated IO), that would have suited you just fine?

                      "though the development of the Itaniums meant they also out-performed the Alphas, especially as the later generations upped the core count."

                      Eventually after spending who knows how much time and money, the IA64s outperformed a set of systems that had had no significant development funding for years. Gee whoda thunk it.

                      If the same amount of money as was eventually wasted on IA64 had been invested either on Alpha (chips and systems) or on porting VMS (and Tru64? and NonStop) to selected AMD64 Proliants, where would HP be today? HP themselves would probably still be in the sh1t, so let's think about where ex DEC and ex Tandem customers would be :)

                      As it happens, HP have had to finally admit that NonStop will end up on AMD64 [1].

                      "saved even more by having it in the same blade chassis with the same software tools as their x64 blades."

                      Saved earlier too. But no, IA64 was going to be the industry standard 64bit architecture.

                      It had to be true; Intel said it. Oh wait. As any long term observer knows, that's Intel, the x86 company.

                      So, they wait till IA64 inevitably collapses. Then they do what should have been done a long time previously, effectively admit that IA64 was nothing more than an extended field test paid for by customers. And move on.

                      HP = clueless.

                      [1] NonStop on x86

                      Here: http://www.theregister.co.uk/2013/11/04/hp_to_port_nonstop_to_x86/

                      HP press release: http://www8.hp.com/us/en/hp-news/press-release.html?id=1519347#.UtK81PtNwSk

                      1. Matt Bryant Silver badge
                        Facepalm

                        Re: AC Re: Alan Brown @DainB

                        "....So if VMS had run on AMD64 on Proliant rather than solely Itanium, that would have suited you just fine?...." Yes, and it would probably have meant cheaper hardware, though the software support and application costs would still remain the same. Scaling x64 socket-wise may have been an issue for hp (or Compaq), whereas Itanium was already 16-socket-ready with Merced and soon 32-socket-capable. I pushed the original AMD64 chips for Linux hard when they came out but, TBH, AMD seem to have lost their way a bit recently and Intel's Xeon currently walks all over them for the majority of uses.

                        "....Or if Alpha systems had had faster IO (how hard could that be)?...." With the declining Alpha market, the cost of Alpha development and product could only increase. That shrinking market meant motherboard development for Alpha to include new memory and interface technologies was simply going to give a smaller return. hp Integrity was not only faster but also cheaper (though nowhere near as cheap as x64-based servers), and the wider market meant it spread the development cost for hp. Simple economies of scale.

                        "....Eventually after spending who knows how much time and money, the IA64s outperformed a set of systems that had had no significant development funding for years....." Not true. Itanium2 absolutely walked all over even EV7 in every test we did. Even from an early stage there were performance cases where hp-ux on Merced was faster than VMS on Alpha. Alpha's early big commercial advantage was in clustering, but Serviceguard (and Oracle RAC) largely removed any advantage long before Alpha was killed. Sorry if you are one of those fanatical Alpha dinosaurs that missed the news, but the hp-ux market was bigger than the VMS and Tru64 markets combined, and since it would have been far easier to port VMS to Itanium than hp-ux to Alpha (even if Alpha hadn't been killed by Compaq before the merger), hp simply looked at the market figures and went with hp-ux, which meant Itanium.

                        ".....HP have had to finally admit that NonStop will end up on AMD64...." There is very little point in hp trying to put either VMS or a future hp-ux up against commercial Linux on x64. I think it is unlikely any of the UNIX vendors (even IBM) will try to push their commercial UNIX variants on x64, not after they have seen Sun choke trying the same. But NonStop still offers a unique capability that commercial Linux cannot match, and does so in a cash-rich niche that will allow the resultant deals to carry a higher margin than standard x64 offerings, so hp is probably happier to let that go up against Linux in direct competition. You do understand what NonStop does, right?

                        "....HP = clueless...." LOL, they seem to be a lot more alive than the old MIPS competitors, or Digital or Sun. Try not to be so bitter. You may have noticed that hp are developing ARM kit too....? Oh, sorry, you were probably too busy wallowing in memories of glories past.

                        /SP&L

                        1. Roo

                          Re: AC Alan Brown @DainB

                          ""....Eventually after spending who knows how much time and money, the IA64s outperformed a set of systems that had had no significant development funding for years....." Not true. Itanium2 absolutely walked all over even EV7 in every test we did."

                          I can believe that, but it also proves the man's point that the EV7 had suffered from a lack of development. EV7 taped out in 2001 (it was 2 years late under HPaq's tender loving care; the core was the same as the 1996-vintage EV6), and HP shipped boxes in 2003 (fabbed at 180nm).

                          McKinley (Itanic 2.0) taped out in 2002 @ 180nm, so I would expect it to stomp all over a 1999 design with less than half the cache.

                          Madison (Itanic 2.1) shipped in 2003 @ 130nm. If Intel couldn't beat a 1996-vintage core fabbed on a bulk process at that point, they may as well have quit the business.

                          I think people forget just how heavily delayed EV6 and its successors were (legal action, corporate takeovers). Given the difficulties and constraints faced by the Alpha engineers, I think it is a miracle that they remained competitive for as long as they did.

                2. JEDIDIAH
                  Linux

                  Re: @DainB

                  The question isn't one of portability. If you build your platform tools on a particular architecture, then porting apps is a relatively trivial affair. This is how different Unix platforms feed off each other. They are all very similar, and a relatively easy port versus something else entirely.

                  The real problem is one of commitment. Microsoft had an Alpha port. Sun had an early x86 port. Neither of those was really supported as well as it should have been, so they kind of languished and deteriorated.

                  That's why Windows ARM tablets are so tricky. There's no software for them. The 30 years of Win/DOS legacy apps aren't there.

                  3rd parties have to be willing to port stuff. Microsoft saying "make it so" won't change anything. Although there isn't any good reason that Microsoft's (or Oracle's) flagship apps couldn't be ported to whatever-on-ARM.

                3. This post has been deleted by its author

                4. Destroy All Monsters Silver badge
                  Headmaster

                  Murder most foul: A game of ... processor architectures

                  AC says: Alpha had an idiot running the company in charge of chip (and most system) development, an idiot who mistakenly chose to trust Intel ("IA64 will be technically far superior to Alpha") and Microsoft ("yes we'll do NT/Alpha properly") rather than trust his own company's people.

                  Let's be precise here. This is a rich and deep history, filled with tragedy:

                  0) 1989: HP investigates VLIW instruction sets, called EPIC

                  1) February 25, 1992: Alpha architecture mentioned for the first time; Alpha is on the front page of CACM, February 1993 (the test system pulled 1kW as I remember);

                  2) Serial Big Mistakes by DEC under Kenneth Olsen in marketing Alpha (and anything else);

                  3) DEC is increasingly going pear-shaped, sheds divisions;

                  4) HP partners with Intel to build IA-64 as it judges that proprietary microprocessors are not the future; goal is to deliver "Merced" by 1998;

                  5) May 18, 1998: After a hard patent battle with Intel, DEC agrees to support the IA-64 architecture and Intel agrees to manufacture DEC's Alpha processors. Sour Alpha engineers leave for AMD and Sun. Intel has acquired StrongARM from DEC, but those engineers also leave;

                  6) The rump of DEC gets acquired by Compaq, which has only a modicum of interest in Alpha. They are into growing their PC market.

                  7) Compaq's PC shit deflates bigtime, leaving unsold inventory rotting at the docks, Eckhard Pfeiffer ousted. CEO Michael Capellas is uninterested in Alpha, stops development of NT on Alpha on August 23, 1999. Samsung and IBM sign Alpha manufacturing deals with Compaq.

                  8) March 2000: Easy-money fuelled dotcom bubble bursts, Greenspan unrepentant, gives a f*ck.

                  9) June 25, 2001: Compaq announces complete shift of their server solutions from Alpha to IA-64 architecture by 2004. The Alpha Microprocessor Division is disbanded and moved to Intel. Samsung and IBM stop producing Alphas. Andrew Orlowski had one or two good ones on Don Capellas' Alphacide back in the day.

                  10) June 2001: "Itanium" (i.e. Merced) released by Intel; where are the compilers?

                  11) September 3, 2001: Hewlett-Packard announces its intention to acquire Compaq (because HP's Carly Fiorina couldn't acquire PricewaterhouseCoopers for $18 billion USD, so this must be a displacement activity). A long battle to convince the HP shareholders that this is actually a good idea begins.

                  12) September 11, 2001: Bush is minding his own business cutting shrubberies in Crawford, Texas, when suddenly a great call for Glorious Presidency is heard across the soon-to-be-called-thus "Homeland". God bless!

                  13) October 21, 2001: API (the workstation manufacturer) throws in the towel on Alpha systems and transfers all rights to support Alpha systems (including warranty service) to Microway.

                  14) May 2002: Merger of HP and Compaq is given the go-ahead. Don Capellas becomes president of the post-merger HP, but soon moves on to the burning crater of MCI WorldCom (USD 11 billion of fraud, a record back then) on November 12, 2002, to lead its acquisition by Verizon.

                  15) In August 2004, the last Alpha processor was announced.

                  THE END!

              2. Anonymous Coward
                Anonymous Coward

                re: Itanium failure

                IIRC Itanium platforms were very expensive, so limited market.

                That's what doomed it to failure IMO.

                1. Anonymous Coward
                  Anonymous Coward

                  Re: re: Itanium failure

                  Cooling and power consumption too, plus crap slow x86 emulation.

            2. Joe Montana

              Re: @DainB

              IA64 had pretty good Linux support, and if your workload was entirely based on open source software then there was no technical reason you couldn't run it on IA64... If you depended on any closed source software then IA64 was typically not an option, as most closed source vendors would not port their stuff to it.

              The problem boiled down to price: all of the IA64 hardware that was available cost more and consumed more power than comparably performing x86 and x86-64. I would have seriously considered IA64 for my workloads had it been price-competitive with x86.

              For ARM this doesn't need to be a problem, if they can make servers which are competitively priced then they should sell just fine.

              1. Anonymous Coward
                Anonymous Coward

                Re: @DainB

                "if your workload was entirely based on open source software then there was no technical reason you couldn't run it on IA64"

                AND your workload and organisation was the kind of workload and organisation where in-house support was considered appropriate. So by that logic it might fit some scientific stuff, but maybe less so with mainstream commercial stuff, Orrible, etc, where the PHBs are more comfortable having someone they can call, someone with an SLA, etc (even if it's a useless SLA).

                "For ARM this doesn't need to be a problem, if they can make servers which are competitively priced then they should sell just fine."

                Absolutely. High volume server users (the likely interested ones in the early days) likely aren't going to be at all fazed by having in-house support for their business-critical software. And as time goes by, we get the same kind of momentum building up around ARM Linux servers as Microsoft used to have with their network of Certified Microsoft Dependent Systems Engineers and such.

                Interesting times.

        2. Gordan

          @ThomH

          "So the bet was that compilers are better placed to decide scheduling between multiple units in advance than a traditional superscalar scheduler is as the program runs. The job of organising superscalar dispatches has become quite complicated as CPUs have gained extra execution pipes so why not do it in advance?"

          The problem is that most compilers are really crap, and even when they aren't, most developers aren't educated enough to take proper advantage of them.

          See here for an example of just how much difference a decent compiler makes:

          http://www.altechnative.net/2010/12/31/choice-of-compilers-part-1-x86/

          A similar problem was faced back in the day of the Pentium 4, which had a reputation for being a very poor performer when, in fact, with a decent compiler it outperformed the Pentium 3 by a considerable margin.

          Or to put it differently: it wasn't a crap processor; it was the software developers who were too incompetent to use it properly.

          1. Lennart Sorensen

            Re: @ThomH

            The P4 was a bad design. It was obviously designed to chase the biggest clock speed number, because that's what Intel marketing wanted. The fact that it was lousy at running existing x86 code that had been optimized following Intel's own recommendations didn't matter to Intel. As long as consumers were buying the machine with the biggest GHz number, Intel was happy with the P4. Only when they ran into leakage and overheating problems, and discovered they wouldn't be able to scale to 10GHz as planned, did they throw the design away and start over from the Pentium M/Pentium 3 design, creating the Core 2 by improving on the older design. Only the new instructions from the P4 were carried over; NetBurst was dead, and deservedly so.

            The Opteron/Athlon 64 destroyed the P4 in performance on typical code at a much lower clock speed and power consumption. Intel has made a number of stupid blunders over the years, but they do eventually admit when things don't work and recover quite well by changing course, and they have the resources to pull it off. x86 will be around from Intel long after Itanium is gone.

      2. Dan 55 Silver badge
        Coffee/keyboard

        @DainB

        Leaving the Itanic distraction aside (when MS can't do something, they bad-mouth it) and going back to "ARM is not a proven architecture anywhere outside your mobile"...

        http://www.theregister.co.uk/2012/06/01/acorn_archimedes_is_25_years_old/

        "it felt like the fastest computer I have ever used, by a considerable margin"

      3. Anonymous Coward
        Anonymous Coward

        Acorn Archimedes?

        If there's any problem with ARM it is the fact that every SOC is different.

        1. Richard Plinston

          > If there's any problem with ARM it is the fact that every SOC is different.

          And when you look at x86 _systems_, which include GPU, RAM controllers, network, USB, etc. controllers, you find that every _system_ is different.

          That is why there are hundreds of device drivers.

        2. Charles Manning

          "If there's any problem with ARM it is the fact that every SOC is different."

          If there is a problem with Intels it is the fact that SOCs don't exist.

          The whole reason all those ARM SOCs do exist is because ARM is easy to design around and is worth designing around.

          The reason Intel SOCs don't exist is that Intel keeps all its designs private and plays "Father knows best".

          Market forces determine the rest...

      4. Voland's right hand Silver badge

        It is "outside mobile"

        Quote: "outside your mobile" - it is actually used more outside mobile. It is nearly everywhere around us. ARM microcontrollers are now so cheap that nobody bothers with the "smaller, more embedded and more efficient" varieties, as they are more expensive. Your hard disk is ARM; your car ECU, if it was made in the last 5 years, is probably ARM too (it used to be PPC). Your fridge controller, your washing machine, your dishwasher controller - you name it. The bloody thing is everywhere. The only place where it is still not dominant is home routers and wifi APs - that is the sole MIPS holdout.

        It is a proven architecture provided that you are happy to move your workload around as _SOURCE_. That is what Microsoft does not like here: you cannot move a binary workload efficiently from ARM to ARM. You either need an optimised runtime per hardware version (as in Android) or you need to recompile it. While the basic instruction set may be the same (or may be not - Razzie being an example of a violation in its FPU aspects), all offloads and all accelerations are very hardware-specific. Just look at the various /proc files and the IRQ list on an ARM device on Linux and weep (I do it regularly when my Debianized Chromebook pisses me off and I need to debug something on it).

        As far as arm 32 vs 64 the difference is not that staggering - it is an evolutionary step - same as amd64 vs i386. Considering that 64 is already going into high volume devices I would not expect that to be an issue with regards to arm acceptance and overall architecture stability anyway.
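A quick sketch of the point above about hardware-specific features. The /proc/cpuinfo fragments below are hypothetical (the SoC revisions are invented, though the flag names are real ARM feature flags): the base instruction set matches, yet the advertised feature sets differ, which is why an optimised binary doesn't move cleanly from one ARM board to another.

```python
# Illustrative /proc/cpuinfo fragments from two hypothetical ARMv7 SoCs.
SOC_A = """\
processor : 0
model name : ARMv7 Processor rev 4 (v7l)
Features : half thumb fastmult vfp edsp neon vfpv3 vfpv4
"""

SOC_B = """\
processor : 0
model name : ARMv7 Processor rev 5 (v7l)
Features : half thumb fastmult vfp edsp vfpv3
"""

def features(cpuinfo):
    """Extract the set of CPU feature flags from a /proc/cpuinfo dump."""
    for line in cpuinfo.splitlines():
        if line.startswith("Features"):
            return set(line.split(":", 1)[1].split())
    return set()

# Flags a NEON-optimised binary would need but board B lacks:
print(features(SOC_A) - features(SOC_B))  # {'neon', 'vfpv4'}
```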

      5. Lennart Sorensen

        The Sun Niagara is a Sparc, not ARM. Sparc itself is fine, the Niagara design, not so much.

        All the ARM chips so far have been perfectly sane. Itanium was not a sane design for a general-purpose CPU: it assumed compilers would become able to do compile-time scheduling of parallel instructions, and that didn't happen. I vaguely recall seeing a paper a few years ago that actually proved it can't be done, so what Intel hoped for is actually impossible, if I recall correctly. And if I recall incorrectly, it is still a very hard problem that has not been solved. So as it stands, the Itanium is a terrible CPU design and rightly deserved to die. It is an enormous shame that it caused MIPS to give up designing high-end chips, made the Alpha go away, and certainly hurt Sparc (PowerPC seems to be doing OK). I don't personally miss PA-RISC.
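The compile-time scheduling bet described above can be sketched in a few lines. This is a toy illustration of the general idea, not Itanium's actual algorithm: the "compiler" packs independent operations into fixed-width bundles entirely in advance, with no runtime scheduler involved.

```python
# Toy static VLIW scheduler: pack ops into bundles of `width` slots.
# An op is ready only once everything it depends on sits in an
# *earlier* bundle (ops within one bundle execute together).
def schedule(ops, deps, width=2):
    """ops: op names in program order; deps: {op: set of prerequisite ops}."""
    done = set()          # ops placed in already-emitted bundles
    remaining = list(ops)
    bundles = []
    while remaining:
        bundle = []
        for op in list(remaining):
            if len(bundle) == width:
                break
            if deps.get(op, set()) <= done:   # all prerequisites finished
                bundle.append(op)
        if not bundle:
            raise ValueError("dependency cycle")
        for op in bundle:
            remaining.remove(op)
        done |= set(bundle)   # visible to the *next* bundle only
        bundles.append(bundle)
    return bundles

# a = load; b = load; c = a+b; d = c*2
print(schedule(["a", "b", "c", "d"], {"c": {"a", "b"}, "d": {"c"}}))
# -> [['a', 'b'], ['c'], ['d']]
```

The hard part in practice is exactly what the comment says: the compiler cannot see runtime events like cache misses, so the statically chosen bundles are often far from optimal.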

    2. justincormack

      Most of these people are talking about ARM64, which is actually a brand-new RISC architecture with little connection to ARM as we know it. It is even a separate Linux port, not part of the ARM tree, although Linus did want them to merge it.

      1. Lennart Sorensen

        ARM64 (well, AArch64) is very much a 64-bit extension to the existing 32-bit ARM design, a lot like AMD extended x86 to 64-bit. All 64-bit ARM chips are perfectly able to run existing 32-bit ARM code, and a 64-bit ARM Linux system will also run 32-bit ARM applications with no changes or recompile needed. This is not a new thing. Sparc went from 32 to 64-bit, PowerPC did it, MIPS did it, x86 did it, PA-RISC did it, and now ARM is doing it. Nothing complicated about extending an architecture from 32 to 64-bit while maintaining backwards compatibility. Only a few architectures were 64-bit from the start (Itanium and Alpha are the ones I can think of).

    3. Anonymous Coward
      Anonymous Coward

      > It's not Itanium, it is a proven architecture. A mature CPU family.

      PowerPC was, too. Apple had to shift to Intel CPUs for its PCs...

      The problem is not whether ARM is a good CPU or not (Itanium was, too); it is whether it can gain enough support, and then market share, in the data center.

      x86 is not the best CPU design around - some of us remember when it was common to say "RISC CPUs are far better than Intel CISC ones! They are the future!", and we're still on x86 - but it gained so much support that it's the most widely used architecture around.

      Can ARM succeed where MIPS, Alpha, PowerPC, Itanium and others failed? It has nothing to do with "superior technology" only.

      1. Hi Wreck
        Thumb Up

        Re: LDS

        Intel machines translate most instructions into a more RISC-like internal architecture. Note that the L1 instruction cache stores pre-decoded instructions. This is a hint. It is truly amazing that a register starved architecture can perform so well. Kudos to the Intel engineers.

        1. Alan Brown Silver badge

          Re: LDS

          "Intel machines translate most instructions into a more RISC-like internal architecture. "

          Which does make you wonder what could be achieved if the internal architecture was exposed directly.

        2. Anonymous Coward
          Anonymous Coward

          Re: LDS

          Yes, and you have additional circuitry to perform that translation. Compilers still have to optimize code taking into account the standard registers and available instructions - and even if the CPU can work with shadow registers to perform some "magic", compilers can't, because they don't know how many are available or how the translation works. And you can optimize for a given processor model only if you are 100% sure your application will run on it alone, and you can recompile it when it is upgraded.

          Sure, Intel has been able to fend off competition improving the CPU power anyway, but it led to big, power hungry, hot processors.

        3. Anonymous Coward
          Anonymous Coward

          Re: LDS

          "It is truly amazing that a register starved architecture can perform so well."

          It is indeed.

          AMD64 and its Intel clone have twice as many registers as legacy x86. A decent x86-64 Linux will use them. Don't know about Windows.

          The trendy new(ish) ARM 64bit architecture also has twice as many registers as previous ARMs (which was already more than legacy x86 had).

          1. Anonymous Coward
            Anonymous Coward

            Re: LDS

            Maybe you don't know that RISC architectures always had lots of registers, more than the sixteen x86-64 has. Even 1990s RISC processors had at least 32 registers.

            Any 64-bit Intel compiler - Windows, Linux, Apple, BSD, etc. - uses all the available *documented* registers; what you can't rely on are undocumented features. Even if the processor has more registers on silicon and uses them for internal optimizations, still no compiler can access them. And a compiler can perform a far deeper analysis of code than a CPU could do while executing it.

            One of the reasons is the instruction format. You need a way to encode into an instruction which register(s) it refers to. Intel x86 uses an instruction encoding where some bits in one of the bytes encoding the instruction identify the registers. Due to compatibility needs, there's not much freedom in adding many registers, even if the hw design allows for it. RISC processors use a different (and simpler, thus Reduced) instruction set and encoding. The drawback is it may need more instructions to execute a given task.

            But today's processors are so complex that the number of general-purpose registers (there are many specialized others) is not the only performance metric.
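The encoding point above is concrete enough to show in code. The field layouts below are the documented ones (the example byte and instruction word are just illustrative values): x86's ModRM byte spends only 3 bits per register, so legacy x86 tops out at 8; the AMD64 REX prefix contributes a 4th bit for 16; an AArch64 data-processing instruction carries 5-bit register fields, giving 32.

```python
def x86_modrm(byte, rex_r=0, rex_b=0):
    """Split an x86 ModRM byte into (mod, reg, rm), widening the
    3-bit register fields with the optional REX.R / REX.B bits."""
    mod = byte >> 6                          # addressing mode, bits 6-7
    reg = (rex_r << 3) | ((byte >> 3) & 0b111)  # register field, bits 3-5
    rm = (rex_b << 3) | (byte & 0b111)       # register/memory field, bits 0-2
    return mod, reg, rm

def a64_regs(insn):
    """Rd and Rn fields of a typical AArch64 data-processing word:
    5 bits each, so 32 addressable registers."""
    rd = insn & 0b11111          # destination, bits 0-4
    rn = (insn >> 5) & 0b11111   # first source, bits 5-9
    return rd, rn

print(x86_modrm(0b11_000_001))           # (3, 0, 1): only 3 bits per register
print(x86_modrm(0b11_000_001, rex_r=1))  # (3, 8, 1): REX.R reaches r8-r15
print(a64_regs(0x8B020020))              # ADD X0, X1, X2 -> (0, 1)
```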

        4. Adrian 4

          Re: LDS

          Makes you wonder how good a job they could do if they weren't carrying legacy that goes back to the 8080. x86 should have died long ago.

        5. Charles Manning

          Re: LDS

          It is truly amazing that this on-the-fly translation works so well.

          It also uses tons of transistors which have to be toggled at great speed. That's one of the reasons Intel sucks so much power.

        6. Roo

          Re: LDS

          "This is a hint. It is truly amazing that a register starved architecture can perform so well. Kudos to the Intel engineers."

          There are some very bright people working over at Intel, but I think you should really be giving the credit for the micro-op approach to AMD's Opteron. The register starvation problem is tackled by register renaming - which you can thank IBM's System/360 for... Interestingly some of the folks working on the Opteron were refugees from the Alpha, I guess they had their revenge on Itanic in the end. Cornering Intel into making the Itanic obsolete with Xeons is quite a trick. ;)

      2. Alan Brown Silver badge

        someone remembers when it was common to say "RISC CPUs are far better then Intel CISC ones! They are the future!",

        x86 were and are amongst the least efficient chips out there, in terms of clocks per instruction and in terms of power consumption.

        However, they were _far_ cheaper than the competition and as such became ubiquitous. In the end, that's what counted more than anything else.

      3. Lennart Sorensen

        Apple changed to x86 because no one was making PowerPC chips that fit their needs. IBM was making high-end server chips which used too much power for a desktop, and Freescale was making embedded chips which were too slow for what the desktop needed. Nothing wrong with PowerPC itself; the server and embedded markets were just vastly more interesting (and vastly larger) than tiny little Apple's measly desktop market. It is the same reason Apple moved from m68k to PowerPC in the first place: m68k wasn't getting any faster.

      4. Lennart Sorensen

        Also, MIPS never tried: SGI bought into the Itanium idea and killed development. MIPS still does well in embedded markets, where almost all wireless routers are MIPS based, although a few are ARM. Alpha (owned by Compaq, owned by HP, then sold to Intel) was killed off, and had failed because Digital had priced it out of the market to protect the VAX market, which eventually was killed off by competition from everyone else instead. PowerPC hasn't failed; it does great in the markets that use it (lots of engine computers in cars are PowerPC, as are lots of other embedded systems, and IBM has rather nice servers). Itanium failed because it was slow and stupid.

    4. Anonymous Coward
      Anonymous Coward

      There are good parts to x86? :-O Having done assembler on Z80, 6502, 68K, DEC, MIPS and x86, x86 was a bit of a nightmare, closely followed by Z80.

      The 68K was an architecture designed for programmers; x86 wasn't. RISC is beautifully simple; x86 isn't...

    5. Dave Lawton
      Holmes

      X86 architecture

      If there are good bits to X86 architecture, where are they hiding please ?

  2. Anonymous Coward
    Anonymous Coward

    Dinosaur MS

    "It's a new technology"

    Whut? Has this dimwit been living under a rock for about the last 25 years?

    "If ARM chips do come to the data center, then it's likely Microsoft will fiddle with and perhaps embrace them,"

    Lock 'em down, restrict and ruin 'em. A la Windows RT and SecureBoot.

    The one thing MS cannot stand is the concept of competition.

    1. pierce
      Paris Hilton

      Re: Dinosaur MS

      The basic ARM instruction set has been around a long time, but there is a lot more to a server than just the instruction set; the total infrastructure for a full-blown ARM server is quite new. Windows needs a whole new HAL to support its memory management, DMA, IO bus enumeration, and bootstrap sequence.
