Intel to finally scatter remaining ashes of Itanium to the wind in 2021: Final call for doomed server CPU line

Intel has announced the official, pinky-swear, cross-my-heart-and-hope-to-die end to its Itanium line, notifying system makers that production of the server processors will end by mid-2021. In a notice [PDF] sent out to vendors earlier this week, Intel said that the Itanium 9700 series would take its final order on January 30 …

  1. Chris King

    Finally !

    "Sink the Itanic" becomes an appropriate comment !

    1. Anonymous Coward
      Anonymous Coward

      Re: Finally !

      Intel and RAS is like tits on a bull. Still, if my production systems have to go onto Intel, at least it's still HP-UX. PA-RISC is so rock solid I still have some systems that haven't moved over to Itanic. Trust me, Itanic will be running in production for many years to come. Linux on x86_64 can piss right off. Still, only HP could go from PA-RISC to the tire fire that is Itanic. Really hashed that up, they did. Hopefully HP-UX at least makes it most of the way to my retirement. If I have to do 24/7 support, I really want to do it on a proper UNIX that never crashes.

  2. Dread Pirate Bob

    To sum up:

    Intel 86's IA-64, which AMD-64 86'd in '04.

  3. Phil Endecott

    To me, in the 90s, it looked like Intel knew x86 was crap and needed replacing, but they couldn’t bring themselves to replace it with a RISC-ish architecture like everyone else was gravitating to. (That would be admitting defeat.) So they chose this VLIW design, which relied on some extrapolation of where compiler technology might go. But that didn’t happen.

    I’m surprised it lasted as far as shipping products, let alone for 20 years of shipping products. There must have been some very profitable locked-in customers somewhere. I don’t think that would happen today.

    1. Dread Pirate Bob

      It’s either one of, or both:

      1. Tax write offs

      2. Costly penalties for breaking a very poorly negotiated 20-year fabrication / 15-year development contract with a very large, and very substantial, key vendor.

    2. Paul Crawford Silver badge

      There was a thing for VLIW-style processors around that time - TI produced some DSPs with fantastic headline speeds for the time, but you would be lucky to get 20% of that in many cases if you could not take full advantage of the approach (schedule instructions to use the 2 * 4 blocks of compute engines in parallel, avoid conditional instructions that would dump the instruction queue, etc). They seem to have faded away as well?

      Sadder is that HP took over DEC and canned their Alpha processor line as clearly the Itanium was going to win in 64-bit computer space, eh? Just goes to show how poorly HP's judgement has been more or less since their founders were gone.

      1. rcxb Silver badge

        HP took over DEC and canned their Alpha processor line

        Compaq discontinued the Alpha in favor of Itanium before the HP merger, and sold their tech and engineers to Intel. And it wasn't just them, SGI gave up their MIPS CPUs in favor of Itanium as well. IBM, NEC, Fujitsu, Sun, Dell, and more were all making Itanium systems, even if they didn't bet big on it, as others did.

    3. Dave K

      Itanium was a bit like Brexit. Sold as being a perfect solution before reality emerges and bites it in the ass (whether you support Brexit or not, you can't deny that the original fantasy claims regarding it have failed to materialise). The idea was that RISC effectively hits a wall at 1 instruction per clock cycle in an ideal world, whereas EPIC/VLIW potentially allows multiple instructions per clock cycle, so it should scale to higher performance without needing additional clock speed. Hence it was sold as being the future of chip design.

      Of course, this was before reality kicked in and people realised that shifting optimisation work to the compiler rarely ever works unless code uses very predictable branches and is carefully optimised. Intel should have known this from their failed i860 project, but they didn't learn the lessons. The result was that Itanium was a performance flop.
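
      (A minimal illustration of that scheduling point, not from the original comment - the C below is just a sketch of the two kinds of code involved. A fixed-stride numeric loop is exactly what a VLIW/EPIC compiler can schedule well in advance; branchy, input-dependent code is what it cannot.)

        /* Sketch only: compile with e.g. "cc -O2 sketch.c" (file name is illustrative). */
        #include <stdio.h>

        /* Regular, predictable work: no data-dependent branches, so a compiler can
           unroll and software-pipeline this and pack the independent loads, multiplies
           and adds into wide instruction words at build time. */
        static void saxpy(float *y, const float *x, float a, int n)
        {
            for (int i = 0; i < n; i++)
                y[i] += a * x[i];
        }

        /* Irregular, input-dependent control flow: which branch runs next depends on
           the bytes being scanned, so there is no single good compile-time schedule.
           A hardware branch predictor adapts to the actual input at run time; a
           compiler that fixed the schedule in advance cannot. */
        static int count_words(const char *s)
        {
            int words = 0, in_word = 0;
            for (; *s; s++) {
                if (*s == ' ' || *s == '\n' || *s == '\t')
                    in_word = 0;
                else if (!in_word) {
                    in_word = 1;
                    words++;
                }
            }
            return words;
        }

        int main(void)
        {
            float x[4] = {1, 2, 3, 4}, y[4] = {0};
            saxpy(y, x, 2.0f, 4);
            printf("y[3] = %g, words = %d\n", y[3], count_words("sink the Itanic"));
            return 0;
        }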

      In all honesty, if Itanium had provided massive performance boosts over x86, it may have stood a chance. But in reality, performance was disappointing, and why bother breaking backwards compatibility and spending lots of effort porting and optimising code if there's no major benefit to doing so?

      1. Anonymous Coward
        Anonymous Coward

        "Itanium was a bit like Brexit" - really? I thought Brexit was well-planned, well-implemented and what everyone wants.

      2. Anonymous Coward
        Anonymous Coward

        "Itanium was a bit like Brexit. Sold as being a perfect solution before reality emerges and bites it in the ass"

        I'm going to disagree with this analogy because:

        a) it doesn't fit the reality of Itanium (artificial market segmentation leading to failure)

        b) we have enough Brexit commentary without labelling everything that fails as "like Brexit"...

        For point (a), Itanium was meant as the 64-bit product line to match SPARC and POWER. MIPS was effectively dead and DEC/HP were merging and in the process killing off PA-RISC and Alpha development, throwing their lot in behind Itanium. Intel wanted to keep separate 32-bit and 64-bit product lines because of the premium 64-bit commanded.

        Unfortunately, Merced performance sucked, forcing a lot of DEC/HP customers to look elsewhere for their upgrades, including Linux on x86 for smaller systems.

        Then came Itanium 2 which wasn't terrible at the time, but promised compatibility via emulation that the clock speeds/architecture just couldn't support at acceptable performance levels. And then AMD gave us 64-bit support at x86 prices and the increasing RAM capacities meant that the writing was on the wall for any of the competing midrange 64-bit processors. By the time Microsoft were pushing 64-bit with Windows 7/Server 2008, legacy systems were the only thing keeping Itanium going.

        If you want a political comparison to Itanium, I'd go with the Scottish independence referendum where Scotland rejected independence in order to remain in the EU (well...amongst other things) only for Brexit to happen.

        1. Dave K

          You're right about the initial segmentation, but Intel's plan was eventually to replace x86 with Itanium. The chip was intended to be Intel's only 64bit architecture going forwards (Intel had no plans to implement 64bit x86), and was boasted as having x86 compatibility built in as well. It sounded perfect - compatibility with existing applications, and a true break from x86 for the 64bit migration. Initially it'd be for servers and high-end workstations, but the plan was for Itanium to continue taking market share away from x86, which would be limited to 32bit.

          The three things that screwed that plan were Itanium's woeful x86 performance (and heavy delays), dodgy performance in general (at least to start with), but most of all AMD adding 64bit capability to x86, proving that you could have full speed x86 and x64 capabilities in one CPU. This essentially destroyed Intel's plan to ditch x86 by keeping it to 32bit only, and forced them to adopt AMD's 64bit extensions. This essentially killed Itanium in all but the high-end of the market.

          It's also worth noting that MIPS was only essentially dead due to Itanium. MIPS was still very competitive back in 1997/1998 and SGI had the "Beast" project under development for a true successor to the R10000. With Itanium due in 1999 (and full of hype at this point), SGI canned the "Beast" project and decided to migrate. Of course, with Itanium suffering heavy delays until 2001 (2002 if you discount Merced), SGI were stuck with an architecture for which they had already cancelled future development. Hence it's not surprising that MIPS floundered over the next 4 years, and this helped to accelerate SGI's downfall.

          1. Anonymous Coward
            Anonymous Coward

            There’s a lot of Intel marketing in there, most of which could never be delivered on.

            For SGI, they were a niche player in the enterprise market even in the late 90s. They peaked in 1995, and by 1998 graphics cards for x86 systems were competitive with SGI and exceeded SGI's capabilities within two years at a significantly reduced price. While the hardware was nice, it wasn’t perfect. As for MIPS, all the CPU designers were looking into the future and seeing new designs taking billions to get the next generation into the market. With SGI in decline, a successor to the R10000 never looked likely.

            As for Itanium emulation, PC clock speeds killed this - you were never going to be able to emulate a 2GHz Pentium with a 700MHz Merced using similar caches/memory buses, as VLIW was (and still is - see machine learning and GPGPU applications) inefficient when dealing with anything that requires the processor to wait on memory access, since you are waiting 100s of cycles between useful work. Larger, multi-level caches and branch prediction would have helped increase processor utilisation and reduce average memory latency, but they required process shrinks to implement, pushing it 5+ years into the future.

            As a further example of the effect AMD had on the enterprise space, look at the success of Solaris on x86 - it practically saved Sun's hardware line between 2004 and 2008, as shown in this Sun presentation - https://www.slideshare.net/mobile/xKinAnx/sun-fire-x2100-and-x2200-sales-presentation

            If AMD hadn’t released 64-bit extensions (leaving Linux stuck on 32-bit hardware), and Intel x86/MS had followed similar paths with 64-bit adoption (i.e. not included in Server 2003, included in Vista and Server 2003 64-bit in 2005 but with limited application support, then full support with Win7 and Server 2008), Itanium probably would have been as commercially successful as SPARC/POWER were. If Linux hadn’t provided a UNIX alternative, it’s likely that progress towards x86 would have been delayed.

            Pricing for enterprise UNIX hardware in the 90s was around 5x the x86 equivalent for near identical pieces (SCSI hard drives and memory being obvious examples) - it left a very large gap for disruptive changes in the market and meant the UNIX vendors had to adapt or die. Most died...

            1. Dave K

              It's typical of Intel's approach in the late 90s / early 2000s. Develop the technology that you want to develop, then try and force it upon your customers - even if a different approach may be preferred by customers. They tried this with Rambus (forcing it as the exclusive high-end memory option for the P4), and with Itanium (refusing to implement 64bit support for x86 to try and force people over to Itanium).

              On each occasion though, they've been undone by their competitors.

              SiS and VIA provided DDR chipsets for the P4, plus the high cost of Rambus prevented the budget PC brigade jumping onto the P4 bandwagon - until Intel eventually relented and provided a DDR chipset as soon as they were legally allowed to do so by the terms of their dodgy deal with Rambus. Then AMD released x86-64 to great success - plus Intel's belated response of developing their own (incompatible) x86-64 instruction set fell flat when MS refused to produce yet another version of Windows to support it.

              1. Anonymous Coward
                Anonymous Coward

                I'm not 100% sure of this, but I'm not sure the RDRAM fiasco was of Intel's making. Rambus began legal action against SDRAM in 2000 and it looked like they may have been entitled to stop future SDRAM-based products until the case was heard. Intel produced RDRAM chipsets due to a lack of legal choice. I suspect SiS/VIA's timing was down to Rambus's legal position looking less certain, but after Intel had already committed to RDRAM for the initial P4 chipsets to match its P4 release dates.

                WiMAX would be another example of Intel trying to move the market in a particular direction - it was slated as being the only wireless connectivity you would need, with Intel controlling the specs and pushing it as a standard option in laptops. Unfortunately for the telcos/other comms providers that went with WiMAX, it wasn't the all-singing, all-dancing wireless protocol they were seeking...

                Optane may be a more current example - all of the demonstrations I have seen for enterprise-level Optane suggest that it is a very niche product, with many of its performance claims based on deployment scenarios that aren't used in real life, i.e. use Optane to speed up your large databases rather than spending the same money on either RAM or flash, when the cost of Optane >> RAM/flash.

        2. Mage Silver badge
          Boffin

          re: By the time Microsoft were pushing 64-bit with Windows 7

          The first 64-bit Windows NT was, ironically, for Alpha: NT 4.0.

          IA-64 was supported by an XP version on October 25, 2001 (NT 5.1), and updated in 2003 when Server 2003 was released (NT 5.2?).

          Shortest lived Windows NT version? Discontinued January 2005, after Hewlett-Packard, the last distributor of Itanium-based workstations, stopped selling Itanium systems marketed as 'workstations'. Support ended July 2005!

          HP was really the prime mover for IA-64, not so much Intel. Didn't HP even start the design? Shame HP bought Compaq & DEC. Who would have been better?

          The XP professional for AMD x86-64 was released on April 25, 2005. Extended Support for the embedded version of XP ended Jan. 8, 2019. Up till then it was possible to get some security fixes on regular XP workstation by changing the Registry to report it as embedded (which included signs, ATMs and POS, sometimes using a regular PC motherboard / system).

          32-bit XP could access less RAM than NT 4.0 Enterprise, as the extended addressing for more than 4G was disabled. Typically an XP 32-bit app could only access 2.5G, encouraging gamers to update to the less compatible 64-bit XP.

          Was the STUPID idea of using Program Files (x86) for 32-bit on 64-bit because of Alpha-64, Alpha-32 or Itanium? Why didn't the Program Files name change only for the >32-bit versions?

          1. Anonymous Coward
            Anonymous Coward

            Re: re: By the time Microsoft were pushing 64-bit with Windows 7

            OK - true, Windows NT/XP/2003/Vista all had 64-bit options. I'll ignore the non-x86 options as my point was that to get 64-bit, you needed to get a non-x86 CPU at great cost.

            For 64-bit Windows support, yes, it was a thing, but using 64-bit Windows XP (and to a lesser extent Vista) was limited in terms of application support (i.e. Photoshop got 64-bit support in 2008, Excel in Office 2010 etc). HL2 and Far Cry appear to be the only games with 64-bit support before Win7 was released. Exchange supported 64-bit in 2007 releases and SQL server in 2005 releases.

            For the 4GB split on 32-bit XP, it defaulted to 2GB user space and 2GB OS in any 32-bit process. You could use the /3GB switch to change this to 3GB user and 1GB OS, but it needed application support to use the additional user space and there were caveats. In short, if you were running out of memory with a 32-bit app at 2GB you were probably screwed.
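
            (To make the limit concrete - this is my own sketch, assuming a Windows C compiler, not something from the thread - a process can ask the OS where its user-mode address space ends:)

              #include <windows.h>
              #include <stdio.h>

              /* Prints the top of user-mode address space for the current process.
                 On 32-bit XP this is about 0x7FFEFFFF by default; only with the /3GB
                 boot switch AND an EXE linked /LARGEADDRESSAWARE does it rise to
                 about 0xBFFEFFFF. */
              int main(void)
              {
                  SYSTEM_INFO si;
                  GetSystemInfo(&si);
                  printf("Highest user-mode address: %p\n", si.lpMaximumApplicationAddress);
                  return 0;
              }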

            I have no idea who came up with the Program Files/Program Files (x86) split.

        3. /dev/null

          "DEC/HP were merging and in the process killing of PA/RISC and Alpha development"

          Um, no, Compaq (remember them?) decided to can Alpha in favour of Itanium before they merged with HP.

          HP started the VLIW research project intended to produce a follow-on to PA-RISC in 1989. That later became Itanium, after they partnered up with Intel.

      3. Anonymous Coward
        Anonymous Coward

        Yet we do GPU VLIW-like optimization these days with some level of success. I've no love for Itanium, just interesting how long some "new technology" takes before it becomes core engineering.

        1. Paul Crawford Silver badge

          There is a big difference though - GPUs are generally used for massively parallel tasks anyway. The organisation of multiple compute blocks and the impact of instruction queue dumps are far less of an issue on that sort of special accelerator, as compared to general-purpose code used by an OS / word processor / web server / etc.

          1. Anonymous Coward
            Anonymous Coward

            RE:VLIW

            "There is a big difference though - GPUs are generally used for massively parallel tasks anyway."

            The assumption behind VLIW at the time was that CISC/RISC were approaching clock speed limitations at one instruction per clock cycle and an alternative approach was required. History and CPU development took a different path.

      4. Anonymous Coward
        Anonymous Coward

        Oh yeah?

        Itanium was a bit like the EU. Sold as being a perfect solution before reality emerges and bites it in the ass (whether you support the EU or not, you can't deny that the original fantasy claims regarding it have failed to materialise).

      5. Michael Wojcik Silver badge

        superscalar

        RISC effectively hits a wall at 1 instruction per clock cycle in an ideal world

        No, this is just wrong. The original single-chip superscalar CPUs were all RISC: the 88100, the i960, the Am29000. The Pentium, Nx586, etc used superscalar RISC cores. IBM's Cheetah project was specifically to add superscalar capability to the 801, the original RISC architecture; it eventually led to the RIOS / POWER design.

        Superscalar processors can do > 1 instruction per cycle. That's what "superscalar" means.

        So the "wall" you refer to was broken sometime between 1982 and 1984 (Cheetah).

    4. david 12 Silver badge

      Itanium /was/ a RISC-ish architecture like the universities were advocating. Reduced Instruction Set? Check. Compile-time optimisation? Check. In-order execution? Check. Parallelization? Check.

    5. bombastic bob Silver badge
      Devil

      x86 arch isn't all THAT 'crap' and here's why: name a single RISC architecture chip that currently outperforms x86 _AND_ runs on desktop computers! *crickets*

      CISC vs RISC is an old debate. RISC applications tend to take up more memory, as do 64-bit ones [in general]. But in the case of RISC, you're dealing with instruction fetching and pipelining too. Sometimes having one instruction instead of 2 or 3 or 4 just makes things GO FASTER. It also makes it significantly more complex on the silicon, eats more power, etc. So RISC has significant advantages in phones and other battery-operated things.

      And what made AMD64 so brilliant was its obvious 'evolutionary' rather than 'revolutionary' design, including backwards compatibility, almost like the way 16-bit went to 32-bit for x86.

      Why Intel didn't do this first amazes me...

      and now, /me quotes the 'Dead Parrot' sketch in its entirety, substituting 'Itanium' for 'parrot'. Thanks, El Reg, for that perfect analogy in the photo for the article link on the main page.

      "Pining for the fjords" - heh

      1. Michael Wojcik Silver badge

        name a single RISC architecture chip that currently outperforms x86 _AND_ runs on desktop computers!

        x86 is a RISC architecture. It just decodes an archaic CISC instruction set in front.

    6. Jim 59

      As I recall, HP bought Apollo, makers of high-end workstations powered by Motorola CPUs, in 1989 (same chip family as the Amiga?), and continued to make engineering workstations, but with their very own PA-RISC CPUs. Perhaps they should have gone with the Apollo/Motorola tech instead.

      PA-RISC was replaced by the HP/Intel Itanium, which they continued to put into blade systems. But in later years I think HP-UX was all that could run on these. Perhaps fortunately for HP, their chassis systems also support Intel blades.

    7. Michael Strorm Silver badge

      @Phil Endecott; "There must have been some very profitable locked-in customers somewhere."

      From Wikipedia:-

      "During the 2012 Hewlett-Packard Co. v. Oracle Corp. support lawsuit, court documents unsealed by a Santa Clara County Court judge revealed that in 2008, Hewlett-Packard had paid Intel around $440 million to keep producing and updating Itanium microprocessors from 2009 to 2014. In 2010, the two companies signed another $250 million deal, which obliged Intel to continue making Itanium CPUs for HP's machines until 2017. Under the terms of the agreements, HP has to pay for chips it gets from Intel, while Intel launches Tukwila, Poulson, Kittson, and Kittson+ chips in a bid to gradually boost performance of the platform."

  4. luis river

    Itanium dead

    Don't cry for me Argentina or Don't cry for me Hewlett Packard?

  5. mevets

    Many difficult questions lie ahead.

    Will there be a farewell party for iTanic? With Pizza and balloons? Will one large do, or two mediums?

    1. Glen 1

      Re: Many difficult questions lie ahead.

      2 mediums. That way you can have both meat and veggie options

    2. Anonymous Coward
      Anonymous Coward

      Re: Many difficult questions lie ahead.

      One large.

      It will turn out the large only supports a medium level load and you will get the second one free..

      * Note: while the second unit may be free initially, both will incur support/licensing costs in subsequent years. Have you considered our x86-based party solutions?

  6. Michael Jarve

    I remember...

    I remember attending an Intel channel conference back in the early aughts, when AMD’s Hammer was mostly just slides and rumors on Tom’s Hardware Guide. Intel pushed Itanium hard, and even gave out processors and boards to dink around with. However, they lost the plot with the weak, afterthought x86 compatibility offered. Even in the late ‘90s, many were put off by proprietary architectures unless they had some must-have feature. Itanium came to market 10 years too late, and Intel, ironically, had a lot to do with this. They made the Pentium 3/Xeon “good enough” that for your average small, medium, and gigantisaur customers, Itanium was just gilding the hood ornament. Since its official release, it has been an interesting footnote with specialized use cases. It didn’t make Exchange or Apache run faster, nor did it make consumer applications run faster or more efficiently. Unlike MIPS, which could scale down, Itanium was almost, from the beginning, an architecture that only scaled up; unlike ARM or MIPS, it could not find solace in embedded systems. Eventually, SPARC will suffer the same fate (if it hasn’t already - I don’t remember the last time I saw anything regarding SPARC); PowerPC has some life left in it only because for the past 60 years no one has ever been fired for buying IBM, and because of its ubiquitous use in embedded systems (I happen to know that the ECU in my Volvo uses a derivative of the PowerPC 603).

    The failure of Itanium is manifold: it was positioned as a replacement for x86 during the heyday of x86 clones; it did not provide an improvement in performance at the time (circa 2001-2002); it was designed as a high-end competitor to MIPS, SPARC, PowerPC, etc, but was marketed as the successor to x86; the fundamental problem it was envisioned to solve (effective superscalar execution and parallel threading) had already been solved by their own hardware.

    The VLIW theory behind HP/Intel’s EPIC architecture was effectively commandeered by nVidia and DAMMIT and put to better, more efficient use.

    If HP had released their EPIC processor in the early ’90s and partnered with Intel to manufacture it, things might have been different for the novel architecture. At the time, any one of a dozen or more chips could have gone on to rule the world - IBM and Apple thought the way forward was PowerPC; Silicon Graphics, Nintendo and Sony thought the future lay in MIPS. Sun somehow convinced the world SPARC was worthwhile. ARM showed that CPUs could be cheap as chips.

    Despite marketing, despite citing use cases, Intel could not do the same. The final nail came with (pardon the pun) AMD’s Hammer, which showed that x86 still had life, a future, and more importantly compatibility. The last thing a CTO wants to hear when asking “will it work tomorrow?” is “perhaps, mostly, but slower.”

    1. mevets

      Re: I remember...

      I remember Intel going to Sun to get a Solaris port for Itanium. Intel ended up running away in a huff, after Solaris had booted on it, because Sun wasn’t going to give up on SPARC. Weird people.

  7. A Non e-mouse Silver badge

    Design Flaw

    Someone over at Ars pointed out one of the big flaws in the Itanic design: Intel assumed that the compiler would be able to parallelize instructions better than the CPU could at runtime. The problem with that is that the compiler can never tell which tier of memory (cache, main, swap) a memory access will hit.
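
    (A small sketch of why that matters - mine, not from the Ars post. The latency of each load below is invisible at compile time: the next node may be in L1 cache, in main memory, or paged out. An out-of-order CPU can keep other work in flight while the load is outstanding; a statically scheduled EPIC bundle just has to wait.)

      #include <stdio.h>

      struct node { struct node *next; long value; };

      /* Pointer chasing: each iteration's load latency depends entirely on where
         node->next happens to live in the memory hierarchy at run time, which no
         compiler can know when it fixes the instruction schedule. */
      static long sum_list(const struct node *n)
      {
          long total = 0;
          while (n) {
              total += n->value;
              n = n->next;
          }
          return total;
      }

      int main(void)
      {
          struct node c = { NULL, 3 }, b = { &c, 2 }, a = { &b, 1 };
          printf("%ld\n", sum_list(&a));
          return 0;
      }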

    1. Mike 16

      Re: Design Flaw

      So, the hardware variant of the "Sufficiently Smart Compiler"

      https://prog21.dadgum.com/40.html

      Wherein the point is made that the compiler has to not only be sufficiently smart (to take crap-code and re-write it), but also _perfect_, lest a formerly unnoticed inability to deal with a particular corner case result in a trivial change to the source dropping performance in the toilet.

    2. Christian Berger

      It actually was more of a hope back then

      You see back then standard CPUs were easily fast enough to do all the "complicated" things where you have lots of branching and parsing and stuff. Speed was mostly needed at "simple" things like 3D graphics or video. Those things are fairly deterministic and probably could be done very quickly with VLIW architectures.

      What Intel underestimated was that there's lots of legacy code out there which will never be touched and will stay exactly the same binary, so x86 emulation was way more important than they thought. Then that "complicated" code got slower and slower. Today we are at a point where a modern but only mid-range machine can barely keep up with a decent typist, because the editor was implemented via a browser. That's just madness, and it's what people underestimated back then.

      1. MJB7

        Re: It actually was more of a hope back then

        Certainly legacy code was an issue, but there's an awful lot of _new_ code being written with lots of branching, and people wanted it faster than their old code. Itanic depended on the compilers being able to optimize this code effectively - and *at the time* people were saying "this is beyond state-of-the-art for compilers". (I think it still is.)

  8. Hans 1
    Happy

    AMD put IA64 out of its misery.

    1. phuzz Silver badge

      It did, but it's taken fifteen years for the bloody thing to finally keel over.

      1. Anonymous Coward
        Anonymous Coward

        Re:Itaniums death notice

        Itanium died around 2012 when it became clear Intel weren’t developing Itanium any further. There was a die shrink following that but the writing was on the wall for Itanium when the die shrink gave them nothing except additional clock speed, and companies were shutting down development.

        This announcement means anyone who hoped for just one more generation is out of luck.

        The question now is how much longer does SPARC have in Larry’s loving arms now he’s won?

  9. HmmmYes

    Ahh yes.

    Not bothered by it bar having to argue #def macros.

    IA32

    OK, let's use IA64 for 64-bit x86.

    Oh no, we might need to port to Itanium.

    Laughter......

  10. Wade Burchette

    Just imagine

    Just imagine if Intel had succeeded with Itanium. What would our desktop/laptop CPUs be? Consider how little the Intel Core processor improved before Ryzen was released, and then how fast they have improved since then. If Itanium was our future, a future with no competition, then the computers we have today would be very much inferior and more expensive.

    We need competition. In my dream future, AMD overtakes Intel in marketshare which forces Intel to work harder to make a better product. Then Intel re-takes the marketshare lead, which forces AMD to work harder to make a better product. And the cycle repeats to infinity. In my dream future, AMD overtakes NVidia in marketshare, which forces NVidia to work harder to make a better product. Then NVidia re-takes the marketshare lead from AMD, which forces AMD to work harder to make a better product. And that cycle also repeats to infinity.

    1. Anonymous Coward
      Anonymous Coward

      Re: Just imagine

      I would suggest there would be little difference if Itanium had succeeded. The impact of Itanium succeeding would likely have been slower x86 server development, most of which is not available to desktop users anyway. If anything, Itanium may have had a chance if development hadn't stopped 10+ years ago and it had got some of the features found in Core 2s/Xeons. Intel effectively stopped development at 32nm and only got a die shrink to 22nm in 2015, when Xeons should have hit 10nm...

      Itanium's failure was around execution in the first generation (Merced), combined with AMD's 64-bit support providing a platform for enterprise Linux options to support enterprise applications. The combination of these and the success with Core 2 meant that Intel was already scaling back development of Itanium by the third generation, as the 64-bit enterprise market was moving to x86.

      For competition, neither Intel nor nVidia have shown much interest in it, so it is largely a dream. And I would have loved AMD to provide a genuine alternative generation after generation rather than being consistently inconsistent. Maybe Epyc 2 will be the turning point?

    2. Karlis 1

      Re: Just imagine

      I think you have gotten your causes and effects wrong. Intel was doing perfectly fine in improving both the desktop and mobile CPU lines well before Ryzen. Their focus was on power consumption, not raw performance, and I'm actually happy with that.

      Since Ryzen "shook things up" ... I struggle to name a single meaningful impact on Intels' product line. Yes, they reacted with i9s. Which are pretty pointless for most purposes. They were doing just fine in iterating upwards from core2duo line while competition from AMD was nonexistent.

      I appreciate your idealistic view that equal competitors will lead to the best competition, but I'd like to counter that with an observation that one needs to have pretty deep pockets to go for significant generation leaps. Three equally fcked competitors racing to the bottom won't give you that. Itanium was a flop, but one that Intel could afford. VIA couldn't, neither could Cyrix. Let's not mention Transmeta.

      Semi-monopoly that delivers the goods is fine.

    3. katrinab Silver badge

      Re: Just imagine

      I think ARM could actually be a bigger threat to Intel. In terms of units shipped, they are already way ahead, but a Snapdragon is now competitive with Intel on software compiled natively for the platform, and Apple’s chips are a bit faster. We could see Apple switch to ARM for MacBook Airs, then roll it out to the rest of the range. Windows computers could follow; there’s already a few of them available.

      Also, if you are designing an ARM chip that doesn’t need to fit in a phone, it could be a lot faster.

    4. Mage Silver badge
      Coat

      Re: Just imagine

      Or if Windows 95 had not existed or been a failure.

      Pentium Pro ran NT brilliantly. It ran Win9x badly because the design assumed no run-time switching from 32-bit to 16-bit. NT used WoW and NTVDM to run 16-bit Windows on a 32-bit x86 CPU, like NT did with Alpha. Too much of Win9x & MS Office and apps were 16-bit. Win9x Killed Pentium Pro. The expensive RAM needed didn't help either.

      1. Roo
        Windows

        Re: Just imagine

        "Win9x Killed Pentium Pro" - hardly, I found PPros to be competitive with the 233MHz P5s under '95...

        Price killed the PPros: they were MCMs consisting of a core chip and a (big) cache chip which ran at full core clock frequency. They cost a lot more to make and a little bit more to buy. For the applications I was working on the extra price was well worth it, PPros flew on the image processing code I worked on - and it was very easy to optimize for them in comparison to their peers. :)

  11. Anonymous Coward
    Anonymous Coward

    Business with Intel during the Itanium Era

    Sooooo, ya wanna do business with us? You'd better port all your software over to Itanium then hadn't ya. No pressure.

  12. Snowy Silver badge

    That is quite some lead time

    [quote]In a notice [PDF] sent out to vendors earlier this week, Intel said that the Itanium 9700 series would take its final order on January 30 of 2020 with the final shipment date to take place on July 29, 2021.[/quote]

    An 18-month wait for delivery of a processor which will by then be over 4 years old. Given this and how it failed, it does make me wonder who would still want to buy them.

    1. Karlis 1

      Re: That is quite some lead time

      Real-world systems that take 3-5 years to plan to power down, and the corresponding maintenance contracts.

      Some of that silicon powers things that matter.

  13. Tridac

    From usenet recently, comp.arch newsgroup, a different and slightly more cynical take on Itanium, or Itanic, as it was known:-

    In article <d80eb54f-8835-41fa-84ae-0393b61e1dac@googlegroups.com>,

    Quadibloc <jsavard@ecn.ab.ca> wrote:

    >Even if the performance problems of the Itanium architecture could be fixed, so

    >as to make it something almost rivaling the Mill, Intel right now is rather too

    >busy looking over its shoulder at AMD to worry about that.

    Fix IA-64? There's nothing to fix.

    IA-64 was a wild success that achieved its top 2 goals before Merced

    hit the market:

    1) It got HP access to Intel's fabs, making PA-RISC CPUs much more

    competitive for 2 years.

    2) It got several mid-level managers in HP Servers promoted to outside

    positions.

    IA-64 started in HP Labs as PA-WideWord (called PA-WW). This was basically

    the final IA-64, with the added fun benefit of fixed data cache latency

    with no interlocks. (Don't laugh).

    Once interlocks were added, the result was pretty much IA-64: rotating

    registers, the register stack, speculation with Not-A-Thing bits, predication,

    fixed bundles, etc. All the details weren't finalized, yet.

    And PA-WW wasn't going anywhere inside HP.

    So, midlevel managers at HP knew about PA-WW, and with evil genius

    they sold this to Intel as IA64: a solution to Intel's 64-bit problem, and

    as a solution for HP to have access to Intel's fabs to make PA-RISC

    CPUs run faster. HP had success moving to new architectures with emulation support, so they knew they could move PA-RISC and x86 to IA-64, with a penalty of course, but it would work. And they had the detailed HP Labs data showing how fast PA-WW was going to be.

    Once Intel bit, IA-64 became a train that could not be stopped inside HP.

    And Intel's internal politics worked similarly: this was a way for a

    down-and-out design group in Intel to show up the x86 guys.

    Technically, inside HP, IA-64 was viewed as just the next thing to do:

    Not much better, but not worse. And it had the "potential" to be much better.

    And some folks liked the idea of working on something other people would

    use (HP servers made good money, but were not popular in universities).

    So there was no strong pushback. And there definitely were interesting

    technical challenges that sucked folks in: VLIW, speculation, etc.

    And IA-64 had the mantra "the compiler can handle this", which lots of

    people suspected was not true, but which is hard to prove. IA-64 is the

    proof the world needed that in-order was dead (performancewise).

    And, within 3 years (and before Merced shipped), all the mid-level HP managers

    involved had been promoted to positions outside HP. It's a skill to

    get out before your chickens come home to roost. And PA-RISC CPUs

    hit new MHz targets, doubling in speed in 2 years, on Intel fabs.

    So IA-64 was clearly successful.

    Oh, you mean as a computer to actually buy? Oh, that's different.

    And PA-RISC CPUs got even faster on an IBM SOI fab, doubling in speed in

    another 2 years, making access to the Intel fabs unnecessary.

    IA-64 was foisted onto the world so some managers could be

    big-wigs for a while and get promotions.

    1. Anonymous Coward
      Anonymous Coward

      The double spacing and line breaks got me reading that as a poem. It works fairly well. For extra kudos, do a rhyming version.

      Itanic does rhyme with manic, panic, Satanic and Hispanic. Three out of four being appropriate isn't bad.

  14. JRStern

    Those lazy, hazy, crazy days of Itanium

    Before everyone realized that it's all about cache and memory bandwidth so the core architecture barely matters. All those zippy CPU architectures reaching back to when Burroughs was the most interesting computer company in the world, coming to naught against the crufty x86.

    Thanks for these other comments btw, I never realized Itanium was Intel's excuse not to push on 64-bit x86, always wondered how AMD had tripped across that victory.

  15. Maelstorm Bronze badge
    FAIL

    Do you know the real reason Itanium failed?

    No? Then I will tell you: Cost.

    To move to a completely new platform you not only need to replace the hardware, but you also have to replace the software. Software is key. Enterprise software costs a fortune, and you then have to chuck it to move to a new platform. Many businesses at the time did not see a business need to move from 32-bit to 64-bit, especially once you include the added cost of new software. So many businesses stayed put. Then AMD came out with the 64-bit (x86-compatible) Opteron chip. So businesses that needed the 64-bit platform moved to that and ditched Intel.

    Now why would a business want to spend more money to upgrade both hardware and software when they can just upgrade the hardware and the OS for much cheaper? Yeah, I can't think of a reason either.

    1. Anonymous Coward
      Anonymous Coward

      Re: Do you know the real reason Itanium failed?

      This is correct up to a point.

      A lot of software on Digital/HP platforms continued to only run on those platforms or Itanium up until at least 2012 (hopefully they have an x86 migration path now...). Part of this was driven by the need for large memory support for database servers, but a lot of vendors didn't release x86 versions of their applications either. Whether this was due to technical requirements (i.e. the HA features offered by POWER/SPARC/Itanium), hardware vendor incentives or internal reasons is unclear.

      In my experience, what Itanium changed was the customers' mindset. x86 went from being a toy before around 2005 to a platform that enterprises were looking at as a serious alternative to proprietary UNIX platforms. As development and smaller workloads were moved to x86, the results demonstrated that not only was x86 capable of handling the performance requirements, but it did so significantly cheaper. Once the customers wanted it, the vendors started taking the requests seriously, but this was a relatively long and drawn-out process.

      1. Anonymous Coward
        Anonymous Coward

        Re: Do you know the real reason Itanium failed?

        "A lot of software on Digital/HP platforms continued to only run on those platforms or Itanium up until at least 2012 (hopefully they have an x86 migration path now...). "

        It is interesting that, in 2009, I was laid off from HP working on a Linux port of one of their cash cow products (Serviceguard). When the recession hit, HP killed off ALL non-cash-cow (Itanium) products that were not profitable in our section, only to restart those projects 2 and 3 years later. They screwed up. They doubled down on Itanium at the exact time that the rest of the world was giving Itanium the finger, and fired a whole pile of Linux developers. You start to see how this lines up with this story? THE RECESSION occurred right in the middle of the Itanium push, making it LESS likely that anybody was ever going to recompile for Itanium, or move to a more expensive, locked-in HP-UX environment. They killed off their distant future OPPORTUNITY to move ahead in order to save their immediate cash cows, and those cash cows have declined significantly and their days are now numbered. And let's not forget that MARK HURD oversaw this. He killed the future to make quarterly numbers.

  16. FIA Silver badge

    Did itanium fail?

    My memory may not be what it was, but IIRC in the mid/late 90s x86 in the server space was a bit of a joke. (Okay for running that small business, but you wouldn't use it for serious servers*).

    Then along came Itanium, which in short order killed off pretty much all other server-class CPUs (poor Alpha); now most of the (server) world runs on Intel CPUs.

    Sure, it's not Itanium, but it's not SPARC, or Alpha or PA-RISC either. POWER seems to be taking longer to die, but as others have pointed out it's possibly because it's also done the ARM thing and headed downwards too.

    It failed, but it was enough to scare away the competition, so maybe it won after all?

    * i.e., eye-gougingly pricey

    1. defiler

      Re: Did itanium fail?

      That's pretty-much how I read it over the years too.

      HP consolidated all of the PA-RISC / Alpha / whatever else into their in-house VLIW project, and then Intel climbed aboard. By that point everyone else (in the server/workstation market) was already running scared of HP's leviathan, and lobbing Intel into the mix only made it seem more inevitable.

      MIPS, PowerPC, Sparc all fell by the wayside under the inertia of Itanium. It didn't have to be good. It just had to be there and backed by such big players. ARM got away by running under the radar. x86 got away by means of its own inertia, and AMD shoving their x64 instructions in when Intel said it was "impossible".

      Shock and Awe, I suppose, killed off the others. But then ARM started to get faster and more powerful and started eating x64's lunch. No problem - x64 was also getting faster and more powerful and nibbling into Itanium's lunch. Itanium didn't have anyone left up the tree to start stealing lunches from. It started in a niche, lived in a niche and will die in a niche.

      Two things, though. I understand the Itaniums are incredibly reliable. They're intended for Mainframes and as such are much more dependable than Xeons. I'd be interested to know what makes them better if anyone knows? And secondly, I feel the world is poorer with a homogenised CPU landscape, so the loss of Itanium is a bit of a blow to diversity.

      1. Anonymous Coward
        Anonymous Coward

        Re: Did itanium fail?

        Leading chip fabs were all in-house in the 90s, and heading into 2000 the cost had topped $1bn per fab per generation. Outside of Intel/HP/Sun/IBM, no one could afford that level of investment.

        SGI/DEC were already struggling for money and didn’t have realistic prospects of building their own fabs or getting others to adopt their designs. The dotcom boom let Sun continue, but DEC was bought by Compaq, who didn’t want to build fabs and who subsequently merged with HP. Intel and HP had already committed to Itanium in some form and they went with it. When the dotcom bubble burst, the path to smaller process nodes was unclear, leaving no clear route to memory and CPU improvements, and everyone was hitting frequency scaling limits; Itanium looked like a promising option, albeit a little delayed.

        Then we had a wave of innovative changes, with memory frequency improvements, improvements in fabs offering another 4+ generations of process shrinks, and AMD’s 64-bit extensions changing how x86 was viewed in the enterprise.

        As for ARM eating x86’s lunch, look at the profit margins on x86 at the budget end of the x86 market. It’s in the order of 100%, with top-of-the-line Xeons making ten or more times that. ARM’s top-margin products are in the 33%-50% range, and the volume products are closer to 10%. They both make money, but Intel’s model is significantly more profitable and likely to remain so for at least a few more generations.

  17. Howard Hanek
    Happy

    Ashes?

    Shouldn't that be.........spread the remaining silicon or, in its raw state, sand?

  18. Anonymous Coward
    Anonymous Coward

    Sad when

    It's got to be a sorry situation when Itanium not only gets trounced by AMD and Arm, but even gets whipped (to an extent) by an ***IBM*** product (Power).

  19. Michael Wojcik Silver badge

    I won't miss debugging UB on the Itanic

    Itanium was responsible for one of those really horrible Heisenbug investigations, back in the day.

    Customer reports that once in a while, a server application closes the conversation without returning a response. The log shows the server caught a SIGILL (Illegal Instruction). This only happens on HP-UX on an Itanium box, and only very rarely. But they've caught a few instances of it.

    With significant effort we set up a reasonably close environment and try to reproduce. Can't get anything to happen manually, of course, so I set up an automated test with the debugger attached to the server and let it run. SIGILL in C code is most often caused by vectoring through a function pointer with a bogus value, so I spend my time poring over the source, looking at all the function pointers that might be involved in this code path and their data flows. Can't find anything.

    Finally the debugger pops, with a SIGILL. Aha! Except... the instruction in question is valid. Its operands appear to be valid. The HP-UX debugger's support for low-level debugging is ... not great, and my knowledge of Itanium ISA is pretty much whatever I'm digging out of Google, and - shockingly - reading VLIW disassembly is a pain in the ass. But I'm not seeing the problem. And almost all the time we make it down this exact same code path with no problems.

    I ask on comp.os.hpux to see if anyone has any ideas. No one does. The problem lingers.

    Then, one day, one of our devs who knows HP-UX and Itanium particularly well sends around an email warning about a subtle potential issue with Itanium. The Itanium supports a trap representation for its integer registers - what's known as the NAT (Not A Thing) value. You can initialize a register to NAT, and if you try to use or store the value in that register, the CPU raises a trap.

    Oh. And ho. Let's take a quick look, shall we? Why, yes: when the HP-UX kernel sees that trap, it raises SIGILL for the offending process. It's called "Illegal Instruction", but someone at HP decided it should also be used for "Illegal Value". And didn't, say, update the signal(2) man page to reflect that.

    OK, so where are these pesky NATs coming from? (NB. Not "pesky gnats", which are usually due to a rotting Apple.) Well, the helpful note from the dev points out that this can happen if the caller of a C function believes the function returns a value, but the function itself actually does not. That's because:

    1. There's a dedicated register for returning a value

    2. If the function being called is declared as returning a value, the compiler always generates code to read that register on return, even if the caller doesn't use the return value

    3. However, a function which is actually defined as void return type doesn't set that register before returning

    4. Thus there is a chance that said register will contain a NAT

    Usually it won't, but sometimes it will. Impossible to guess the probability because it depends on what else is running at the time and the phase of the moon and your past misdeeds, etc.

    OK x 2. Now, how did we end up with a return-type mismatch between caller and callee?

    Turns out some of the code in question antedates standard C - it was actually written before 1989. ANSI/ISO-style function definitions and prototype declarations were added later, but for years we still had to support some platforms with pre-standard C implementations. So a bunch of the older source files had the ISO function definitions #ifdef'd, and the headers had the prototypes #ifdef'd.

    And the conditional compilation defaulted to not using them. If a macro ("PROTO" or something like that) was not defined, you got K&R definitions and no prototypes. Yes, this should have been conditional on the standard __STDC__ macro instead, but I wasn't around at the time to language-lawyer it.

    And someone had screwed up some Imake template file, so that -DPROTO was missing from some of the makefiles. Consequently, we had the source module with a called function being built with PROTO and correctly declaring the function as return-type void; and we had a source module that called it without a prototype. Which means defaulting to K&R semantics. Which means implicit int return type.
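
    (For anyone who hasn't met that failure mode, a stripped-down sketch - the file names and the PROTO macro usage here are purely illustrative, not our actual source:)

      /* callee.c - built with -DPROTO, so it gets the modern definition. */
      void do_work(void)              /* genuinely returns nothing */
      {
          /* ... real work ... */
      }

      /* caller.c - built WITHOUT -DPROTO, so no prototype is in scope. Under
         pre-standard/K&R rules the undeclared function defaults to returning int,
         so the compiler emits a read of the integer return register after the
         call. On IA-64 that register can legitimately hold a NAT, and HP-UX
         delivers the resulting trap to the process as SIGILL. */
      int check(void)
      {
          int rc = do_work();          /* caller believes an int comes back */
          return rc;
      }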

    I thought Itanic's NAT was a Good Thing. I like trap representations; they can be very useful. But HP-UX's handling of it was an obscure nightmare, and one that was all too easy to fall into.

  20. Anonymous Coward
    Anonymous Coward

    Itanic Inside

    I worked on Merced for a year and a quarter, 96-97, until I saw the iceberg ahead and jumped ship. It was the worst-managed project I ever saw in all my years. No one inside Intel wanted to transfer in, and we were forbidden from transferring out, which was against Intel policy. Management refused to acknowledge the ever-slipping schedule until forced to. Project goals were irrational: some groups got rewarded for finishing detailed layout and design of pieces of the chip that hadn't been verified yet. One group which was way late got rewarded with a trip to Disneyland when they caught up with the rest of us.

    The slowness of x86 emulation was due to starving the hardware that supported it.

    Oddly, the competitor management was scared of was Alpha.

    It's amazing Itanic hung on for so long. I wonder how much money Intel lost from this disaster.

  21. BlokeInTejas

    If Intel had been sure that Itanium was the architecture of the future - and it could have been: done right, the silicon should have been simpler to design, simpler to verify and simpler to test, so products would come to market sooner and more cheaply, and the silicon would indeed suck less power and be cheaper - then they'd have had the courage of their convictions and launched it into the important market first - PCs and laptops. This would mean convincing some major PC makers up front, and making sure that the new machines could run x86 binaries. Then they could have tweaked one more thing, and persuaded Microsoft to distribute PC binaries in an architecture-independent form, with the final compilation to the actual computer being done at app installation time.

    On the assumption that the architecture really did have commercially-useful benefits in implementation, this would have let AMD continue to make x86s, but lag further and further behind, and set up a fun universe where the upstart ARM could find a place in PCs (purely on merit) and oh, yeah, the server market was now a continuous part - as far as software was concerned - of the extremely high volume PC market. My, they might even have got into cellphones.

    But, there y'go...

  22. Daniel von Asmuth

    Is there life after Intel's death?

    Has the 80x86 range been pulled off life support yet?

  23. chaos64

    The Itanium project was a proper leap into the dark, like the moonshot except it mispredicted the future - almost. Today almost every laptop, desktop or server using the Intel64/AMD64 platform will start up in UEFI BIOS - a direct descendant of the early Itanium pre-boot environment. It is ironic that the above comments likely come from machines running that part of the Itanic which is truly unsinkable.

