HP still NOT porting HP-UX to x86?

The Oracle and Hewlett-Packard lawsuit over the fate of Oracle's software support for Itanium processors, and therefore HP's HP-UX Unix variant, is under way in the Santa Clara County courts. New HP CEO Meg Whitman is making the rounds in the press and making her case to HP customers and partners at the Discover 2012 shindig in …

COMMENTS

This topic is closed for new posts.
  1. asdf
    FAIL

    the HP way

    Wow, it's even worse than it looks from the outside. It seems like these days many founders of companies are rolling over in their graves at what their companies have become (HP, Sony, etc) and some are rolling over even though they are still alive (Best Buy).

    1. asdf
      FAIL

      Re: the HP way

      You expect clarity and accurate roadmaps with the massive turnover at the top HP has had lately and with their latest clueless CEO (I know, I will just be an axeman like Hurd and everyone will think I'm a bold leader)? Haha, at best they might tell you what industries/markets they will be in for the next six months, but even that should be taken with a grain of salt as it could change at any time.

    2. Anonymous Coward
      Anonymous Coward

      Re: the HP way

      > It seems like these days many founders of companies are rolling over in their graves at what their companies have become

      I'm not sure if I agree with your statement in the case of Messrs Packard and Hewlett, but that's only because the company they founded is nowadays called Agilent. At least that's where the soul of the company has gone.

  2. Anonymous Coward
    Anonymous Coward

    This "run apps compiled for RHEL inside of HP-UX for x86" thing...

    ... isn't actually all that ambitious. The tech has been around for years, and is comparatively simple. It uses a "shim layer" to translate syscalls from one kind to another. With it, NetBSD can, for example, run hp-ux 9 binaries (for 9k/300 and /400 series hardware, say) on any netbsd/68k port, which would include, oh, the Amiga. Of course, support for specific hardware and OS features not present in the emulating system will be absent, but programs that refrain from using such run fine.

    FreeBSD uses the same tech to run Linux binaries, and its ports collection uses Fedora libraries for the necessary userland support. The same mechanism is used (and is actually where it started out) to provide backward compatibility, so that you can still run, say, FreeBSD 4.* binaries on FreeBSD 8.latest, should you wish to do so.

    So modulo how hp-ux/x86 would deal with syscalls, it could mostly steal BSD-licensed code to make it happen. Supposing hp-ux/x86 would happen in the first place.
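
    For anyone who has not poked at one of these compat layers: below is a minimal, invented sketch in C of the translation-table idea. It is not the actual NetBSD or FreeBSD compat code (the real thing lives in the kernel and also converts structure layouts, flag values and errno conventions), and the "foreign" syscall numbers are made up purely for illustration.

    /* Toy user-space sketch of a syscall "shim layer": decode a foreign
     * syscall number and perform the equivalent native operation. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Invented "foreign" syscall numbers, just for this example. */
    enum { FOREIGN_SYS_write = 4, FOREIGN_SYS_getpid = 20 };

    static long shim_dispatch(int foreign_no, long a1, long a2, long a3)
    {
        switch (foreign_no) {
        case FOREIGN_SYS_write:
            /* Same semantics on either side, so just forward to the native call. */
            return (long)write((int)a1, (const void *)a2, (size_t)a3);
        case FOREIGN_SYS_getpid:
            return (long)getpid();
        default:
            /* Foreign feature with no native equivalent: fail, don't crash. */
            fprintf(stderr, "shim: unimplemented foreign syscall %d\n", foreign_no);
            return -1;
        }
    }

    int main(void)
    {
        const char msg[] = "hello via the shim\n";
        shim_dispatch(FOREIGN_SYS_write, 1, (long)msg, (long)strlen(msg));
        printf("foreign getpid() -> %ld\n", shim_dispatch(FOREIGN_SYS_getpid, 0, 0, 0));
        return 0;
    }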

    1. asdf

      Re: This "run apps compiled for RHEL inside of HP-UX for x86" thing...

      If I remember correctly, though, FreeBSD running Linux binaries can only use a single core per process. That is what I remember from a few years ago, so it might have changed.

  3. Paul Crawford Silver badge

    Common socket?

    Why has it taken so long (and we're still not there) to make Itanic & Xeon hardware-compatible?

    My own view is that the Itanium was a wasted development, but once it was started it seems crazy that Intel did not make them socket-compatible so they could sell them to makers of x86 boxes as an alternative choice of CPU, rather than having to roll out white-elephant hardware just for the Itanium.

    1. Anonymous Coward
      Anonymous Coward

      Re: Common socket?

      TPM wrote: "Intel and HP are not confirming the common socket"

      How would they confirm something that was only ever slideware? Or did it actually get to the stage of an announcement that nobody actually believed anyway?

      DEC tried the "common socket" with one of the Alphas and AMD K6 (?) sharing a socket pinout at one stage. There's more to the "common socket" game than at first appears; the common socket is only part of the challenge, then comes firmware/software, and don't forget that business empires and stovepipes can be just as important.

      1. J 27
        Holmes

        Re: Common socket?

        DEC wasn't the one who tried the common socket, it was actually AMD.

        You are thinking of the Alpha 21264 (EV6) bus (forgetting the name of the slot on Alpha), and the original Athlon (K7) using Slot A

        K6 used variants of the socket 7 (originally for Pentiums)

        The fact that AMD licensed the EV6 design, which was much better than anything else I'm aware of at the time, is IMO probably one of the bigger reasons why, for a while, AMD was crushing Intel performance-wise. The first design where AMD used their own interface was the K8 (Hammer/x86-64/Opteron/Athlon 64, etc) series.

        Additionally, there weren't any released motherboards which could handle both chips, despite the chips being electrically & physically compatible. I think there was one motherboard which I recall a tech demonstration of, where there was a dual slot, with one of each. That was abandoned, as I think it was more of a 'look what we can do'.

        1. kain preacher

          Re: Common socket?

          AMD developed two Alpha 21264-compatible chipsets, the Irongate, also known as the AMD-751, and its successor, Irongate-2, also known as the AMD-761. These chipsets were developed for their Athlon microprocessors but due to AMD licensing the EV6 bus used in the Alpha from Digital, the Athlon and Alpha 21264 were compatible in terms of bus protocol. The Irongate was used by Samsung in their UP1000 and UP1100 motherboards. The Irongate-2 was used by Samsung in their UP1500 motherboard.

      2. Anonymous Coward
        Anonymous Coward

        Re: Common socket?

        No they didn't.

        AMD took the Alpha point-to-point interface and made a simplified version for the K7 CPUs.

        DEC had naff all to do with it other than letting AMD use some of their IP.

        AC as I should be back working now.

    2. Kebabbert

      Re: Common socket?

      My view on Itanium and x86: Intel wanted to kill off x86. Intel only had 32-bit x86 and committed to Itanium. The x86 was buggy and bloated and old, and Intel wanted to start with a clean slate and also get a grip on future 64-bit CPUs with Itanium. Just like IBM wanted to do with PC clones: IBM wanted to kill them and make everybody use the proprietary PS/2. Unfortunately for IBM, the PS/2 did not survive; the PC clones won.

      Then AMD came and did x64 and revived x86, crossing Intel's plans of getting the high-end 64-bit CPU market for themselves. And x64 won and Itanium died. Let's face it, x86 has a lot of baggage and is old and buggy. If all those transistors were spent on a new CPU architecture, then it would be twice as fast, for half the wattage, and less buggy.

      1. Stoneshop

        Re: Common socket?

        Well yes, and that new architecture that Itanium was just wasn't twice as fast with half the watts compared to the x86 of the time, rather almost the other way round, half the speed with double the watts. And with several well-developed 64-bit CPUs around already, Intel should have done the smart thing and killed Itanium there and then.

        1. Destroy All Monsters Silver badge

          In memoriam

          Ten years already: http://www.osnews.com/story/636

        2. Kebabbert

          Re: Common socket?

          @Stoneshop

          "...Well yes, and that new architecture that Itanium was just wasn't twice as fast with half the watts compared to the x86 of the time, rather almost the other way round, half the speed with double the watts...."

          You are forgetting that Intel has put much, much more research & development into x86. If Intel had put an equal amount of R&D into Itanium and x86, then Itanium would win.

          x86 has too much baggage dragging it down. But there are tremendous resources invested in it today, which is what still makes x86 fast. You can surely put lipstick on a pig and make it fly - but it is still a pig.

      2. Giles Jones Gold badge

        Re: Common socket?

        With Itanium they threw away all the best bits.

        http://www.theinquirer.net/inquirer/news/1008015/linus-torvalds-itanium-threw-x86

  4. emil 1

    At the VERY LEAST, you need to port the Aries emulator to the x86 platform. Aries needs to expand in scope to execute Itanium-specific binaries on x86.

    The OS-wrapper for this is irrelevant, as long as sufficient RAS features are present.

    If you truly want to reassure your customer base that you will not abandon us, then it would also be wise to approach BOTH Intel and AMD for x86 instruction-set modifications to optimize the performance of Aries. At this point, it might also be wise to consider the Alpha. The bigger the tent, the happier your customers will be.

    Should you choose to do this in Linux and the kernel maintainers accept it, you might consider initiating a broader emulation subsystem, perhaps addressing POWER and SPARC. IBM Transitive (Rosetta) is the (stagnant) market leader, but the benefits to you of GPL emulators for the "Boutique" platforms of your competitors are obvious: you are the x86 server market leader; their lunch is yours to eat.

    Here is an example of Aries on Itanium, running a PA-RISC binary from HP-UX 10.20. It is quiet, transparent, and impressive.

    # file /bin/ls
    /bin/ls: ELF-32 executable object file - IA64
    # file /usr/local/bin/gls
    /usr/local/bin/gls: PA-RISC1.1 shared executable dynamically linked dynamically linked
    # ./gls
    /usr/lib/dld.sl: Can't open shared library: /usr/local/lib/libintl.sl.2
    /usr/lib/dld.sl: No such file or directory
    ARIES32: Core file for PA32 application saved to /usr/local/bin/core.gls
    Abort(coredump)
    # cd ../lib
    # scp foo@bar:/usr/local/lib/libintl.sl.2 .
    foo@bar's password:
    libintl.sl.2 100% 48KB 48.2KB/s 00:00
    # cd ../bin
    # ./gls
    ...
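
    For readers who have never looked inside such an emulator, here is a toy, invented sketch in C of the fetch/decode/execute loop something like Aries is built around. It is not Aries code; the three-opcode "guest" machine and its instruction encoding are made up purely to show the shape of the loop (a real emulator adds dynamic translation, register mapping and syscall conversion on top).

    /* Toy interpreter: run a made-up "guest binary" on the native machine. */
    #include <stdio.h>
    #include <stdint.h>

    enum { OP_LOADI, OP_ADD, OP_PRINT, OP_HALT };   /* invented opcodes */

    struct insn { uint8_t op, rd, rs; int32_t imm; };

    int main(void)
    {
        /* "Guest binary": r0 = 40; r1 = 2; r0 += r1; print r0; halt. */
        const struct insn guest[] = {
            { OP_LOADI, 0, 0, 40 },
            { OP_LOADI, 1, 0, 2  },
            { OP_ADD,   0, 1, 0  },
            { OP_PRINT, 0, 0, 0  },
            { OP_HALT,  0, 0, 0  },
        };
        int32_t reg[8] = { 0 };

        for (const struct insn *pc = guest; ; pc++) {
            switch (pc->op) {
            case OP_LOADI: reg[pc->rd] = pc->imm;        break;
            case OP_ADD:   reg[pc->rd] += reg[pc->rs];   break;
            case OP_PRINT: printf("%d\n", reg[pc->rd]);  break;  /* guest "syscall" mapped to native I/O */
            case OP_HALT:  return 0;
            }
        }
    }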

    1. Ramazan
      Stop

      The only way you can modify the x86 instruction set to optimize itanic/pa emulation is to embed a sufficient part of the itanic/pa core/cores into the x86 CPU. Otherwise you are talking about The Java Experience (Knock-knock. Who's there? ... [2 minutes later:] Java).

    2. Dummy00001
      Thumb Down

      Open Source made binary compatibility/transparent CPU emulation obsolete. Most software is available in source code form and can be easily recompiled.

      That of course might not work on HP-UX because, in my experience, this is one of the most retarded UNIX variants out there. Runner-up: AIX. But at least on AIX I can compile GCC and the rest would just work. HP-UX? Tough luck googling for binary packages and then wasting weeks kissing IT arses so that they would finally install them. (And often only to find that something's still missing. Rinse. Repeat.)

      In the end, the lack of HP-UX/x64 might be an overrated problem: people are migrating in droves to Linux/x64. And not without the help of HP Services themselves, I might add.

      1. Smoking Man
        Happy

        Open Source on HP-UX

        Go http://hpux.connect.org.uk/

      2. Matt Bryant Silver badge
        Boffin

        Re: Dummy00001

        The problem is not in porting hp-ux to x64, it's in getting the VARs to do so with their apps. Itanium was a porting platform, it was designed to run either-endian and have oodles of registers so it could run any number of OSs and their apps with minimum fuss. Unfortunately, whilst there is plenty of VAR support for x64 from Intel and AMD, getting the port done is not as easy as it was on Itanium. As Sun found out with Slowaris x86, the problem was that nothing that ran on SPARC would run on x86, so they had to try and convince all their VARs to recompile (and often rewrite) their apps to suit Slowaris x86. That didn't happen, or when it did happen it added so much to the VARs' costs that their new version of the app was uncompetitive with apps from other vendors that already ran on Linux or Windows. The VARs would have to be convinced to spend the money to run two dev streams in parallel, one on Itanium and one on x64.
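
        A concrete (and entirely made-up) illustration of why "recompile" so often turned into "rewrite" when moving code from big-endian SPARC to little-endian x86; nothing below comes from a real VAR codebase, it just shows the class of bug:

        /* A byte-order assumption that survives a SPARC build but breaks on x86. */
        #include <stdio.h>
        #include <stdint.h>
        #include <string.h>

        int main(void)
        {
            /* Four bytes read off the wire or from a data file written by a
               big-endian system; the intended value is 0x00000102 == 258. */
            const unsigned char wire[4] = { 0x00, 0x00, 0x01, 0x02 };

            /* The "it worked on SPARC" version: reinterpret the bytes in place. */
            uint32_t naive;
            memcpy(&naive, wire, sizeof naive);

            /* The portable version: assemble the value explicitly. */
            uint32_t portable = ((uint32_t)wire[0] << 24) | ((uint32_t)wire[1] << 16) |
                                ((uint32_t)wire[2] << 8)  |  (uint32_t)wire[3];

            printf("naive=%u portable=%u\n", naive, portable);
            /* On big-endian hardware both print 258; on x86 the naive read gives
               33619968, which is the sort of silent wrongness that turns a
               "simple recompile" into an audit of the whole codebase. */
            return 0;
        }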

        Which is why hp looked at a Linux app compatibility idea, allowing them to dig into the wide range of Linux apps already available. But then you have to convince customers that they will gain some advantage from running that app on hp-ux on x64 that they won't get on Linux, and after hp has done such a good job of selling Linux on Proliant (number one Linux server vendor for years), that would be rather hard to do without damaging that Linux server business, which is what they actually want to use to attack Snoreacle and IBM. So hp seem to have taken the decision that hp-ux on x64 would just be too expensive and disruptive to their own x64 server bizz. Hp-ux beat Linux on hp's Integrity servers because hp really could do more for VARs with hp-ux on Itanium than RHEL could, but if it goes to x64 then RHEL are already streets ahead.

        The funny bit is that IBM, who have benefited most from Larry's tantrum, also face the same stark choice - port AIX to x64 or let it slowly die.

  5. Brian Miller

    But they laid off the OS developers!

    HP laid off their OS development staff some time ago. How are they going to port HP-UX to anything?

  6. Stoneshop

    Another victim

    Oracle's software support for Itanium processors, and therefore HP's HP-UX Unix variant

    And OpenVMS, and although that doesn't figure in a HP-SUX port to x86 roadmap (or for that matter, in any HP roadmap except as lip service), it's equally affected by Oracle killing Itanium support.

  7. Anonymous Coward
    IT Angle

    As if HP's roadmaps could get any more confusing, then this article appears. No fault of the author though, excellent write-up.

  8. Smoking Man
    FAIL

    Solutions? Strategy?

    Didn't Meg use the term "solutions" (intentionally put in quotes) more than once?

    And BCS keeps on focusing on a single processor, panicking if somebody names an alternative.

    This will once be taught in management schools:

    Q: How to build a $500m business using HP-UX?

    A: Take a business bigger than $4bn and have it managed by Martin Fink.

    1. Matt Bryant Silver badge
      Happy

      Re: Solutions? Strategy?

      "......Q: How to build a $500m business using HP-UX?...."

      Well, surely they would be more interested in the Sun case - $200bn down to $4bn.

  9. Matt Bryant Silver badge
    Facepalm

    Oracle defending Slowaris.

    Larry buys Sun and gets stuck with Slowaris. Hurd jumps ship, runs off to work for Larry, and no doubt mentions that hp's Kinetic plan includes bulldozing Oracle's Slowaris. Larry does a Steve Ballmer and a few chairs later decides to go to war with hp. Hurd's knowledge of Kinetic explains Larry's actions. Now, I wonder if Hurd had a confidentiality clause in his contract?

  10. Anonymous Coward
    Anonymous Coward

    HP's inability to move off Itanic ...

    ... is very similar to Sun's fixation on SPARC.

    Just sayin'.

    1. Anonymous Coward
      Anonymous Coward

      Re: HP's inability to move off Itanic ...

      You mean the Solaris x86 running on many machines doesn't exist?

      Nothing wrong with SPARC - very popular and powerful scalable, stable platform. Can the same be said of Itanium?

      The problem with HP is they don't have the software ability anymore to perform the port.

      All the good guys have gone.

      1. Anonymous Coward
        Anonymous Coward

        Re: HP's inability to move off Itanic ...

        > You mean the Solaris x86 running on many machines doesn't exist?

        It exists, but it has a minimal market share, and Oracle isn't pushing it. Insofar as Solaris is concerned, Oracle is pushing SPARC. Solaris on Intel? Yeah, sure, it's a niche storage OS. Solaris on Intel is where database dumps go.

        Sun shot itself in the foot by announcing in 2003 that they were no longer interested in supporting Solaris on Intel, and they never recovered from that disastrous decision.

        > Nothing wrong with SPARC - very popular and powerful scalable, stable platform.

        Nothing wrong with SPARC other than it being a platform with no growth, no development community other than Oracle, and no future. Where exactly is SPARC so popular? At Oracle?

        Face it, Linux on Intel won.

        1. Mr Wrong

          Re: HP's inability to move off Itanic ...

          Actually Solaris on x86 became quite popular platform for not-so-much-mission-critical oracle installations lately. I have heard and seen many customers going for that during last couple of months. Usually on HP Proliant HW by the way :D People are afraid of choosing non-oracle system platform and linux is still missing too many things (decent clustering for instance) to be taken seriously for enterprise production workloads. Of course this is only temporary in my opinion, when only HP will port clustering /virtualization solutions from HPUX to Linux, there'll be no reason to use Solaris x86 anymore.

          1. Anonymous Coward
            Anonymous Coward

            Re: HP's inability to move off Itanic ...

            I don't know exactly where and when "Solaris on x86 became quite popular platform", especially after Oracle's announcement that they had no interest in the x86 hardware business.

            Please get off the "Linux is still missing too many things" bullshit. Linux doesn't have clustering or virtualization? Are you for real? You've never heard of Lustre or KVM?

            1. Mr Wrong

              Re: HP's inability to move off Itanic ...

              Oracle may not have interest in x86 platform but it's irrelevant, I said that people tend to use solaris on proliant. I haven't met anyone buying Sun x86 servers since they are oracle owned (I don't mean exa-stuff of course). People buy proliant servers and then run Solaris on it for oracle databases.

              "Linux doesn't have clustering or virtualization? Are you for real? You've never heard of Lustre or KVM?"

              I meant Linux as a platform for oracle database. Taking into account all the important things, especially Oracle licensing details, on x86 you can choose basically between Windows, Oracle Linux or x86 Solaris. Windows is usually out of the question, and when comparing the other two, Solaris is a much better offer, especially in terms of clustering and virtualization. Lustre and KVM are nice technologies but have absolutely no importance here. Have a look at how Oracle licenses and supports the database on different virtual platforms; it's clear that you have no clue.

              1. Anonymous Coward
                Anonymous Coward

                Re: HP's inability to move off Itanic ...

                > I meant Linux as a platform for oracle database.

                Oh, you really meant that? Awesome!

                Have you heard of Oracle Enterprise Linux? It's a RedHat clone. Guess who sells it: Oracle. Guess what runs on it: Oracle databases.

                But, don't let me stifle your creative flow. Please, continue.

          2. Matt Bryant Silver badge
            FAIL

            Re: Re: HP's inability to move off Itanic ...

            "Actually Solaris on x86 became quite popular platform for not-so-much-mission-critical oracle installations lately...." Erm... where? Not seen any, and the other sysadmin and architect people I know don't seem to have come across it either.

            ".....inux is still missing too many things (decent clustering for instance)...." Ah, immediate troll exposure! You have obviously never touched clustering on Linux. Go look at RHEL, it's clustering has been excellent for years.

            ".....hen only HP will port clustering /virtualization solutions....." You really are just exposing your lack of knowledge. KVM has been around for years and is an enterprise-level product, let alone good old Xen. The virtualisation in hp-ux is good in width (more ways to virtualise), but I'd have to say Integrity Virtual Machines is no better than KVM. And as for hp's clustering, Serviceguard has been available for Linux for years, it just lags the hp-ux version.

            1. Mr Wrong

              Re: HP's inability to move off Itanic ...

              "Erm... where? Not seen any, and the other sysadmin and architect people I know don't seem to have come across it either."

              There are some parts of Earth outside UK, I think that's the reason.

              "Ah, immediate troll exposure! You have obviously never touched clustering on Linux. Go look at RHEL, it's clustering has been excellent for years."

              I'm working with linux and unix servers as oracle db admin for 15 years. RHEL clustering is very far from being excellent. Its functionality is pretty basic. Majority of RHEL clusters I've seen had HP Serviceguard installed instead, even if this is also far from perfect. I'm afraid you have no clue how clustering in Solaris works and how far and how nice it is integrated with oracle, since Sun years. There's no other solution on a market like this. Ask any oracle dba.

              "You really are just exposing your lack of knowledge. KVM has been around for years and is an enterprise-level product, let alone good old Xen. The virtualisation in hp-ux is good in width (more ways to virtualise), but I'd have to say Integrity Virtual Machines is no better than KVM. And as for hp's clustering, Serviceguard has been available for Linux for years, it just lags the hp-ux version"

              Now go and read the Oracle RDBMS license, and especially the pricing part, in terms of the different hypervisors used. Then we can discuss, 'cause right now it really doesn't make sense as you have no clue.

              1. Matt Bryant Silver badge
                FAIL

                Re: Re: HP's inability to move off Itanic ...

                "....There are some parts of Earth outside UK...." I asked my mates that work in the US end of our company and they haven't seen it either. I'll drop an email to the teams in Australia and China just to keep you happy, but I'm not expecting any revelations.

                "....I'm working with linux and unix servers as oracle db admin for 15 years....." It is very obvious that you have worked with Slowaris for twelve years, and have spent the other three being instructed in why you should have moved to Linux five years before.

                ".....Solaris works and how far and how nice it is integrated with oracle...." Really? So Slowaris clustering was just so good that Oracle bought the RAC technology from Compaq? Please, get back under your bridge before you embarrass yourself further.

        2. Aitor 1

          Re: HP's inability to move off Itanic ...

          We do have IBM Opteron servers running Solaris... and they certainly are very stable. We are running Oracle, SAP, weblogic, tomcat, apache...

    2. Dummy00001

      Re: HP's inability to move off Itanic ...

      HP sells tons of Intel x64 servers running Solaris/x64, Linux and Windows (x64 vs. Itanic). The same goes for Sun (x64 vs. SPARC). Same with IBM (x64 vs. POWER).

      It's just they want to keep the proprietary platform for proprietary long-term solutions, where they can lock customers down into upgrade and support contracts.

      But that is what virtualization has recently started cannibalizing. The problem in the past was that 3-5 years on you could not find spare parts for the x86/x64 server anymore. With virtualization, the hardware a VM sees is in fact software; one installs new physical hardware and simply moves the VMs onto it. Start the VMs, and they are not even aware that anything has changed and go on working as before.

    3. Matt Bryant Silver badge
      Stop

      Re: HP's inability to move off Itanic ...

      The difference is hp has the x64 bizz (and now an ARM server bizz), plus storage, networking, print, etc, etc, all generating revenue, whereas Sun's profit-engine was Slowaris-SPARC and very little else.

    4. P. Lee
      Linux

      Re: HP's inability to move off Itanic ...

      It isn't just HP's fixation. Differentiation is desirable at the customer end for non-technical reasons.

      The first reason is that mission-critical stuff needs more expensive things which are difficult to justify. Pay millions for a Superdome and you probably have more scope to acquire better staff than if you pop Linux on a couple of Acer desktops. That is why, as an IT customer having to deal with internal politics, you want different kit. x86 may be fine if you can add RAS features, but if the host ends up being supported by Wintel desktop support, you are in trouble.

      I'm not sure what the solution is though. Itanic is dead and a couple of extra years' support won't really help much. Perhaps pouring money into server ARM is a solution, but then you have to also get (probably) open source software up to snuff and be able to sell it to Oracle customers. Perhaps this is where the "correctness" of Postgres would be a boon, but migrating customers is a tricky business. A managed service is probably the way to go, but that's all very uncertain business when Oracle can step in and offer to maintain their Oracle installation on Oracle kit with Oracle support.

      1. Anonymous Coward
        Anonymous Coward

        Re: HP's inability to move off Itanic ...

        "A managed service is probably the way to go, but that's all very uncertain business when Oracle can step in and offer to maintain their Oracle installation on Oracle kit with Oracle support."

        It won't. Anyone who has ever used Oracle support can tell you a lot of things about their horrible quality and usual lack of any clue whatsoever. And that's about the database/apps, the things they should know best. I can't even imagine how they could offer a full-stack service; they have no people, no experience, etc. They can maybe offer apps in a SaaS model - but they have been doing that for years and the earth didn't shake because of it.

  11. Anonymous Coward
    Anonymous Coward

    They should never have dumped PA-RISC

    Bloody idiots - and what a waste of time Itanium has turned out to be. Underperforming and overpriced. If they'd stayed with their own CPUs they could have blown it out of the water and with Sun Sparcs now being an Oracle afterthought they could have done really well.

    Just my 2ps worth.

    1. Anonymous Coward
      Anonymous Coward

      Re: They should never have dumped PA-RISC

      Alpha would have been another good chip to keep in the line. HP was just bent on getting out of the high-tech business. There are too many misses, too much uncertain R&D investment, too much change for HP's industrial-minded management team. This all started with Fiorina's drive to basically get HP out of the technology business. Her plan was to partner with the tech companies for critical components (Intel, MS, Oracle) and beat their competitors with volume efficiencies in manufacturing and supply chain instead of innovation.

      1. Destroy All Monsters Silver badge
        Unhappy

        Re: They should never have dumped PA-RISC

        Back when Ashlee Vance was still writing and Andrew Orlowski had other concerns than to whiteknight "Intellectual Property Rights" faggotry, the following was said in

        http://www.theregister.co.uk/2002/05/03/don_capellas_articulates_hpaqs_vision/

        ...Capellas mentioned "procurement" as often as navvies swear, and the chap obviously believes that this is the killer feature of the SRCAM merger.

        "We've got to do this. We can't do microprocessor better than Intel," he added.

        Well, we mused, you can't now, now that you've sold Alpha to Intel.

        http://www.theregister.co.uk/2002/05/14/sircampaq_the_winners_and_losers/

        "Last week Capellas promised that Windows and Linux would "eviscerate" mid-range Unix. Taking no prisoners, the Don has decided to perform the task himself at the first opportunity."

        http://www.theregister.co.uk/2001/11/16/hpaq_execs_pocket_millions/

        As a reward for sacking some of the 15,000 staff who won't be needed in the merged HPaq, HP's top management will pocket $33.1 million in retention bonuses. Compaq's team will receive $22.4 million. The bonuses hinge on the successful completion of the merger.

        1. Matt Bryant Silver badge
          FAIL

          Re: Re: They should never have dumped PA-RISC

          ".....Capellas promised that Windows and Linux would "eviscerate" mid-range Unix...." Yeah, you did notice that big and ongoing decline in UNIX over the last decade or so, right, you just forgot to realise that it was x64 eating it up?

      2. Matt Bryant Silver badge
        FAIL

        Re: Re: They should never have dumped PA-RISC

        "...Fiorina's drive to basically get HP out of the technology business...." Except Fiorina's big project was the Adaptive Enterprise - essential SaaS, cloud, all that kind of stuff - years before anyone in the industry even had a plan. Problem was it came in after the Y2K cash-burn and companies didn't want to rip out their existing kit to replace it all in one go, so hp had to go away and break it into modules. Oh, is that an example of a technology far in advance of the rest of the market, which would expose the silliness of your post? Why, yes it was! Back under your bridge, troll.

        1. Anonymous Coward
          Anonymous Coward

          Re: They should never have dumped PA-RISC

          Case in point with Fiorina divesting technology = Adaptive Enterprise. Instead of HP trying to go the IBM or new Oracle route of designing, building and implementing the solution from silicon through applications, AE was a bunch of "blueprints" which co-opted all kinds of third-party technologies (SAP, Oracle, Microsoft, Intel, and so on) into an HP framework. HP wasn't actually doing any of the heavy lifting, such as designing and fabbing a microprocessor or writing a database; they were just providing a framework for other companies' technologies to be glued together.

          1. Matt Bryant Silver badge
            FAIL

            Re: They should never have dumped PA-RISC

            ".....Instead of HP trying to go the IBM or new Oracle route of designing, building and implementing the solution from silicon through applications...." So you're trying to say it was bad for hp to offer choice rather than lock-in? Yeah, I can see why us customers wouldn't like choice....

    2. Matt Bryant Silver badge
      FAIL

      Re: They should never have dumped PA-RISC

      PA-RISC, as with all RISC designs, has limitations that were being fast approached. Itanium has long-since exceeded the performance of Alpha and PA-RISC. I assume that news didn't get under your bridge.

      1. Anonymous Coward
        Facepalm

        Re: They should never have dumped PA-RISC

        "PA-RISC, as with all RISC designs, has limitations that were being fast approached. "

        Such as? C'mon genius, fill us in as to what these supposed RISC show stoppers are? (I guess the Sparc and Power dev teams didn't get your memo)

        "Itanium has long-since exceeded the performance of Alpha and PA-RISC."

        Wow, you mean a current design of CPU exceeds the performance of ones whose development ceased pretty much in the late 90s and early 2000s respectively? You do bloody amaze me!

        Tell us, oh Wise One, do you think perhaps, if serious R&D had been spent on those chips, that they might possibly have far surpassed the Itanium?

        "I assume that news didn't get under your bridge."

        No idea, you seem to be occupying the spot under there at the moment.

        1. Dazed and Confused
          Boffin

          Re: They should never have dumped PA-RISC

          >> "PA-RISC, as with all RISC designs, has limitations that were being fast approached. "

          > Such as? C'mon genius, fill us in as to what these supposed RISC show stoppers are?

          Such as the limitation on only being able to retire 4 instructions per clock cycle. Even the basic McKinley could do 6; not that the Madison -> Tukwila cores extend that, but the architecture makes it practical. The article on the next-gen Itanium published here a couple of years back said it would be able to complete 12 instructions per clock cycle. (Just a pity real-world code rarely gets close.)

          RISC designs make handling inter-instruction dependencies difficult (read: expensive in transistor count). This severely limits OoO and speculative execution.
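
          An ISA-agnostic way to see the dependency point in code, for anyone who wants to play with it: the sketch below is not PA-RISC or Itanium specific, the function names are invented, and how big the difference comes out depends entirely on the compiler and CPU you run it on (build with -O2 and time each function).

          /* Serial dependence chain vs. independent accumulators. */
          #include <stdio.h>

          #define N 100000000L

          /* Every add depends on the previous result: little ILP to exploit. */
          static double sum_chained(const double *x)
          {
              double s = 0.0;
              for (long i = 0; i < N; i++)
                  s += x[i % 4];
              return s;
          }

          /* Four independent chains: the core can issue several adds per cycle. */
          static double sum_split(const double *x)
          {
              double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
              for (long i = 0; i < N; i += 4) {
                  s0 += x[0]; s1 += x[1]; s2 += x[2]; s3 += x[3];
              }
              return s0 + s1 + s2 + s3;
          }

          int main(int argc, char **argv)
          {
              (void)argv;
              /* Values depend on argc so the compiler cannot fold the loops away. */
              double x[4] = { argc, argc + 1.0, argc + 2.0, argc + 3.0 };
              printf("%f %f\n", sum_chained(x), sum_split(x));
              return 0;
          }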

          1. Anonymous Coward
            Anonymous Coward

            Re: They should never have dumped PA-RISC

            You really should not talk about execution width without mentioning clock rate. This seemed to be a pretty serious Itanium shortfall. That, and what seems to be mostly in-order execution, overuse of predication, lack of immediate-based addressing, large instruction size, heavy requirements on the compiler, etc. etc.

            Comparing to IBM's POWER7, which is 6 wide at 4.25 GHz, the superior 1.66 GHz Montvale would need to be 15.3 wide just to keep up. This and the other shortcomings make it far from a good ISA.
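
            (Checking the arithmetic behind that figure: 6 instructions per cycle at 4.25 GHz is a peak of 25.5 billion instructions per second, and 25.5 / 1.66 is roughly 15.4 instructions per cycle, hence the roughly-15.3-wide claim above.)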

            1. Matt Bryant Silver badge
              FAIL

              Re: Re: They should never have dumped PA-RISC

              ".....Comparing to IBM's POWER7....." Yeah, which is just such an amazing tech that IBM can't provide a roadmap past Power8, and even that is nothing more than a placemarker. There is only so long that IBM can carry on flogging what is obviously a dead horse.

              1. Anonymous Coward
                Anonymous Coward

                Re: They should never have dumped PA-RISC

                "There is only so long that IBM can carry on flogging what is obviously a dead horse."

                What? I think you are confused. Itanium is the chip that is dead... see article. IBM's Power 7 is the chip that is taking over the UNIX market.

          2. Anonymous Coward
            Anonymous Coward

            Re: They should never have dumped PA-RISC

            "Such as the limitation on only being able to retire 4 instructions per clock cycle"

            Ah, you mean in the same way that people 20 years ago were saying RISC could only ever manage 1 instruction per clock cycle?

            "RISC designs make handling inter-instruction dependencies difficult (read: expensive in transistor count). This severely limits OoO and speculative execution."

            That's why you have smart compilers for RISC that do instruction re-ordering, so it's not required for the CPU to do as much. That's a problem that was pretty much solved years ago.

            1. Dazed and Confused

              Re: They should never have dumped PA-RISC

              > That's why you have smart compilers for RISC that do instruction re-ordering, so it's not required for the CPU to do as much. That's a problem that was pretty much solved years ago.

              Actually, that is exactly the whole idea behind PA3 (aka Itanium): to offload the instruction re-ordering from the CPU to the compiler. But even leaving the performance aspects aside, a RISC design is required to execute instructions correctly, whereas Itanium allows for the garbage-in, garbage-out scenario. So on a PA processor, if you perform an ADD that writes its result into, say, GR1 and a LD that reads the value out of GR1 in adjacent instructions, the PA-RISC (and other RISC processors, AFAIK) is required to stall execution until the value is available from GR1. Now, from an instruction-scheduling perspective this is a dumb thing to ask the CPU to do, but the CPU must detect this situation and handle the instructions in order (OoO execution is speculative). Normally the compiler would make sure that you don't do dumb things like this, but if you write assembler it can happen.

              For an Itanium processor, on the other hand, the order of execution of instructions between "stops" is indeterminate, so the chip is not required to check for register interlock.

              This was the point of my original posting. And yes, I know it doesn't tell the whole story, but it is an example of the limitations inherent in RISC processor designs. That isn't to say that there aren't design limitations in IA64 or potential advantages in SE OoO RISC designs.

          3. L.B.

            Re: They should never have dumped PA-RISC

            "RISC designs make handling inter instructional dependencies difficult (read expensive in transistor counts), This severely limits OoO and speculative execution." - Complete coblers.

            The Alpha 21264 was much smaller than those EPIC failures. Plus the 21264 was doing OoO and Speculative Execution and could execute up to 6 instructions at once, 5 years before Merced limped out the door (21264 was released in 1996, Merced 2001).

            FACTS:

            Alpha 21264 = 15.2 million transistors (6 million for logic)

            Itanium2 = 221 million transistors (25 million for logic - same as Merced)

            Merced was nearly a decade in the making and was a complete failure on every technical metric; the only thing Intel (+HP) managed with it was to convince the naive bosses at Compaq (+DEC), SGI (MIPS) and HP to stop development of their own CPUs, believing the Intel hype/bullcrap.

            The biggest laugh being that Merced went from the "Super chip of the future" to "Evaluation prototype" status within a month of release.

            It took almost 4 years with zero developments in Alpha/PA/MIPS for Intel to produce CPUs that were even comparable to the old EV6 and equivalent products (for real software not rigged benchmarks).

            "McKinnely could do 6, not that the Madison -> Tukwilla cores extend that" - Yes, they didn't extend it because when they actually looked at the vast majority of code produced by compilers they found that 25-50% of the instruction blocks were full of no-ops due to instruction dependencies.

  12. Dazed and Confused

    @L.B.

    My apologies, my recollection was that the Alpha 21264 was also limited to completing only 4 instructions per clock, although, like PA2, it could be executing a lot more. But as you say, it was over 15 years back and never a primary interest of mine.

    Merced wasn't a great success was it :-)

    McKinley was designed at HP, and came in on target. Merced had largely done its damage by then.

    Or more precisely, the DEC/Intel deal to sell the Alpha team had resulted in AMD acquiring a fired-up group of chip engineers who went off and produced the x86_64.... sadly, the rest is history.

  13. Anonymous Coward
    Anonymous Coward

    "Itanium has long-since exceeded the performance of x86-64"

    Oh, alright I'll quote properly.

    "Itanium has long-since exceeded the performance of Alpha and PA-RISC. "

    How about x86-64 then?

    What kind of person voluntarily buys IA64 if the software they need is available on a Proliant dl980 g7 8core job (8 sockets, ie 80 cores, ie more cores than a Superdome. More memory than a Superdome. And more QuickPath IO than a Superdome)?

    But if you're historically locked into HP-UX or OpenVMS, IA64 is your only option.

    1. Anonymous Coward
      Anonymous Coward

      Re: "Itanium has long-since exceeded the performance of x86-64"

      > What kind of person voluntarily buys IA64 if the software they need is available on a Proliant dl980 g7 8core job (8 sockets, ie 80 cores, ie more cores than a Superdome. More memory than a Superdome. And more QuickPath IO than a Superdome)?

      Err, only 80 cores? That ain't SuperDome territory.

      Only 2TB of RAM; SD had that donkey's years back, and SD2 manages 4.

      It might manage more QuickPath IO, but it doesn't pack the overall IO capacity; it's short by about 170-something PCI slots.

      And while the rumours are that Oracle won't sign off the TPC numbers for the DL980 (coz they embarrass the Sun boxes), the rumoured numbers suggest that it hasn't reached the SD's 2007 throughput.

      On the other hand, it would of course manage its score at a tiny fraction of the Price/tpmC figure of the SD.

      1. Anonymous Coward
        Anonymous Coward

        Re: "Itanium has long-since exceeded the performance of x86-64"

        It would be pretty sad if it did not exceed the 2007 number of 4,092,799. This 256 core SD result was bested by a 64 core POWER6 at 6,085,166 in 2008.

        1. Anonymous Coward
          Anonymous Coward

          Re: "Itanium has long-since exceeded the performance of x86-64"

          The SD can't take 256 cores; it was 64 dual-core chips. The killer against the IBM score was the memory capacity, limited to 2TB (like the DL980). TPC figures are intimately tied to the RAM in the box.

          Having said that, the IBM score was damn impressive, particularly as no one has got close after 4 years (but then HP & Oracle aren't going to co-operate, are they - note no SD2 score).

          It's still a bloody stupid benchmark, however.

This topic is closed for new posts.
