Intel: 'Y'know that GPU-accelerated computing craze? Fuggedaboutit'

From Intel's point of view, today's hottest trend in high-performance computing – GPU acceleration – is just a phase, one that will be superseded by the advent of many-core CPUs, beginning with Chipzilla's next-generation Xeon Phi, codenamed "Knights Landing". "Tomorrow looks quite fundamentally different," Rajeeb Hazra, VP of …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    61 cores meh

    Multiple cores are nice if you have software optimised to use all of them, but GPUs are optimised for vector processing.

    In order to leverage the raw power of this Intel kit you need an OS and apps optimised to run on these 61 cores, certainly not something that is available off the shelf.

    If you go the Intel route then you're going to need to pay M$ on a per-CPU basis, or go with a properly optimised OS based on Unix, but you are still going to need the vector maths that everyone's applications are used to, and that still means GPUs.

    Have Intel said anything about when their optimised OS and apps are going to be available to go with their new kit? I ask because raw power alone has never been the key to market share, and without software to match the hardware it isn't going to measure up at all. The GPU market already has its software in place.
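
    (For what it's worth, "optimised to run on these 61 cores" mostly just means threaded, vectorisable code: Phi runs its own embedded Linux and plain OpenMP. A minimal sketch in C, assuming an OpenMP 4.0 compiler; the array size and values are purely illustrative:)

      #include <stdio.h>
      #include <omp.h>

      #define N (1 << 22)

      /* Scale a vector across however many cores the part exposes.
       * The same source runs on an ordinary multicore host or,
       * recompiled, natively on a 61-core Phi. */
      static double a[N], b[N];

      int main(void)
      {
          for (int i = 0; i < N; i++)
              a[i] = (double)i;

          #pragma omp parallel for simd
          for (int i = 0; i < N; i++)
              b[i] = 2.0 * a[i] + 1.0;

          printf("max OpenMP threads: %d, b[N-1] = %f\n",
                 omp_get_max_threads(), b[N - 1]);
          return 0;
      }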

    1. Stephen Channell
      Unhappy

      MIC likes Monte Carlo

      For embarrassingly parallel apps like Monte Carlo simulation of alternative futures, Phi wins over GPGPU (which wins for fluid dynamics) with little code change, especially for portfolios that need virtual function dereferencing.

      No shelling out cash to M$ (Phi has an embedded Linux kernel), but it doesn't run Windows (which is a shame, because it precludes .NET).
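
      (To illustrate why Monte Carlo maps so naturally onto a many-core part: the only parallel machinery is a pragma and a reduction, and the loop body stays ordinary C. A rough sketch assuming POSIX rand_r; the trial count and seeding are made up, not Stephen's actual workload:)

        #include <stdio.h>
        #include <stdlib.h>
        #include <omp.h>

        /* Embarrassingly parallel Monte Carlo estimate of pi: each thread
         * runs independent trials on its own RNG state, and the partial
         * hit counts are combined by the reduction. */
        int main(void)
        {
            const long trials = 100000000L;
            long hits = 0;

            #pragma omp parallel reduction(+:hits)
            {
                unsigned int seed = 1234u + (unsigned int)omp_get_thread_num();

                #pragma omp for
                for (long i = 0; i < trials; i++) {
                    double x = (double)rand_r(&seed) / RAND_MAX;
                    double y = (double)rand_r(&seed) / RAND_MAX;
                    if (x * x + y * y <= 1.0)
                        hits++;
                }
            }

            printf("pi ~= %f\n", 4.0 * (double)hits / (double)trials);
            return 0;
        }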

    2. JLH

      Re: 61 cores meh

      "In order to levellerage the raw power of this intel kit then you need a OS and apps optimised to run on these 61 cores, cerainly not something that is availible off the shelf."

      Sorry to be a Linux fanboi (I am, actually), but Linux already runs on hundreds of cores on SMP machines.

      Applications can already scale to thousands of cores. OK, I'll give you the 'optimised' point, but you already have applications running on multicore SMP machines.

  2. Destroy All Monsters Silver badge
    Holmes

    "Yeah, Intel. What are you going to do, bleed on me??"

    Apparently there is some confusion about how multicore is in competition with GPGPU processors. Newsflash: they are complementary. So cramming more Intel CPUs in there is good, as long as you can afford the gas turbine to power them and the Freon recirculator to remove the waste heat, but this won't cause the GPGPUs to be dropped on the floor.

    Still waiting for NVidia or AMD to attach ARM cores to their boards, so that the motherboard becomes just a dispatcher.

    1. Tom Womack

      Re: "Yeah, Intel. What are you going to do, bleed on me??"

      GPGPU and many-core are very much in the same regime: Intel gives you 456 1.1GHz double-precision FP units in 300W, and at a similar price nVidia gives you 832 706MHz double-precision FP units in 225W.
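
      (Working those figures through, on the assumption that each FP unit retires one fused multiply-add, i.e. two FLOPs, per cycle; the multiplier and the per-watt framing are mine, not Tom's:)

        #include <stdio.h>

        /* Peak DP GFLOP/s = units x clock (GHz) x 2 FLOPs per FMA (assumed). */
        int main(void)
        {
            double phi  = 456.0 * 1.1   * 2.0;   /* ~1003 GFLOP/s in 300 W */
            double nvda = 832.0 * 0.706 * 2.0;   /* ~1175 GFLOP/s in 225 W */

            printf("Intel Phi: %.0f GFLOP/s peak, %.2f GFLOP/s per watt\n",
                   phi, phi / 300.0);
            printf("nVidia:    %.0f GFLOP/s peak, %.2f GFLOP/s per watt\n",
                   nvda, nvda / 225.0);
            return 0;
        }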

    2. Anonymous Coward

      Re: "Yeah, Intel. What are you going to do, bleed on me??"

      NVidia will be placing ARM cores on their next-gen Maxwell GPUs in an effort to stem the bleeding that AMD's HSA implementation and design wins are causing.

      AMD's Kaveri will allow unified addressing not only between its CPU and iGPU, but also with dGPUs of the R9 290* and R7 260X varieties, thanks to their IOMMUs.

      CUDA 6 provides NVidia with a proprietary stop-gap answer to AMD's lower-latency single-chip HSA solutions, but I'm rather more interested in feeding dGPUs with Steamroller cores than with whatever wimpy ARM NVidia comes up with for Maxwell.
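
      (For anyone wondering what the CUDA 6 "stop-gap" looks like: unified memory gives host and device a single pointer and lets the runtime migrate the pages, rather than removing the latency in hardware the way a single-die HSA part does. A host-side C sketch with the kernel launch left out, so treat it as illustrative rather than complete:)

        #include <stdio.h>
        #include <cuda_runtime.h>

        /* CUDA 6 unified memory: one allocation is addressable from both
         * the CPU and the GPU, with no explicit cudaMemcpy round trips. */
        int main(void)
        {
            const size_t n = 1 << 20;
            float *data = NULL;

            /* A single pointer, valid on host and device alike. */
            cudaMallocManaged((void **)&data, n * sizeof(float), cudaMemAttachGlobal);

            for (size_t i = 0; i < n; i++)   /* fill on the host... */
                data[i] = 1.0f;

            /* ...a kernel would read and write 'data' here with no copies,
             * e.g. scale<<<blocks, threads>>>(data, n), omitted here. */

            cudaDeviceSynchronize();         /* let any GPU work finish first */
            printf("data[0] = %f\n", data[0]);

            cudaFree(data);
            return 0;
        }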

      1. anekemak

        Re: "Yeah, Intel. What are you going to do, bleed on me??"

        What did you mean by Kaveri allowing unified addressing between the CPU and R9 290s? Do you mean that Kaveri and the R9 290 work together?

  3. Zola
    FAIL

    Intel failed at making decent GPUs

    So of course they're going to say GPU compute is just a phase.

    When all you are able to design is a CPU...

    That's not to say that whatever Intel create doesn't have a place, but rubbishing the alternatives because you tried and failed doesn't reflect well on Intel.

    1. Infernoz Bronze badge
      Facepalm

      Re: Intel failed at making decent GPUs

      Not just decent GPUs; they fail on cost for everything they make, too!

      I can build equivalent-speed AMD systems /much/ cheaper because the APU (not just a CPU) and chipsets are cheaper, the chipsets are often better, and commodity AMD motherboards are often a lot better and cheaper too!

      Intel will keep losing market share, because ARM, AMD, and NVIDIA will keep chipping away at the low/mid end, then eat Intel from the bottom up, leaving Intel just the top end; and the size of the top end may well shrink as more tasks can be done concurrently.

      Intel are frankly wasting their time at the low end of the market, because their architectures are too bloated, ugly, and costly, and process shrinks only gain so much, at a much higher cost!

      1. Dr. Mouse

        Re: Intel failed at making decent GPUs

        "ARM, AMD, and NVIDIA will... eat Intel from the bottom up"

        Oh, Matron!

    2. Charlie Clark Silver badge

      Re: Intel failed at making decent GPUs

      Credit where credit's due: the most recent integrated GPUs are comparable with those from AMD and nVidia, and this has been driven by the work on Phi.

      Intel is aware of the competition. It is just poorly placed to compete on an increasingly important aspect: price. This is always relative: while supercomputers are not cheap, it was the relative cheapness of x86 that drove its increasing adoption over the last few years. IBM can afford to do prestige projects at around cost, but the rest of the non-x86 chip vendors couldn't.

  4. Mikel

    Always jam tomorrow

    http://en.m.wikipedia.org/wiki/Jam_tomorrow

    1. Charlie Clark Silver badge

      Re: Always jam tomorrow

      Yes, especially when an article just regurgitates company PR statements.

  5. Anonymous Coward
    WTF?

    The co-processor board sure looks familiar. I had that in my A2000 (1987), where I had both a 68030, with all its coprocessors, and a '386/7 processor board when that came out. That's where I learned heterogeneous computing and all the other multi-everything.

    Back to the past, yo!
