Nvidia and ARM: It's a parallel, parallel, parallel world

Nvidia envisions a future in which ARM processors and the GPU-maker's CUDA parallel-computing platform and programming model will work together in perfect harmony, and the company has a raft of planned CUDA enhancements not only to make that coexistence seamless, but also to improve the programming environment for discrete GPUs, as …

COMMENTS

This topic is closed for new posts.
  1. Charles Manning

    Not just for CUDA

    ARM will be an important platform for **everything**.

    ARM pretty much covers everything from very low (sub-$1 microcontrollers) up to some servers.

    There is increasingly only niche space left for really tiny 8-bit micros and really big top-end microprocessors (servers, desktops, etc.).

    As ARM expands (in both directions), the space for non-ARM players gets smaller.

  2. M Gale

    Python

    but coding in it is productive, interactive, and "even fun."

    Maybe for you. I find it somewhere between COBOL, Perl and a wisdom tooth extraction in terms of the "fun" to be had.

    1. Hungry Sean
      Mushroom

      Re: Python

      Can't speak to COBOL, but I find it far closer to the wisdom tooth extraction than Perl, except that when you get your wisdom teeth out they normally give you Vicodin or something nice.

      Perl does its best to get out of my fucking way. Python, like Java, has an agenda, and I don't appreciate that.

    2. Charlie Clark Silver badge

      Re: Python

      I think the point about Python is that many HPC users are scientists with no formal training in programming. Python has several libraries, such as NumPy, that have made it popular across diverse scientific domains, and there are companies that serve and support those users. I think that combination is what keeps it a popular language for scientists. CERN has some nice graphics illustrating how the amount of Python code grows once the code for the heavy lifting (usually in C++ or FORTRAN) has been written. Benedikt Hegner covered this in his keynote at the German Python conference last November: an hour long, in German, and thoroughly entertaining.
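
      To make that division of labour concrete, here is a minimal sketch (not from the article; it assumes nothing beyond an installed NumPy): the Python is just glue, while the actual number-crunching runs in NumPy's compiled LAPACK routines.

      ```python
      import numpy as np

      # Set up a modestly sized linear system; the Python here is only glue.
      n = 2000
      rng = np.random.default_rng(0)
      A = rng.standard_normal((n, n))
      b = rng.standard_normal(n)

      # The heavy lifting happens inside NumPy's compiled LAPACK solver,
      # not in the interpreter -- the "C++/FORTRAN underneath" pattern.
      x = np.linalg.solve(A, b)

      # Sanity check, again done by a compiled matrix-vector product.
      print(np.allclose(A @ x, b))
      ```

      Solving the same system with hand-written Python loops would be orders of magnitude slower; the speed lives entirely in the compiled layer.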

  3. Anonymous Custard

    Too Little Too Late?

    Shame really, given the overnight news that the next-gen Nexus 7 business is going from Nvidia/Tegra to Qualcomm/Snapdragon. Since that's where a fairly large percentage of Tegra 3 chips ended up (~80% by some reports), Nvidia's future prospects may have taken something of a knock.

    1. qwarty

      Re: Too Little Too Late?

      Tegra 4 is certainly too late. Also, Google like diversity when dealing with suppliers of reference devices, so the Nexus 7 refresh switch is hardly a surprise. Rumor also has it that Microsoft will switch away from Tegra for the 2013 Surface RT models. All the same, if Nvidia succeed in catching up and we see Tegra 5 'Logan' samples in late autumn with Kepler integration etc., they must be a strong contender for the next wave of 2014 devices. A few million orders here or there don't say much about anyone's future prospects in such a fast-moving industry. What with the ARM competition, Intel's Haswell and the upcoming 14nm process, I'd hate to place any bets on the mobile space over the next couple of years, except that whatever happens it will almost certainly benefit us users of the devices.

  4. Hcobb
    Boffin

    Speaking of Java

    Wouldn't a run-time compiled language like Java be the perfect fit for strange hybrid architectures? You send code that has been checked for safety and correctness to the machine that will run it. The code is then split into bits and pieces across different functional units, adjusting in real time to the actual code flow to put data and code where they fit best.

    Simply taking existing x86 binaries and recompiling them on the fly might introduce all sorts of strange timing dependencies, unknown and untested by the programmers.
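
    The question is about Java, but the same ship-code-and-let-a-JIT-specialise-it idea can be sketched in Python with Numba (purely illustrative; Numba is not part of the article, and any other JIT would make the same point):

    ```python
    import numpy as np
    from numba import jit  # assumes Numba is installed

    # The decorated function is compiled to native code for the host CPU
    # the first time it is called; the source itself stays portable.
    @jit(nopython=True)
    def dot(a, b):
        total = 0.0
        for i in range(a.shape[0]):
            total += a[i] * b[i]
        return total

    a = np.arange(1_000_000, dtype=np.float64)
    b = np.ones_like(a)
    print(dot(a, b))  # JIT-compiled on first call, native speed afterwards
    ```

    The source keeps no knowledge of the target machine; the specialisation happens at the last possible moment, on the hardware that actually runs it.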

  5. raziv

    Where's the challenge?

    I have a feeling that, with increasing parallelism and diversity of architectures, it may become impossible for a programmer to write the optimal parallelization into the code. The challenge may fall more to the designers of the development environment or compiler to make the best use of the architecture's capabilities.
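
    A small sketch of what leaving that choice to the environment can look like (standard-library Python only; the work function is made up for the example): the code only states what may run in parallel, and the executor decides how many workers the machine gets.

    ```python
    from concurrent.futures import ProcessPoolExecutor
    import math

    def work(n):
        # Stand-in for a real compute kernel.
        return sum(math.sqrt(i) for i in range(n))

    if __name__ == "__main__":
        jobs = [2_000_000] * 8
        # No worker count is given: the executor picks one based on
        # the machine it happens to be running on.
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(work, jobs))
        print(sum(results))
    ```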

