Microsoft juices C++ for massively parallel computing

Microsoft has announced a new technology designed to help C++ developers build massively parallel applications. Known as C++ Accelerated Massive Parallelism – or C++ AMP for short – the technology will be included in the next version of the company's Visual C++ compiler, and Microsoft plans to open up the specification for others to use.
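The article doesn't show any code, but for a flavour of the style Microsoft demonstrated, here is a minimal sketch based on Microsoft's published C++ AMP material – the `concurrency` namespace, `array_view`, `parallel_for_each` and the `restrict(amp)` qualifier are the real AMP vocabulary, though this only builds with Visual C++, so treat it as an illustration rather than a portable example:

```cpp
#include <amp.h>
#include <vector>
using namespace concurrency;

// Doubles every element of v, dispatching the lambda to the GPU (or a
// CPU fallback). restrict(amp) limits the lambda to the subset of C++
// an accelerator can execute; array_view handles the host/device copies.
void scale(std::vector<float>& v) {
    array_view<float, 1> av(static_cast<int>(v.size()), v);
    parallel_for_each(av.extent, [=](index<1> i) restrict(amp) {
        av[i] *= 2.0f;
    });
    av.synchronize();  // copy results back to the host vector
}
```

The pitch in the keynote was exactly this shape: ordinary-looking C++ with one qualifier and one library header, rather than a separate kernel language.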

COMMENTS

This topic is closed for new posts.
  1. sT0rNG b4R3 duRiD
    Devil

    "the jump to C++ AMP is minimal"

    Ok... I'll bite.

    But I warn you, I'll spit it out if I don't like it :P

    How much then will it cost, hmmm ?

    (I haven't even upgraded to 2010)

  2. amanfromMars 1 Silver badge

    "the jump to C++ AMP is minimal" ?

    Methinks what Sutter and Microsoft are not telling you, for perfectly legitimate proprietary intellectual property protection and exclusive exploitation reasons ie the boundless fortune available to C++AMP Followers and SMART Programmers, is that the jump also requires a quantum leap into alternative mindset systems for the easy Presentation of Future Projects and Virtual Reality Promotions and Premieres ......... CyberIntelAIgent Launches for Power and Control of the SMART IntelAIgent Space which Shares for Exploitation and Ab Fab Lab Research and Development.

    But that is a field which all SMART Intelligent Services are also busy kitting out with their own user-friendly magic buttons and invisible levers/virtual connections and deep underground channels of communication.

    Microsoft, and Uncle Sam, late to the party again .... and playing catch up/gossip gather up for metadatabase reverse engineering of phished private stock options to discover Base Mine and Core Ore Sources ...... aka Creative Universal Meme.

    1. Anonymous Coward
      WTF?

      newSMARTintelAIgentHyperRadioProActiveClouds

      re: "the jump to C++ AMP is minimal", amanfromMars 1, Thursday 16th June 2011 08:14 GMT

      Methinks you are some kind of a nuTBot?

      http://www.wordle.net/show/wrdl/3773182/amanfromMars

      key words: Ab Fab Lab Research and Development, Base Mine, C++AMP Followers, Core Ore Sources, Creative Universal Meme, CyberIntelAIgent, SMART IntelAIgent Space, SMART Intelligent Services, SMART Programmers, Virtual Reality Promotions, alternative mindset systems, boundless fortune, deep underground channels of communication., exclusive exploitation reasons, invisible levers/virtual connections, legitimate proprietary intellectual property protection, metadatabase reverse engineering, phished private stock options, quantum leap, user-friendly magic buttons ...

  3. Tom 7

    "the jump to C++ AMP is minimal"

    yup - there seems to be absolutely nothing new in there at all - unless you are surprised by MS trying to claim a jump on a flown bird.

  4. AlanGriffiths

    Why choose a Microsoft technology?

    "Nvidia's CUDA, for example, is tuned to its own GPUs, and Sutter admitted in a post-keynote Q&A that "if you want to get the absolute best performance from one vendor's GPU, you will hardly be able to do better than that vendor's GPU stack". Then there's open-source OpenCL – hardly a vendor-specific approach to GPGPU computing."

    This is one of the delights of C++ – there are so many options to choose from. I prefer my code to be as portable as possible – across hardware, OSes and compilers. Obviously, this cannot always be achieved, but there's nothing about parallelism that requires Windows and Visual Studio. Selecting this technology would just make it harder to deploy on Linux, OS X, etc.

    1. Ru
      Meh

      Because you're an MS shop?

      Windows is hardly the first thing that pops into people's heads when they're looking for compute cluster platforms, but it can do a perfectly reasonable job especially if you already have the devs and sysadmins and license agreements to hand. This is probably more of a dogfooding exercise; I'll bet MS have been using it internally for their own stuff and see no reason not to provide it to everyone else too.

      I wonder why OpenMP wasn't good enough though. Maybe it just didn't cover enough of the parallel tasks they wanted. Nonetheless, OMP is well supported by several compilers across several platforms (including several versions of Visual Studio), and OMP code can be built just fine in a single threaded fashion using non-OMP capable compilers. I'll bet AMP doesn't have that same advantage.

  5. FSW

    Really?

    He lost a little bit of credibility when he claimed a child could write a brute-force password-cracking system spread across millions of cores in the cloud.

    1. Anonymous Coward
      Anonymous Coward

      Pedant

      I believe the man was just pointing out that it is getting easier all the time to write massively distributed and computationally intensive applications, which makes previously hard problems solvable by a larger group of developers (of which I imagine some whizz-kid teenager could be part).

      But I guess if you want to intentionally ignore his point so you can make a snide comment, then that's your call.

    2. Shakje

      Whatever you think about MS

      I really don't think Sutter has to prove his credibility to anyone.

  6. JDX Gold badge

    OMP

    Shame they didn't embrace OpenMP which IS a standard. However, I think the most important thing is whether MS are launching this as a proprietary technology, or are trying to push something which others will get on board with.

  7. DaveHenderson

    Threading Building Blocks

    Also worth a look in this area: http://threadingbuildingblocks.org/ – easy to use and open source. Targeted more at multiple CPUs than GPUs, but then it is from Intel.

  8. Mechanic
    Boffin

    The problem with compiler level parallelism ...

    is always speed. I come from a background in high-performance scientific computing, and I'm just about old enough to remember HPF and how poor it could often be when running on distributed-memory machines. We nowadays use MPI, which requires significantly more programmer effort than either HPF or the more modern OpenMP, but is almost always significantly more scalable. For our main fluid code we could only scale a given problem realistically to about 32 processors in HPF, about 64 in OpenMP, and to the entire machine (600 processors) using MPI. The same code has more recently been scaled to tens of thousands of processors in its MPI form. Even the best compilers are no substitute for good human design, and if you're talking about hundreds of thousands of cores then I just don't see any scheme like OpenMP or a rejuvenated HPF ever being useful unless your workload is embarrassingly parallel.

    I don't really know that much about non-scientific parallel workloads, but for the type of things that I work with better algorithm design to improve data locality will always help more than more sophisticated compiler design.

  9. Anonymous Coward
    Anonymous Coward

    Camp

    Really?

  10. Anonymous Coward
    Anonymous Coward

    Snivelling Coward Reporting In

    " ... and Microsoft plans to open up the specification for others to use."

    Oh how nice of them. Where would we all be by now without Microsoft?

    1. FozzyBear
      Thumb Up

      Where would we all be by now without Microsoft?

      I have no idea, but it is nice to dream about the alternatives

  11. JDX Gold badge

    huh

    Yeah I'd love a world where my app will only run on about 10% of the PCs unless I make 10 different versions of it. Mobile developers are welcome to that world, I don't want it.
