Supercomputing past masters resurface with coder-friendly cluster

The Supercomputing 2008 show in Austin is going to be the occasion for a lot of flashbacks, and not just because there are countless nerds on hand who came out of the University of California at Berkeley. The event is hosting the debut of a new supercomputer maker, Convey Computer, and the company's brain trust includes Steven …

COMMENTS

This topic is closed for new posts.
  1. Louis Savain

    The Baby Boomers Have Failed. Sorry.

    Let me be the first to congratulate Wallach on having solved the parallel programming crisis. Not!

    The baby boomers, due to their infatuation with Turing machines and their addiction to everything sequential and algorithmic, gave us the parallel programming and software reliability crises. Now they are old and they've run out of ideas. It is time for them to peacefully retire and let a new generation of thinkers have a go at it.

    How to Solve the Parallel Programming Crisis:

    http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html

  2. Anonymous Coward
    Joke

    protein sequencing

    So where is the add-in card that will boost my folding@home stats?

  3. alex d

    My analysis

    It might very well be easy to use the personalities, but the packaged vector-op personalities are worthless. FPGAs cannot get good performance masquerading as CPUs, especially on floating point code. ALL the magic will come from creating custom "personalities": software written in a hardware description language that transforms your algorithm into transistors. And doing that is hard.

    The problem is that such languages are backwards, left behind by the demands of a semiconductor industry that doesn't value usability and hates change. The first object-oriented HDL was released barely three years ago, few tools fully support it, and there are hardly any books on it. New generations of languages that automate many of the basic problems of hardware design (particularly the rather difficult task of passing data from one part of your "program" to another, and making sure everything arrives where it needs to be in the right number of cycles) have been prophesied for some time but have not materialized. Writing code that compiles to transistors is painful, few people know how to do it, and few people study the subject in school or in their spare time.

    If an independent software industry springs up, made up of a few brilliant coders who create useful personalities and resell them, then the FPGA idea could thrive. (Like I said, using already-made personalities seems easy.) But the vertical markets this thing targets don't work like that. Every one of their algorithms is unique, they develop everything from scratch, and the FPGA programming model and the transistor languages will be hell for them.

    It's a wonderful idea, but I just don't think the time is right for it yet. And if all you do is use the bundled vector personality, then a GPU will be ten times faster for, literally, one hundredth the price.

    Although, if you can really just offload code a few lines at a time (to create your own instruction), then rewriting those few lines in an HDL is not nearly as hard as rewriting an entire algorithm. There could be hope, if the granularity is there. Even so, to get even those few lines working you need to write many more to interface to the DRAM and the FSB. More information is needed on what tools Convey has really created (in those fleeting dozen months) to automate such chores.
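    To make the granularity point concrete, here is a rough sketch of what "offloading a few lines" might look like from the software side. Everything Convey-specific in it is invented for illustration (the cny_custom_op() intrinsic, the CNY_SAXPY opcode, the HAVE_CONVEY_PERSONALITY flag); nothing about their actual tools or API is implied.

    /*
     * Purely illustrative sketch -- not Convey's real API. The intrinsic
     * cny_custom_op(), the CNY_SAXPY opcode and the HAVE_CONVEY_PERSONALITY
     * flag are all made up to show the granularity being discussed.
     */
    #include <stddef.h>

    /* The "few lines" worth offloading: a hot inner loop (SAXPY). */
    static void saxpy_host(float a, const float *x, float *y, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    /* What a custom-instruction version might look like if the tooling
     * really lets you swap out a single loop. */
    static void saxpy(float a, const float *x, float *y, size_t n)
    {
    #ifdef HAVE_CONVEY_PERSONALITY            /* hypothetical build flag  */
        cny_custom_op(CNY_SAXPY, a, x, y, n); /* hypothetical intrinsic   */
    #else
        saxpy_host(a, x, y, n);               /* plain CPU fallback       */
    #endif
    }

    The algorithmic change is just the first function; all the real pain would be in the glue that the sketch conveniently leaves out, namely the HDL for the instruction itself and the DRAM and FSB interfacing around it.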

