Intel: Our first five parallel-computing schools are open for business

Chip giant Intel has begun funding research into ways to make applications easier to write for parallel processing systems, and to teach those methods to future generations of computer engineers and scientists. The first five institutions to qualify as Intel Parallel Computing Centres are: the Konrad Zuse Information …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    Doing things in parallel isn't really how we think.

    1. Destroy All Monsters Silver badge

      That may be true, but it is also irrelevant.

      Besides the lower-level layers are definitely parallel.

      Go even lower and apparently you have to deal with the powerset.

  2. RichWa

    Parallel is Not that Hard

    The only reason we're being pushed onto parallel is thermodynamics: the heat that stopped us clocking any faster, hence multi-core, hence the need to force parallel on us. Along with, of course, some excellent marketing. (Remember that turbo-mode, where cores are shut down so a single core can go faster, is now touted as a feature.)

    The difficulty with parallel is not in designing parallel; it's simply that the problems typical people use computers for are linear and iterative, and so don't lend themselves to parallel solutions. If a problem is analysed objectively and the solution is based on that analysis, it's not really complicated to implement, regardless of whether it uses parallelism or is linear. When the solution is linear and we try to implement it as parallel, it gets ugly and way too complicated.
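
    To make that concrete, here is a minimal C++ sketch (the function names and the arithmetic are just placeholders): the first loop's iterations are completely independent, so they can be split across any number of cores; the second can't be, because every step needs the previous result.

      #include <cmath>
      #include <vector>

      // Independent iterations: each output depends only on its own input,
      // so the loop can be carved up across as many cores as you like.
      std::vector<double> parallel_friendly(const std::vector<double>& in) {
          std::vector<double> out(in.size());
          for (std::size_t i = 0; i < in.size(); ++i)
              out[i] = std::sqrt(in[i]) * 2.0;   // no cross-iteration dependency
          return out;
      }

      // Inherently sequential: step i needs the result of step i-1, so extra
      // cores don't help without rethinking the algorithm itself.
      std::vector<double> parallel_hostile(const std::vector<double>& in) {
          std::vector<double> out(in.size());
          double running = 0.0;
          for (std::size_t i = 0; i < in.size(); ++i) {
              running = running * 0.9 + in[i];   // depends on the previous step
              out[i] = running;
          }
          return out;
      }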

  3. Faye B

    Already Here?

    We already use parallel processing to a certain extent. It's called multi-threading, the only real difference being that the processor and underlying OS handle this parallelism with techniques such as time slicing. The reason for avoiding it in the past has been the additional complexity it adds to the code and the greater difficulty in testing. Even the humble smartphone uses it, particularly when connecting to the network or running some kind of media, requiring asynchronous and time-boxed functionality such as screen refresh for animation.
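
    For anyone who hasn't used it, the basic pattern is only a few lines. A minimal C++ sketch (the two task functions are invented stand-ins for the phone's background and foreground work): spawn a thread for the slow job, carry on with the interactive one, then join.

      #include <chrono>
      #include <iostream>
      #include <thread>

      void fetch_from_network() {                    // slow background task
          std::this_thread::sleep_for(std::chrono::milliseconds(200));
          std::cout << "network data ready\n";
      }

      void refresh_screen() {                        // time-boxed foreground task
          std::cout << "frame drawn\n";
      }

      int main() {
          std::thread background(fetch_from_network); // runs concurrently...
          refresh_screen();                           // ...while we keep drawing
          background.join();                          // wait before exiting
      }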

    So initially we should see an improvement in the OS as it handles multiple threads truly in parallel, followed by more complex software being able to run faster and more efficiently, making it possible for higher AI functions to be implemented on more compact devices. Currently Siri is run as a thin client to a massive backend engine, but that could change with this new chipset. Also facial recognition and security could be improved, not to mention how speech recognition could expand into speech comprehension and translation.

    I think what scares people the most is the idea of extreme parallel processing of the sort needed to analyse or create video in real time, where each pixel is subjected to its own process, but this is already happening with video graphics cards. Or my personal favourite, neural networks, where each 'neuron' is given its own process to react independently of, but collaboratively with, other neurons to form a network. That's the kind of parallel processing I am looking forward to.

  4. Crisp

    Easy Access to Parallel Computing

    Nvidia's CUDA is a great introduction to parallel computing. Recently, I've been looking at CUDA to do financial calculations, a problem that lends itself very well to parallel computing, as there are a lot of calculations that don't depend on the results of others.

    Rather than going through each transaction sequentially, the work can be divided up and farmed out to the graphics card, and the task finishes orders of magnitude sooner.
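
    The host-side shape of that division of work looks roughly like this. It's only a CPU sketch in plain C++ with std::async standing in for the GPU, and price_one is a made-up placeholder; with CUDA the per-transaction function becomes a kernel launched with one thread per transaction, after copying the input to device memory.

      #include <algorithm>
      #include <future>
      #include <vector>

      // Placeholder per-transaction calculation that needs nobody else's result.
      double price_one(double notional) {
          return notional * 1.05;                    // pretend this is expensive
      }

      std::vector<double> price_all(const std::vector<double>& txns, unsigned workers) {
          std::vector<double> out(txns.size());
          std::vector<std::future<void>> jobs;
          const std::size_t chunk = (txns.size() + workers - 1) / workers;

          for (unsigned w = 0; w < workers; ++w) {
              const std::size_t begin = w * chunk;
              const std::size_t end   = std::min(txns.size(), begin + chunk);
              jobs.push_back(std::async(std::launch::async, [&, begin, end] {
                  for (std::size_t i = begin; i < end; ++i)
                      out[i] = price_one(txns[i]);   // every element is independent
              }));
          }
          for (auto& j : jobs) j.get();              // wait for all the chunks
          return out;
      }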

  5. Nigel 11

    One way to think parallel

    One way to think in parallel is a spreadsheet!

    Behind the scenes, a spreadsheet keeps a list (graph) of which cells depend on which other cells. Provided the dependencies are processed in the right order, you can recalculate each cell in parallel with many others. LibreOffice has the use of GPGPUs for calculation as a future goal.

    Of course, most actual spreadsheets are not large enough to keep more than a single core significantly busy, but one might "code" for parallel execution using a program design methodology that looks a lot like a spreadsheet and its underlying, automatically generated, dependency graph.
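
    Something like this, in spirit. A toy C++ sketch, not how any real spreadsheet engine is written: group the cells into levels by their dependencies; everything within one level is independent of everything else in that level, so each level could be recalculated in parallel.

      #include <map>
      #include <set>
      #include <string>
      #include <vector>

      // cell -> the cells it depends on, e.g. C1 = A1 + B1.
      using Deps = std::map<std::string, std::set<std::string>>;

      std::vector<std::vector<std::string>> levels(Deps deps) {
          std::vector<std::vector<std::string>> out;
          std::set<std::string> done;
          while (!deps.empty()) {
              std::vector<std::string> ready;        // cells whose inputs are all done
              for (const auto& [cell, needs] : deps) {
                  bool ok = true;
                  for (const auto& n : needs)
                      if (!done.count(n) && deps.count(n)) ok = false;
                  if (ok) ready.push_back(cell);
              }
              if (ready.empty()) break;              // circular reference: give up
              for (const auto& c : ready) { done.insert(c); deps.erase(c); }
              out.push_back(ready);                  // one batch of parallel recalculation
          }
          return out;
      }

      // levels({{"A1", {}}, {"B1", {}}, {"C1", {"A1", "B1"}}, {"D1", {"C1"}}})
      //   -> {A1, B1}, {C1}, {D1}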

    More generally, the problem is software. At present we do parallel coding mostly using sequential languages and the spawning of multiple execution threads. Other approaches are needed, with the computer doing the parallelisation, not the human programmer.

    The irony is that nature worked it out a very long time ago. A brain is a parallel processing system par excellence: 10^11 processors operating asynchronously at something like 20Hz. But it's possible for an evolved system to be beyond its own understanding, and so far even insect brains seem to be beyond our ken.

