Line up for parallelism

While doing some research for another project I came across an arguably old idea being revisited - or perhaps that should read "disinterred". Parallel processing is, a growing number of people believe, not just the way of the future but the only way real development progress is likely to be made in future. Parallelism is not …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    cc forloop.c -p

    Back in the days of VAX C... parallel programming was simple.

    say you have the algorithm:

    for (i = 1; i <= 10; i++)
        a[i] = b[i] + c[i];

    just compile it with -p for parallelism, e.g.

    cc forloop.c -p

    What happens next is:

    time=1

    i=1 runs on 1st processor

    i=2 runs on 2nd

    i=3 runs on 3rd

    i=4 runs on 4th

    time=2

    i=5 runs on 1st processor

    i=6 runs on 2nd

    i=7 runs on 3rd

    i=8 runs on 4th

    on a 4-processor system, that is. The VAX C compiler was able to automatically identify which loops could be run in parallel and do the processor shuffling :-)
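    For anyone who wants the same effect today, the closest analogue is probably OpenMP - my choice of tool for this sketch, not what VAX C actually used:

    /* parallel_forloop.c - a rough modern analogue of "cc forloop.c -p",
       using OpenMP (an assumption; VAX C had its own auto-parallelizer).
       Build: cc -fopenmp parallel_forloop.c */
    #include <stdio.h>

    int main(void) {
        double a[10], b[10], c[10];
        for (int i = 0; i < 10; i++) { b[i] = i; c[i] = 2.0 * i; }

        /* Each iteration is independent, so the runtime can hand
           iterations out to different processors, as described above. */
        #pragma omp parallel for
        for (int i = 0; i < 10; i++)
            a[i] = b[i] + c[i];

        for (int i = 0; i < 10; i++)
            printf("%g\n", a[i]);
        return 0;
    }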

  2. Anonymous Coward

    Nice job if you can get it.

    Multi-threading is one of my main areas of expertise. I do lock-free algorithms for a hobby because I don't consider conventional threaded programming difficult enough to make it interesting. Unfortunately, as far as finding work is concerned, having this skill is more of a liability than an asset. You start mentioning things like race conditions or forward progress guarantees and it's the same as lunatic babbling to a hiring manager who has a minimal understanding of threading at best.
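    For the curious, here's a minimal illustration of the sort of thing that gets the blank stares - my own sketch, using C11 atomics and POSIX threads:

    /* race.c - why "race condition" matters: two threads bumping a plain
       int lose updates; a lock-free atomic fetch-add does not.
       Build: cc -std=c11 -pthread race.c */
    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    int racy = 0;          /* unsynchronised counter   */
    atomic_int safe = 0;   /* lock-free atomic counter */

    void *bump(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            racy++;                         /* read-modify-write race */
            atomic_fetch_add(&safe, 1);     /* lock-free, no lost updates */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, bump, NULL);
        pthread_create(&t2, NULL, bump, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("racy=%d safe=%d (expect 2000000)\n", racy, atomic_load(&safe));
        return 0;
    }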

    If you're looking for hot programming skills that will get you work, web programming is still a better bet than parallel or multi-threaded programming.

  3. Kyle Stapp

    It's coming and no-one's gonna stop it

    I see huge benefits in terms of specialised computing. Multi-core and super-multi-core solutions are amazingly well suited to scalable computing algorithms. Obviously, not all algorithms are well suited to multi-core and super-multi-core scaling, but those that are will flourish like no other once the multi-core phenomenon matures.

    One great example is raytracing as opposed to raster graphics: a much better method of graphics rendering that will hit hard with the advent of many cores.
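    To make that concrete (my sketch, not from the article): every pixel in a raytraced image can be computed independently of its neighbours, which is exactly the shape of work many cores love. The helper here is hypothetical, and OpenMP is again my assumption:

    #include <stdio.h>

    #define W 320
    #define H 240

    /* stand-in for a real per-pixel ray computation (hypothetical) */
    static double trace_ray(int x, int y) { return (double)(x ^ y); }

    int main(void) {
        static double image[W * H];
        /* Pixels are mutually independent, so the outer loop can be
           farmed out across cores with no inter-iteration dependence. */
        #pragma omp parallel for
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                image[y * W + x] = trace_ray(x, y);
        printf("corner pixel: %g\n", image[0]);
        return 0;
    }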

    Other algorithms involving work that can be parallelised will become huge.

    Hardware can't do it alone, though, and there will have to be deliberate thought put into using multi-core technology when programming, to make use of scalable algorithms and solutions.

  4. Karl Lattimer

    Threading vs. Parallelism vs. Generalisation

    Firstly, how the hell can a VAX C compiler figure out what to do on which processors? Take, for example, the following snippet.

    #include <stdio.h>

    int main(void) {
        int b = 0, c;
        for (int i = 1; i < 6; i++) {
            c = b * 5;          /* c depends on b set by the previous iteration */
            b = i;
            printf("%d\n", c);  /* was "%s", which is undefined for an int */
        }
        return 0;
    }

    Run that through a VAX C compiler; if it were as simple as the comment above states, we'd have enormous massively parallel computers already.

    The problem with that kind of parallelism is that if the next iteration is dependent on the previous one, it all gets broken. Maybe this explains why we don't see VAX computers on sale on the high street.

    Talking about element management software exceeding the system's ability to compute the process that is running in parallel, and the management of the task, is simply ridiculous. Where massive parallelism is required to the extent that your management software is taking up too much time, the solution is simple: don't use any bleedin' management software!

    Take as an example a computer we all know. It has a massively parallel design; it is constructed of billions of processing elements which are individually simple and yet can handle far more powerful tasks than a Cray X1 can today.

    We all have one: it's our brain.

    What makes this computer special isn't simply its neurological design but, more importantly, its ability to distribute processing via lots of small adaptive tasks which can hypothesise what they are supposed to be doing based on what they are capable of doing and what data is provided. Couple that with the supportive neurotransmitter logic and its influence on dendritic communication, and you have a machine far more powerful than old arithmetic and logic units joined together in parallel.

    An ALU has importance in that we need to be able to do mathematical calculation quickly; this was the first requirement of a computer and the only reason for the Difference Engine. Couple the technology in an ordinary CPU with the networked routing technology of today's large parallel processors, and generalise!

    You take the loose logic and chaotic communication of a neural network design and couple that with very fast maths processing elements. Data is no longer routed to processor ID x of y; instead you think around the CPU addressing issue and just say: send this to a chip, get a response, then fire the result off to another chip. You simplify the design by having blocks of processing data sent out into a wide blue yonder of processors, each with a private memory space and a shared memory space.

    The only issue left to worry about is locking, but by using shared memory in read and write modes - allowing only read-after-write and write-after-read, plus various other logical memory management disciplines - you basically achieve something akin to a cross between a hardware silicon neural network and a classical computational system.

    You can even have flexible processor addressing influence the memory management, so five processors in the cluster may read and write a certain area of memory asynchronously. By giving your process block an ID, which is essentially a memory address identifying which chip is running the code, you can write memory, wait until the processor dealing with block xxx reads it, write to that area again, wait until processor yyy reads _that_ memory, and write to ... so on and so forth...

    Managing the processors no longer takes up any time, except the time spent waiting, which is required to avoid locking. You can't simply compile a program to do this, but you can extend existing threading models to respect the design of the system.
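    A crude sketch of that write-then-wait-until-read handoff - my interpretation of the idea, rendered in C11 atomics rather than any real hardware:

    /* handoff.c - one-slot mailbox: the writer publishes a value, then
       spins until the reader has consumed it before writing again.
       Build: cc -std=c11 -pthread handoff.c */
    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    atomic_int full = 0;   /* 0 = slot empty, 1 = slot holds unread data */
    int slot;              /* the shared memory area */

    void *writer(void *arg) {
        (void)arg;
        for (int v = 1; v <= 5; v++) {
            while (atomic_load(&full)) ;  /* write-after-read: wait for consumption */
            slot = v;
            atomic_store(&full, 1);       /* publish the value */
        }
        return NULL;
    }

    void *reader(void *arg) {
        (void)arg;
        for (int n = 0; n < 5; n++) {
            while (!atomic_load(&full)) ; /* read-after-write: wait for data */
            printf("got %d\n", slot);
            atomic_store(&full, 0);       /* mark consumed */
        }
        return NULL;
    }

    int main(void) {
        pthread_t w, r;
        pthread_create(&w, NULL, writer, NULL);
        pthread_create(&r, NULL, reader, NULL);
        pthread_join(w, NULL);
        pthread_join(r, NULL);
        return 0;
    }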

    It's just an idea I've been playing with for a few years; never gonna pursue it though. I don't earn enough money on my meagre income ;P

    K,

  5. John Airey

    Yeah but, no but

    Parallel computing is a brilliant idea let down by one tiny problem: humans. The trouble is we may have this really amazing brain (although you do meet people from time to time who cause you to question this), but we think in serial.

    I used to do OCCAM programming on the Transputer. My head hurt trying to parallelise computations. In fact I'm probably still traumatised from the effort now, and that's about ten years later.

    So unless we can raise a breed of programmers who think in parallel faultlessly then parallelism is pretty much doomed. We can't even get programmers who can write a perfect program in serial. Imagine the damage they could do in parallel!

    I guess we could make a machine to do it, but how would we program it? Perhaps we should just build Skynet and have done with it?

  6. David Norfolk

    Interesting, but

    Yes, as people have pointed out, managers don't "get" parallelism, and race conditions etc. are a fruitful source of interesting production "issues". And Amdahl's law still applies: a serial bottleneck usually limits the potential throughput from parallelism (OK, them's my words).
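    To put a number on that (my numbers, not the article's): Amdahl's law caps the speedup on N processors at 1 / ((1 - P) + P/N), where P is the fraction of the work that parallelises. With P = 0.95, the limit as N grows without bound is 1/0.05 = 20x, no matter how many processors you throw at the problem.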

    But good programming style (see Dominic Connor and Kevlin Henney, passim) is still about programming small cohesive units of work with minimal coupling - units that systems programs can parallelise. And an RDBMS can usefully parallelise database access by exploiting one of the few genius programmers who can think in parallel - and whom IBM, say, can afford to pay...

    Just don't expect ordinary programmers to design and code massively parallel systems. Even multi-threading will probably give rise to more than its fair share of bugs.
