Intel rallies rivals on parallel programming education

Intel has enlisted chip rivals to push for making parallel programming a higher priority on computer science courses. Intel will kick off its campaign at Supercomputing 08 in Austin, Texas, next week, during a Monday session called "There Is No More Sequential Programming. Why Are We Still Teaching It?". Representatives from AMD …

COMMENTS

This topic is closed for new posts.
  1. John Savard

    Sequential Programming is not just basic

    it's also more efficient in terms of total resource consumption. For some problems, there are ways to program them in parallel, but at the cost of more total CPU cycles. That is entirely appropriate when it gets your answer sooner and the other parallel units would otherwise be sitting idle.

    So on a shared computer, one would avoid such algorithms.

    In other cases, there is no loss of efficiency; in that case, one tries to exploit the available parallelism by breaking up the program into as many pieces as can execute in parallel. Each of those pieces is... a sequential program.
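
    As a rough illustration (my own Python sketch, not something from the article): split a big sum into chunks, hand each chunk to a worker, and notice that each worker is just running an ordinary sequential loop.

        # Toy parallel sum: the parallel program is several sequential pieces.
        from multiprocessing import Pool

        def sequential_sum(chunk):
            total = 0
            for x in chunk:            # an ordinary sequential program
                total += x
            return total

        if __name__ == "__main__":
            data = list(range(1000000))
            workers = 4
            size = len(data) // workers + 1
            chunks = [data[i:i + size] for i in range(0, len(data), size)]
            with Pool(processes=workers) as pool:
                # Combine the partial results from each sequential piece.
                print(sum(pool.map(sequential_sum, chunks)))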

    But the idea of 'not teaching' sequential programming, although silly when taken literally, can still mean something sensible: to avoid teaching bad habits, introduce the search for potential parallelism very early when teaching programming. Phrasing it that way could keep people from arguing and taking sides.

  2. Oscar

    Multiple execution units?

    Can multi-processor systems not simply be set up to look like they have many more execution units, providing the kind of parallelism that most sequential programmers are already used to?

  3. Anonymous Coward

    @Oscar

    I don't understand your suggestion. What kind of parallelism is it that most sequential programmers are used to? Because, AFAIKS, sequential means just that: one execution unit, not multiplexed in any way visible to the programmer. Instructions execute in order, one at a time (or look exactly as if they do).

    What are you getting at?

  4. Bassey

    Tricky one

    I'd be interested to read an in-depth opinion piece on this. As an old-fashioned sequential programmer myself, I can't help but feel that you need to learn to create basic programming structures before you go off and start programming in parallel. There are also functions which, quite frankly, need to be programmed sequentially.

    Also, I can't help feeling that learning to program in parallel is a bit of a waste of time. Surely it won't be long before JIT compilers are breaking our code up for us and making the best use of the resources available.

  5. Pascal Monett

    @John Savard

    Now look what you've done.

    Your intelligent, thoughtful post has totally quenched any budding troll-based flamewar that could have made this comment section a good read and a hearty chuckle.

    Instead, I actually have something to think about.

  6. Louis Savain

    Multithreading Is a Monumental Mistake

    "Sequential is so over" but so is multithreading. Unfortunately, nobody at Intel, Microsoft and AMD seems to have received the news. Multithreading is the reason for the parallel programming crisis. It is not the solution. It bothers me to no end that the main players in parallel programming and multicore technologies have not learned the lessons of the last three decades. Their choice of multithreading as the de facto parallel programming model is a monumental mistake that will come back to haunt them. There is nothing wrong with making a mistake but forcing your entire customer base to switch to a hopelessly flawed computing model that they will eventually have to abandon is not something that will be easily forgotten or even forgiven. Many billions of dollars will be wasted as a result.

    There is an infinitely better way to design and program parallel computers that does not involve the use of threads at all. The industry has chosen to ignore it because the baby boomers who started the computer revolution are still in charge and they have run out of new ideas. Their brains are stuck in 20th century mode. Indeed, the parallel programming crisis is their doing. The industry needs a change of guard and they need it desperately.

    How to Solve the Parallel Programming Crisis:

    http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html

    See also:

    http://www.theregister.co.uk/2008/06/06/gates_knuth_parallel/

  7. Vince

    so...

    the summary is, the chip companies aren't smart enough to make computers faster anymore. So their solution is to make them harder to program. Excellent.

  8. RRRoamer

    I remember the same sort of issues when object-oriented languages were coming out

    I can remember back when C++ started to make inroads against ANSI C. You had three camps: 1) C rules, and objects are for wimps who can't keep track of their pointers; 2) objects are the future and "old school" coding is dying fast, so dump it quick before you start to stink too; 3) for some things object programming is MUCH better, and for some things "old school" is faster and quicker.

    It was kind of funny watching the battles. You would get some C++ snob that would figure out a way to use 20 pages of code and a dozen objects to implement the "Hello, World" program. Then the C guys would end up duplicating 80% of their code a dozen times over just to avoid the possibility of using C++ and an object. And yes, these are exaggerated examples!!!

    It looks like we are there again. We DO have to develop a better way to program the new, complex CPUs. I suspect that someone will end up developing a new way to program things that will assist people in making modular code that can easily synchronize with all the other modules. Of course, I don't have a CLUE how to actually do it, but I sure do hope someone somewhere DOES have a clue!

  9. storng.bare.durid

    @Pascal Monett

    Damn..

    /agree

  10. Louis Savain

    @John Savard

    You wrote, "But the idea of 'not teaching' sequential programming, although silly when taken literally, can still mean something that does make sense: to avoid teaching bad habits, introduce looking for potential parallelism very early in teaching programming."

    We will not solve the parallel programming crisis until we stop looking for parallelism in our sequential programs. A day will come when we will, instead, look for sequences in our parallel programs. That is to say, parallelism will be implicit and sequences will be explicit. Until then, we are just pissing in the dark. Just a thought.
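
    One way to read that (my own toy Python sketch, not Savain's model): hand independent tasks to an executor without stating any order between them, and impose order only where you explicitly consume a result.

        # Parallelism is the default; ordering exists only where a dependency is stated.
        from concurrent.futures import ThreadPoolExecutor

        def load(name):
            return "data:" + name

        def transform(blob):
            return blob.upper()

        with ThreadPoolExecutor() as pool:
            # No sequence stated between these two loads; they may run in parallel.
            a = pool.submit(load, "alpha")
            b = pool.submit(load, "beta")
            # The only explicit sequence: transform needs a's result first.
            c = pool.submit(transform, a.result())
            print(c.result(), b.result())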

  11. Joe Cooper

    @Louis Savain

    "This approach is ideal for graphical programming and the use of plug-compatible components. Just drag them and drop them, and they connect themselves automatically. This will open up programming to a huge number of people that were heretofore excluded."

    Bwahahhahahahahhah!

    This is such an old sales pitch. How many times have we all heard this one, seriously?

    Even Java was going to do this... Like, 10 years ago. And now rejigging the way parallelism works is going to do it?

    Nobody who spouts such obvious, insane, stupid crap should be trusted.

  12. Anonymous Coward

    Cooey

    The answer is in the functional programming languages; the Lisp strain is about to get its day.

    The thing to realise about threads is that a process is wrapped around a thread.

    You cannot really get rid of threads; they are the basis of how a program executes.

    A process wraps the thread and acts as a shield within which the thread runs. Now, when you add extra threads to a process you create problems; it is that simple. The problems are things like race conditions, non-deterministic results if the architecture changes, and so on. They are quite fundamental problems, and they exist at the design level.

    Concurrency via lightweight processes, state machines, a functional style, and inter-process communication is probably going to be the winner here; it is Erlang and Haskell that should emerge as the next-gen languages. Python has just released a multiprocessing module, but it will come down to style: you will have to code for concurrency, not expect the compiler or environment to work it out.
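
    For instance, a minimal shared-nothing, message-passing sketch using Python's new multiprocessing module (my own illustration; the names are made up):

        # Workers share no state; they communicate only through queues.
        from multiprocessing import Process, Queue

        def worker(inbox, outbox):
            # Each worker is a plain sequential loop: receive, compute, send.
            for item in iter(inbox.get, None):   # None is the shutdown signal
                outbox.put(item * item)

        if __name__ == "__main__":
            inbox, outbox = Queue(), Queue()
            procs = [Process(target=worker, args=(inbox, outbox)) for _ in range(4)]
            for p in procs:
                p.start()
            for n in range(20):
                inbox.put(n)
            for _ in procs:                      # one shutdown signal per worker
                inbox.put(None)
            results = sorted(outbox.get() for _ in range(20))
            for p in procs:
                p.join()
            print(results)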

  13. Ze

    @Louis Savain

    The concept of a global clock with double buffering just doesn't cut it. Global clocks are slow. Why should one part run slow if I can run other parts faster? Then you've got register/cache/memory speed issues. If we adopt your solution we end up running at the speed of the slowest *possible* bottleneck instead of the slowest bottleneck.
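
    A toy version of the objection (mine, not Savain's design): with barrier-style lockstep, every worker waits for the slowest one on every tick, so the whole system runs at the worst-case speed.

        # Four workers advance in lockstep; each tick costs as much as the slowest step.
        import threading
        import time

        BARRIER = threading.Barrier(4)

        def worker(step_time):
            for tick in range(5):
                time.sleep(step_time)   # pretend to do this tick's work
                BARRIER.wait()          # the 'global clock' edge: nobody advances early

        threads = [threading.Thread(target=worker, args=(t,))
                   for t in (0.01, 0.01, 0.01, 0.1)]
        start = time.time()
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print("elapsed ~%.2fs (5 ticks x the slowest step)" % (time.time() - start))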

    On the hardware front, I reckon we'll end up with a bunch of non-homogeneous cores with homogeneous instruction sets running on a fast I/O interconnect.

    On the software front, we'll end up with some form of multi-threading/multi-processing using either NUMA shared memory or message passing. Developers will just have to get used to the fact that programming is hard and that the things you learnt in your Computer Science degree are actually useful.

    BTW the sure sign of a kook is when they say algorithms are dead then present another algorithm.

