Intel juices parallel programming for the masses

A year after open sourcing its cross-platform Threading Building Blocks (TBB), Intel has released what it called a "significant" upgrade to its parallel programming template library. Intel used the O'Reilly Open Source Convention (OSCON) to announce TBB 2.1, with improvements for building - among other things - graphical user …

COMMENTS

This topic is closed for new posts.
  1. E

    Hot Air

    A library is not especially valuable unless the programmer already knows what to thread, and how, in the first place. In which case pthreads/winthreads are perfectly useful (a raw pthreads version of a typical task is sketched below).
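
    [A minimal sketch of the "raw" pthreads style being referred to, for readers who haven't met it; the array-summing task, the fixed four-way split and all the names are illustrative assumptions, not anything from TBB - Ed.]

        // Hypothetical example: summing an array with raw pthreads.
        // The programmer picks the thread count and the work split by hand.
        #include <pthread.h>
        #include <cstdio>

        const int NTHREADS = 4;
        const int N = 1000000;
        double data[N];                      // zero-initialised global

        struct Chunk { int begin, end; double partial; };

        void* sum_chunk(void* arg) {
            Chunk* c = static_cast<Chunk*>(arg);
            c->partial = 0.0;
            for (int i = c->begin; i < c->end; ++i)
                c->partial += data[i];
            return 0;
        }

        int main() {
            pthread_t tid[NTHREADS];
            Chunk chunks[NTHREADS];
            for (int t = 0; t < NTHREADS; ++t) {
                chunks[t].begin = t * (N / NTHREADS);
                chunks[t].end   = (t + 1) * (N / NTHREADS);
                pthread_create(&tid[t], 0, sum_chunk, &chunks[t]);
            }
            double total = 0.0;
            for (int t = 0; t < NTHREADS; ++t) {
                pthread_join(tid[t], 0);     // wait, then merge partial sums
                total += chunks[t].partial;
            }
            std::printf("sum = %f\n", total);
            return 0;
        }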

  2. Anonymous Coward
    Thumb Down

    parallel processing for the masses is an oxymoron

    Let's be honest here: the average programmer solving business problems will never have the expertise to manage parallelism -- nor should they. We're trying to abstract them from the hardware, not entangle them in it.

    Multicore processors should be optimized by the OS, the middleware, and possibly the compilers/VMs, using code written by a small cadre of experts, just like we did in the past for multiprocessor boxes, SMP, distributed systems, specialized processors such as GPUs and FPUs and -- go back far enough -- I/O channel handlers. What's wrong with, say, a JEE server allocating EJBs on a per-core basis transparently to the application code just like it does today on multi-VM or multiprocessor systems? You'll probably get nearly all of the potential benefit for a tiny fraction of the disruption.

    Every time some new hardware innovation comes along we get this same song: "ohmigod, how will mainstream programmers learn to optimize for this?". The answer is, they don't. That's what the systems software is for.

  3. Anonymous Coward
    Boffin

    Threads: tomorrow's technology, and has been for decades

    Nothing new under the Sun (or DEC, or HP, or IBM, or even SGI).

    The closest Joe Public's PC will ever get to multiple parallel activities is printing a document while surfing the Internerd while a virus scan is running while FindFast is indexing while the SETI screensaver is computing. And that already works, after a fashion, without any extraclever threadedness (needing extraclever programmers) in run-of-the-mill apps.

    In the world of enterprise-class software, real programmers are already doing this SMP-oriented stuff and don't need vendor-specific help from Intel. When did pthreads come out? And before that, when was Parallel Fortran? Where were Intel back then, and why has it taken them till now to stop being obsessed with MHz and GHz?

    zzzzz....

  4. Anonymous Coward
    Unhappy

    TBB isn't worth it

    I have done a pretty extensive review of TBB from Intel, and it all looks quite interesting until you start actually trying to use it. Some parts are useful, like the concurrent hash map (sketched below), but most of it offers only a marginal improvement over conventional tools. The reality is that multi-threading is hard, because to see any real improvement over a single thread you have to get the design right, and TBB doesn't help with the design. Software simians chimping out generic data centre code will find no noticeable difference in performance from using this. The real experts out there will find some of the tools useful, but will be able to do everything they need with existing tools like pthreads or boost.threads. I suspect most of them will carry on using the tools they are most familiar with.
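
    [A minimal sketch of the concurrent hash map the poster singles out, using its accessor-as-lock idiom; the tallying task is an illustrative assumption - Ed.]

        // Hypothetical example: counting occurrences with
        // tbb::concurrent_hash_map. The accessor holds a per-element lock.
        #include "tbb/concurrent_hash_map.h"
        #include <cstdio>

        typedef tbb::concurrent_hash_map<int, int> CountTable;

        void tally(CountTable& table, int key) {
            CountTable::accessor a;    // write lock on the element
            table.insert(a, key);      // inserts {key, 0} if absent
            a->second += 1;            // safe: no other thread holds 'key'
        }                              // lock released as 'a' goes out of scope

        int main() {
            CountTable table;
            for (int i = 0; i < 100; ++i)
                tally(table, i % 7);   // single-threaded here, for brevity
            CountTable::const_accessor ca;   // read lock
            if (table.find(ca, 3))
                std::printf("key 3 seen %d times\n", ca->second);
            return 0;
        }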

  5. amanfromMars Silver badge
    Jobs Horns

    When is a Game more than just a Game

    "Multicore processors should be optimized by the OS, the middleware, and possibly the compilers/VMs, using code written by a small cadre of experts,..." .... By Anonymous Coward Posted Wednesday 23rd July 2008 13:05 GMT

    AC,

    That would be better written ... Multicore processors should be optimized for the OS, the middleware, and possibly the compilers/VMs, using code written by a small cadre of experts ...... which is Ethical Hack Crack Code Red Team Territory in Systems Shakedowns and ShutDowns. Friendly Virtual Fire at Compromised Systems to Justify Repair to Fitness for Purpose Systems..... which will be AI System ReBuilds in the Cloud ... and a whole New Transparent Web Interface in and for Virtual Control of the Great Game.

    With the Internet being the New Transparent Web Interface does OS and Hardware play second fiddle to Content Shared, for IntelAIgently Designed Content leads Man's Evolution in the Present Digital Binary Environment.

    The Paradigm Shift which has caught Money with no Power in ITs Control. Although IT can be Bought but you only Get what you Pay for, so presumably Pay the Best for the Best or Suffer Sub-Prime all the Time.

    And the Steve icon because his spark is fading.

  6. Anonymous Coward
    Coat

    The SPARC is fading

    Amanfromarse, if anyone's SPARC is fading, it's yourn.

    Have a nice day. Preferably somewhere else, somewhere other than here. Anyone with a clue should be able to find the other places you hang out; I'm not going to give you the undeserved publicity.

    Mine's the one with the nice leather wrist straps, tied round the back. You can have it back soon.

  7. Kurt Guntheroth

    if only my tasks were parallel

    If I could just type all the letters of this message at one time, I could exploit the parallelism in my multicore chip. The sad fact that Intel doesn't want to hear is that the great majority of tasks are serial. Sure, you can embed brilliant bits of parallelism into searches and such, and a few tasks can run in parallel, like page updating and networking, but honestly, it's a serial world.

    That means Intel won't be able to charge more and more each year for chips unless it finds a way to make uniprocessor performance continue to increase. Sucks for them, great for us.
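
    [The usual way to quantify this point is Amdahl's law: if a fraction p of a task can be parallelised across n cores, the overall speedup is 1 / ((1 - p) + p/n). A task that is 80 per cent parallel manages 1 / (0.2 + 0.8/4) = 2.5x on four cores, and can never beat 1 / 0.2 = 5x however many cores you add. The figures are illustrative, not from the article - Ed.]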

  8. Louis Savain

    The Professor vs. the Thread Monkeys

    The strange thing about Intel's addiction to multithreading is that Professor Edward Lee, the head of UC Berkeley's Parallel Computing Lab, which is partially funded by Intel and Microsoft (another infamous bastion of thread monkeys), is known for rejecting threads as a viable solution to parallel programming. The Professor made big news a couple of years ago when he published his now-famous paper "The Problem with Threads".

    One is left to wonder: will Professor Lee cave in to the thread monkeys (like the gutless cowards at Stanford's Pervasive Parallelism Lab) or will he have enough huevos to tell them to their faces that they're full of crap? Time will tell.

    UC Berkeley’s Edward Lee: A Breath of Fresh Air

    http://rebelscience.blogspot.com/2008/07/uc-berkeleys-edward-lee-breath-of-fresh.html

  9. Jim Cownie

    Misconceptions

    Getting the disclaimers out of the way first. I work for Intel and know the team working on TBB, so you're completely free to believe that I'm biased and ignore me completely.

    There are some misconceptions in other comments which I can't let pass.

    1) TBB adds nothing which you can't do with threads.

    In a mathematical sense this is trivially true, since TBB is itself built on threads. However, it misses the point that TBB provides a major advantage over "raw" threads: when you use TBB you think about how to split your problem into tasks, and then let TBB choose how to execute those tasks for best effect on the current machine. You don't have to worry about how many threads to create, or how to distribute work to them. (Indeed, you shouldn't even need to know how many threads are being used.) Instead you focus on your problem and let TBB handle all of that complexity. A sketch of this style follows at the end of this comment.

    2) "Don't need vendor-specific help from Intel"

    TBB is open source (GPL v2 with the runtime exception). Ports already exist for AIX/Power, Sun/SPARC, Cell etc. So "vendor specific" is just plain wrong.

    3) Threads are a bad programming model

    I agree, but given where we are, what do you suggest?

    We all stop writing code for ten years while the academics sort it out? We all switch to Erlang? Occam? Prolog (which can be executed in parallel without requiring user input)?

    The reality is that we have the machines we have. People are writing threaded code now and will continue to do so. TBB is an attempt to make that easier for them by lifting the level at which they do it, making it easier for them to get it right and have codes which scale as more hardware resources become available. We don't claim it's the ultimate solution (heck, it's in C++ and that's not the ultimate solution either :-). However it is available now, easy to add to existing code and provides simplifications and significant benefits over raw threads.

    -- Jim
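
    [To make point 1 concrete, a minimal sketch in the TBB 2.x style; the scale-a-buffer task, the names and the grain size are illustrative assumptions, not Intel's own example - Ed.]

        // Hypothetical example: describe the work as a range of tasks;
        // TBB picks the threads and distributes the chunks.
        #include "tbb/parallel_for.h"
        #include "tbb/blocked_range.h"
        #include "tbb/task_scheduler_init.h"

        struct ApplyGain {
            float* data;
            float  gain;
            void operator()(const tbb::blocked_range<size_t>& r) const {
                for (size_t i = r.begin(); i != r.end(); ++i)
                    data[i] *= gain;       // each element is independent
            }
        };

        int main() {
            tbb::task_scheduler_init init;  // thread count left to TBB
            const size_t n = 1000000;
            float* buf = new float[n]();
            ApplyGain body = { buf, 1.5f };
            // Nowhere do we say how many threads to use, or which thread
            // gets which chunk; 10000 is just a minimum grain size.
            tbb::parallel_for(tbb::blocked_range<size_t>(0, n, 10000), body);
            delete [] buf;
            return 0;
        }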

  10. BlueGreen

    @Louis Savain

    You still don't get it do you?

    Threads are an efficient means of exposing potentially higher performance from processors with ever-increasing transistor budgets, but per se they are neither good nor bad; treating threads as a programming *model* is arguably bad. Hiding them away behind more sophisticated *models* is arguably much better: the threads are still there and being used, just in a controlled way (a sketch follows below).

    You still haven't understood the difference between a *model* and an *implementation*, which is what that paper (thanks for that reference) was all about. To quote from the paper, which you clearly haven't read (why would you? After all, it was written by one of your reviled academics):

    "The implementations shown in figures 3 and 4 both use Java threads. However, the programmer’s model is not one of threads at all."

    And you could be a little more polite, what with the monkey comparisons and accusations that someone's 'full of crap'.

    When your S2N improves I'll start looking forward to your posts.
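
    [A minimal sketch of the "model, not threads" point, again assuming TBB: the programmer writes split and join logic for a reduction and never names a thread; the summing task is an illustrative assumption - Ed.]

        // Hypothetical example: a parallel sum expressed as a reduction.
        // The threads still exist underneath, just hidden and controlled.
        #include "tbb/parallel_reduce.h"
        #include "tbb/blocked_range.h"
        #include "tbb/task_scheduler_init.h"
        #include <cstdio>

        struct Sum {
            const double* data;
            double total;
            Sum(const double* d) : data(d), total(0) {}
            Sum(Sum& other, tbb::split)          // split off a sub-task
                : data(other.data), total(0) {}
            void operator()(const tbb::blocked_range<size_t>& r) {
                for (size_t i = r.begin(); i != r.end(); ++i)
                    total += data[i];
            }
            void join(const Sum& rhs) { total += rhs.total; }  // merge partials
        };

        int main() {
            tbb::task_scheduler_init init;
            const size_t n = 1000000;
            double* v = new double[n];
            for (size_t i = 0; i < n; ++i) v[i] = 1.0;
            Sum s(v);
            tbb::parallel_reduce(tbb::blocked_range<size_t>(0, n, 10000), s);
            std::printf("sum = %f\n", s.total);   // expect 1000000.0
            delete [] v;
            return 0;
        }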

  11. Anonymous Coward
    Boffin

    Hey Mr Intel: "we are where we are"

    So, where is that then?

    Going the threads route, or the EPIC (Itanic) route? 'Cos they're really quite different, aren't they? (If they weren't different, there'd have been no technical need for EPIC, right?)

  12. BlueGreen

    @Hey Mr Intel: "we are where we are"

    Hey Louis ('cos I'm sure I recognise that cocky yet clueless style). They are different, hence complementary: EPIC was instruction-level parallelism; threading is process-level parallelism. Entirely disjoint and orthogonal concepts.

  13. Louis Savain

    @BlueGreen

    Why be a Wintel anonymous coward? You are a gutless thread monkey and you're full of crap, that's all. See you around.

  14. Louis Savain

    @Jim Cownie

    "I agree, but given where we are, what do you suggest?"

    Jim, I suggest we do away with threads altogether and there is no need to wait for the academics to sort it out. The academics are the ones who gave us the damn threads in the first place. However, it's good to see that not all academics are clueless arse kissers. UC Berkeley's Professor Edward Lee is a case in point.

    There is a simple and infinitely better way to implement parallelism that does not involve threads at all. The only thing is that it will require a reinvention of the computer as we know it and a repudiation of the Turing machine as a viable computing model. This is the future of computing. Whether you or Intel like it or not, there is no stopping it.

    Read "Parallel Computing: Why the Future Is Non-algorithmic" at the link below to find out why multithreading is not part of ther future of computing.

    http://rebelscience.blogspot.com/2008/05/parallel-computing-why-future-is-non.html

  15. Anonymous Coward
    Anonymous Coward

    @BlueGreen: "different yet complementary"

    Hey BlueGreen, Louis didn't write the question you replied to (though he's been here since); I did.

    But thanks for the compliment, and for showing that you have no real answer to the question I asked.

  16. BlueGreen

    @AC: @BlueGreen: "different yet complementary"

    Then I must apologise to you.

    Nonetheless, I thought I answered the question, so I'll try again.

    Modern processors can execute multiple instructions in parallel. They do this by examining the instruction stream and, where possible, packing multiple instructions into a single box (like eggs in an egg carton) and then executing those instructions in parallel. All the instructions in a box are executed together, and the box takes as long as its longest instruction. Clearly those instructions must calculate unrelated results: you can't have instruction A in a box relying on instruction B in the same box. You also regularly have empty holes in the box which you can't safely fill.

    Reordering and packing instructions this way *on the fly* has to be done at incredible speed. It makes sense to do this at compile time instead, which lets you lose the complex, ultra-high-speed circuitry and hopefully allows a bigger, more efficiently filled box. This is the old idea of VLIW, of which the (failed) EPIC architecture was a type. It allows a single thread to theoretically run faster. However, even with all that, it has its limits: you can rarely find more than a handful of instructions to fit per box, so speedup is limited. [*]

    Threads - well, they work at the task/process level. You can have/not have threads with/without VLIW, in any combination. They are orthogonal concepts. Personally I guess both would be nice, but Intel fluffed it despite decades of VLIW research. Pity.

    [*] even processors that dynamically reorder instructions on the fly like to have instructions somewhat pre-boxed, for easy recognition.
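
    [A toy C++ illustration of the box-filling limit described above, purely illustrative: a compiler for a VLIW/EPIC machine could bundle the three independent multiplies into one "box", but the dependent chain can never share one - Ed.]

        // Three independent operations: no statement reads another's
        // result, so all three could be packed into a single bundle.
        double independent(double a, double b, double c,
                           double d, double e, double f) {
            double x = a * b;
            double y = c * d;
            double z = e * f;
            return x + y + z;
        }

        // A dependent chain: each line needs the previous result, so no
        // two can ever share a bundle. Serial, however wide the box.
        double dependent_chain(double a, double b, double c, double d) {
            double x = a * b;
            double y = x + c;
            double z = y * d;
            return z;
        }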
