Multi-threaded development joins Gates as yesterday's man

When he wasn’t ruminating at this week’s TechEd on the “millions” of servers running Microsoft’s planned on-demand services, Bill Gates was talking about how to architect software to take advantage of powerful “transistors” in massive server farms. Applications - like, for example, Microsoft’s BizTalk Server and SQL Server …


This topic is closed for new posts.
  1. Steve Kellett

    Some strange assumptions going on here

    Anyone remember ICL's Goldrush machine of the early '90s?

    Nope. Thought not.

    Multi-threading a process is difficult. I used to work on the internals of a Mainframe DBMS and I'd guess that 40-60% of the code in there was to cope with synchronising shared resources, acquiring resources, waiting for resources to become available, recovering from situations when they failed to become available, releasing them when you'd finished with them, etc.

    Surely parallelism should only be used when it is logically possible to break down a given unit of work into sub-units that can be executed with zero or minimal overlap in the required resources? Otherwise there's going to be all sorts of horrendous "Flush the pipe, I'm waiting for someone on another processor node" type interrupts flying around the system. (Been there. Done that. Had to go and patch several dozen instances of a particular order-code instruction from the variant that stopped all the nodes in a cluster to make them synchronise their internal clocks to the one that didn't.) You start to lose the power boost you were looking to gain in the first place.

    Surely the way to utilise multi-processor systems is to throw mixed workloads at 'em?

    You know, like we used to do with multi-node Mainframes?


  2. a walker

    Lessons not learnt

    As a young programmer an opportunity arose to work with the Transputer and Occam, which was a challenging but enjoyable experience. The core difference between the Occam approach and using a language like C++ is that Occam allows each processing unit to operate independently, unlike C++. The best analogy that was explained to me is as follows.

    "Imagine an array of interlocking gears, as one gear turns the rest move in lock step, a problem with one gear and they all stop....." this is programming in C++

    "Imagine the same array of interlocking gears but with one difference, each gear can engage and disengage with respect to its neighbours.... so that parts of the array can operate at different rates..." this is programming in Occam

  3. Anonymous Coward

    The Photo

    Clearly shows that Bill is trying to get a job as The Register's Artistic Director of Playmobil re-creations.

    Mine's the one with the mitts on strings.

  4. Louis Savain


    Thanks for writing this article and for linking to my blog. It is not just multithreading that is evil. Multithreading came from single threading, which is the real root of all that is evil in computing. I wrote a recent post called, "Parallel Computing: Why the Future Is Non-Algorithmic" to explain the problem. It's a very old problem that started with Babbage and Ada and got institutionalized with our infatuation with Turing and the Turing computability model.

    It is time to say goodbye and good riddance to threads. There is a much better way to do deterministic parallel computing without threads. Orders of magnitude better. It is the way it should have been in the first place, even with single core processors. We would not be in the mess that we are in right now if we had started with the correct computing model. I hereby call on all programmers to refuse to use threads and to put pressure on the processor vendors to abandon their evil ways and do the right thing.

  5. Max

    Worked for me...

    When I got my first multicore I started playing with the threading, trying to get both cores to run my apps, and I didn't seem to have any problems getting the most from my processor(s), and I am only using C# 3.5. This and the System.Threading stuff places me squarely in camp A (let the tools worry about it). However, I am mostly doing one-way data processing without really having to worry about sync or race conditions or other such nastiness. But I don't think it would be all that difficult, as C# is cake and allows me to focus on the end result. I used to love coding in C++ but I will admit I probably would not enjoy dealing with the details of getting it to run parallel threads on multiple processors.

    There will always be purist coders who need to control all the nitty gritty, and there are those who left that lifestyle behind to embrace the machines doing most of the work... which in my opinion is why we invented the things in the first place.

  6. amanfromMars Silver badge

    Who answers the MS Door whenever the Gates are closed? Who speaks with their Vision?

    "As we are discovering on multi threading, what was once considered "right" can fall out of favor and be branded as "wrong".".... Invariably branded as "wrong" though by those who cannot get it right because it is so different ..... Thinking like a Sophisticated Binary Device working Multiple Threads in Multiple Cores without Conflict..

    And this caused me a wry smile ....... "The simplest way to achieve more power is to increase the number of processors with multi-core chips, clusters of processors or networked grids." ..... for it is no different from Network InterNetworking RobotIQs which puts Systems Analysts in the Software Programmers Seat Driving SMARTer Hardenedware.

    If the Core Operating Systems Value[s] is[are] predicated upon returning Stealth Wealth for Company Profit rather than Sharing Benefit for Social Upward Mobility, then it is inevitable that increased Division will bring down the System with Inequity and increasingly Punitive although ever more easily Compromised Security Blunders, being signs of Terminal Breakdown.

    Without a Viable Alternative Backup System which reduces and addresses the Inequity by Spending the Wealth for Social Upward Mobility, will a Catastrophic Systemic Wealth Meltdown and Transfer Occur out of the Mainstream Staid and Stagnant Sector and into Alternative Investment Markets which do not render Divisive XSSive Capitalism along with its Inability to Spend Accumulated/Accrued/Spun Wealth, for if the Markets are an Invention in League with Money Supply to Channel Funds to Future Innovation, then they are clearly failing and long overdue for the Crunch whenever they sink to Sub-Prime Crime and Debt heralded as Credit as Tools of their Trade. And as there has been no Fundamental Change to the ludicrous Profitting Capitalist Model, Catastrophic Meltdown/Alternative Market Funds Transfers are Assured.

  7. Christian Berger Silver badge

    Architectural problems in software

    I don't think there are fundamental problems in multithreaded hardware. Four cores can easily be used by a typical system for the typical background jobs.

    As for actual increases in speed, I would recommend to stop using such brain-dead languages like C++. (Who in their right mind makes object copying an integral part of their language?)

    I believe that we should invest more resources in proper programming languages. After all, other languages have already shown that they can easily scale up to thousands of processors. Just think of StarLISP on the Connection Machines.

    If people manage to write (mostly) working C++ compilers, it should be trivial to write a highly optimized Erlang compiler.

  8. Anonymous Coward


    As an amateur developer who has written multithreaded code on both Windows (C#/.NET) and OSX platforms, I found the best way to write efficient code was to think about what I was trying to make the hardware do. Also, I'd consider any limitations of the hardware that may cause problems with the code I was trying to write. Once I got my head around this approach to programming, writing a multithreaded process became relatively straightforward.

  9. Simon

    Please wake me up when they have all given up whining...

    Writing asynchronous multithreaded code is extremely easy if you use the correct tools, and it's illogical to look at the hardware as flawed - if anything it needs more cores. Lots more.

    I fully hope that processors start to get a thousand-plus cores; that is the only way they can become truly efficient at solving tasks independently.

    Perhaps if the older-style developers got their fingers out, switched to .NET/Java or a modern scripting language and learnt the work-item distribution features built into those, they would complain less about the hardware having an order-of-magnitude performance increase.

    Split the task into items of work where you pass data in and get data out, and you can even avoid needing any locking and simply scale up by starting more tasks at the same time. So it takes a little effort in C++ to achieve the same thing? Well, pooh-pooh to you; if you're unwilling to put the effort in to develop a neat solution to do it efficiently then you don't deserve to develop software.
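
[A minimal Python sketch of the work-item pattern this comment describes: pure tasks that take data in and hand data out, so no locks are needed. The function and data are invented for illustration.]

```python
from concurrent.futures import ThreadPoolExecutor

def work_item(chunk):
    # Pure function: data in, data out - no shared state, so no locking.
    return sum(x * x for x in chunk)

# Split the task into independent items of work...
chunks = [list(range(i, i + 100)) for i in range(0, 1000, 100)]

# ...and scale up simply by running more of them at the same time.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(work_item, chunks))

total = sum(results)  # combine the outputs; still no synchronisation needed
```

Because each work item touches only its own chunk, the pool can schedule them in any order on any core.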

    Parallelism is the way forward - without it you cannot hope to compete with biological systems on a performance front, the human brain is after all a massively parallel system with millions of neurons that are fully independent computers.

    The sheer number of tasks that really REQUIRE SMP to achieve acceptable performance, or at least need multithreading, is amazing:

    - Image recognition

    - Audio recognition

    - Multi-input handling

    - Database processing (information retrieval and storage shouldn't be held up by one task, and can easily scale up with more processors!)

    - Image generation (yes, games!)

    - Audio generation (multiple SFX at the same time?)

    - Compression and decompression algorithms

    - Encryption and decryption algorithms

    - Anything doing maths.

    Even user-interface tasks should all be multi-threaded. The end user doesn't want to wait whilst their software goes off to a server. The end user wants that to happen in the background! Now they want to make it also do something ELSE in the background, and the hardware manufacturers' multi-core technology actually comes into play.

    Every application I have written since 2002 has been multithreaded using various techniques (I am particularly partial to asynchronous delegates in C# but there are equivalent technologies available).

    I think it is short sighted for people like Donald Knuth to say things like "Multi-threading is bad". And frankly it makes him irrelevant for tomorrow, and I fully intend to give his books away to a charity shop with a warning stencilled on them in big red letters.

  10. James Anderson

    You can multithread in most languages !

    @By a walker

    It's relatively easy to write multithreaded programs in C using the POSIX threading and shared memory libraries -- the same applies to C++, where in addition to POSIX there are a number of Threads++ classes.

    PL/1 had threading built in, most "real" languages have some POSIX threads implementation available, and the more advanced scripting languages such as Perl have access to POSIX multi-threading or, like Python, have multi-threading built in. Java has built-in support for multi-threading, although it is a "less than POSIX" implementation. z/OS assembler has had its own "sub-tasking" facilities since the year dot.

    So multithreading has been around since the early seventies. It is not new, and the "START"/"JOIN" paradigm combined with the "LOCK"/"WAIT" semaphore synchronisation, which was also locked down in the seventies, gives you all the facilities required.
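
[The START/JOIN plus LOCK/WAIT paradigm mentioned above, sketched in Python's standard threading module; the counter workload is invented for illustration.]

```python
import threading

counter = 0
lock = threading.Lock()  # the seventies "LOCK"/"WAIT" semaphore, in modern dress

def worker(n):
    global counter
    for _ in range(n):
        with lock:        # LOCK / WAIT on entry...
            counter += 1  # ...critical section...
        # ...lock released on leaving the block

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()  # START
for t in threads:
    t.join()   # JOIN

# With the lock, the result is deterministic: 4 * 10000 increments.
```

Exactly the same facilities, several decades on, which rather supports the point that none of this is new.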

    The reason most programmers do not come across this is that it has been "already done by someone much better than you". Programmers multithread all the time without knowing it -- either they are in a container managed by J2EE, CICS, Tuxedo etc. or they are using an RDBMS or window manager which takes care of all the tricky bits for them.

    I would however agree with Knuth (I would question my sanity if I didn't) that the way forward is not the bit-twiddling involved in multithreading and shared memory, which after all limits you to a single box; rather the MAP/REDUCE, cloud, grid or whatever the distributed-processing approach is called these days. This gives you access to the processing power of n boxes, whereas multithreading gives you access to the processing power of "< 1" box.

  11. amanfromMars Silver badge



    I couldn't find in your post ... Posted Friday 6th June 2008 06:26 GMT ... an alternative to threads/multithreading.... and presumably "the correct computing model".

    And the root of all that is evil ..... "Multithreading came from single threading, which is the real root of all that is evil in computing." ..... is surely the thread content and not actual threads/multithreads themselves per se.

    Talk of war and you are a warmonger, talk of Peace and you are a Peacemaker. Keeping IT So Simple allows for even the most Complex of MultiThreaded Exercises to be Seen for what they are and who and what they are Servering to.

  12. Kevin Whitefoot

    Threads are horrible.

    Much better to use CSP. But how many programmers have even heard of CSP and Tony Hoare? You don't need a Transputer to do CSP. I made a toy implementation of the Occam PAR, ALT and SEQ statements (based on articles in Dr Dobb's Journal) in Turbo Pascal and 80286 assembler for a 12MHz IBM PC many years ago. It worked perfectly. Had I been able to use it for work I think it would have saved me a lot of trouble, but corporate standards force programmers to use corporate tools, and these are chosen for quite other reasons.
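
[A toy CSP-style sketch in Python, in the spirit of the Occam PAR described above: processes share nothing and communicate only over channels. `queue.Queue` stands in for an Occam channel; this is an assumption-laden illustration, not a faithful PAR/ALT implementation.]

```python
import queue
import threading

def producer(out_ch):
    for i in range(5):
        out_ch.put(i)    # Occam would write: out ! i
    out_ch.put(None)     # end-of-stream sentinel

def doubler(in_ch, out_ch):
    while True:
        v = in_ch.get()  # Occam would write: in ? v
        if v is None:
            out_ch.put(None)
            return
        out_ch.put(v * 2)

a, b = queue.Queue(), queue.Queue()

# PAR: run both processes concurrently; they touch no shared variables.
procs = [threading.Thread(target=producer, args=(a,)),
         threading.Thread(target=doubler, args=(a, b))]
for p in procs:
    p.start()

results = []
while (v := b.get()) is not None:
    results.append(v)
for p in procs:
    p.join()
```

All synchronisation lives in the channels, which is exactly what makes the CSP style easier to reason about than ad-hoc shared memory.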

    Also in many cases it is simpler and cheaper just to spawn a new instance of an executable with arguments on a command line and just let it do its stuff. Many synchronization problems are caused by an unnecessary desire to remain in control and receive pointless feedback from every sub-process.

    I've been through this kind of thing in PLC programming where every contactor is equipped with a signal contact. You can make the system depend on getting a signal to say that the contactor has closed or opened but very often it's pointless because there is nothing the controlling program can do to recover from the error.

    Of course there are safety critical interlocks that need to work but anyone programming those using threads needs shooting (or interrupts, another evil).

  13. Darren B

    Bill Gates brought to you by

    Diet Coke.

  14. Robin

    re: Lessons not learnt

    "each gear can engage and disengage with respect to its neighbours"

    So the engine revs away wildly, as the car gently coasts to a stop?

  15. Anonymous Coward

    "... do the right thing"

    Imperative programming was easy - follow the machine's native style. Doing it otherwise requires good tools and mathematical formalisms. I endorse this, but it's very difficult to do the latter, and good tools? Ach, I remember wrestling with early C++ compilers which couldn't compile correctly. While we tolerate shite, we get more shite. People in general seem very, very tolerant of shite.

    You can call all day to abandon our current programming paradigm but while we still have programmers that feel compiler warnings "just get in the way" and turn them off, and people who can't even use procedures correctly (never mind objects etc) and think java is good, things won't change.

    Inertia, stupidity, lack of tools, attitudes of "well it #seems# to work so what are you complaining about?". It'll take a bomb to shift that lot of accreted rubble.

    As for your comment "the way it should have been in the first place", offer something *specific* *with* *tools* to back it up. It's easy to criticise.

    FYI here's a good summary from an expert in the field consulting other experts:

    <>. It's very good, and here's a lovely extract that puts my view concisely:


    High end computing is an ecosystem. Platforms, software, institutions, applications, and people who solve supercomputing applications can be thought of collectively as an ecosystem. Research investment in HPC should be informed by the ecosystem point of view - progress must come on a broad front of interrelated technologies, rather than in the form of individual breakthroughs. Hence, even the perfect parallel programming language will not succeed, if not at the time of its introduction also effective compilers are available on a wide range of platforms, [...]


    Back of the net.

    Another problem is the separation of academics from industry - managers see anything remotely different as scary, and academics in general make little attempt to actually deal with the realities of programming-in-the-large-and-dirty - I'd love to use haskell but..

  16. Dave

    Parallel Processing?

    Isn't this what they used to do with Transputers back in the good old Inmos days? 20 years ahead of the field...Perhaps someone will re-invent Occam?

  17. Paul Clark
    Paris Hilton

    Strange anachronism

    This all has the sense of a lot of greybeards musing over a theoretical problem which went away 10 years ago in the real world. In commercial development all serious Web and client-server development platforms (Web servers; databases etc.) are already multithreading, and application developers are already using multithreading without even being aware of it.

    In non-trivial desktop applications, embedded and systems-level development, the need for synchronisation etc. never went away, and although C++ doesn't have synchronisation primitives, most people either use Boost or have their own wrapper around pthreads.

    Just about the only high-volume area which I can think of where people might need to think a little harder is in areas which need serious grunt like video codecs - but these are often precisely the easiest things to decompose into parallel execution.

    [Paris, because she lives in a parallel universe]

  18. Ken Hagan Gold badge

    Hardware is irrelevant

    "I hereby call all programmers to refuse to use threads and to put pressure on the processor vendors to abandon their evils ways and do the right thing."

    If you know of a way to break a problem into such independent pieces, the hardware vendors already have the platform for you. It's called a cluster, and a fair measure of how good you are at breaking up is how slow an interconnect you can tolerate before that interconnect becomes the bottleneck.

    Deliver a decent software solution and the hardware vendors will *gladly* abandon the current fascination with multi-core, which isn't terribly multi to be honest. They'd *love* to build machines that consisted of a relatively slow cluster node on a single chip, glued to its neighbours with a simple (sluggish) fabric. They'd cost about $10 a piece, a single blade would have several dozen, and they'd have about ten times the raw processing power of current day designs.

    Knuth is right, because multi-threading forces vendors to use the whole of main memory as the interconnect, and making that fast is hard. (Think: big, cheap, fast -- choose two.) On the other hand, it doesn't require programmers to *completely* abandon a strictly ordered model of computation. (Multi-threading experts would doubtless suggest that it does, really, but let's gloss over the bugs -- most programmers do.)

    Bill is wrong. Reworking everything to be multi-threaded is a waste because MT itself will prove unscalable once you get beyond pretty low powers of two (and Moore's Law suggests that this translates to just a few years). If you are going to completely re-architect your core code, the smart target is clusters, not multi-core.

    In the short term, you can run "cluster software" efficiently on "multi-core hardware", but the reverse is not true. In the long term, clusters scale well, but multi-core doesn't.

  19. Michael H.F. Wilkinson Silver badge

    Multi-threading has its uses, as does data-parallel coding

    I write parallel algorithms for various shared-memory, multi-processor systems (including the laptop I am working on now), and for some problems multi-threading is ideal, whereas others can be dealt with by data-parallel coding much more easily. The latter is usually the case when the problem can easily be cast in a SIMD-like structure (do this thing on all those bits and pieces). OpenMP is a great tool for that: parallelism achieved through pragmas in an otherwise sequential program, quite similar to programming our old Crays (in Fortran (which I still HATE as a language, for all its useful parallel statements)).

    It is this type of problem which is most easily dealt with at compiler level, and many new methods for automatic generation of parallel code are tested on precisely this kind of problem. There are however many classes of problems that simply require intelligent analysis to arrive at an efficient parallel program. These latter may require explicitly writing a multi-threaded program. A set of fixed building blocks for e.g. barriers and the like would be very handy.
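
[The "pragma on a sequential loop" idea above is OpenMP in C or Fortran; as a rough Python analogy (invented example), the point is that the loop body stays the same and only the schedule changes:]

```python
from multiprocessing.dummy import Pool  # thread-backed pool from the stdlib

# Sequential version: "do this thing on all those bits and pieces".
data = list(range(8))
out_seq = [x * x for x in data]

# Data-parallel version: the same loop body, distributed over a pool.
# This is the OpenMP analogy - the computation is unchanged, only the
# scheduling differs, so the sequential program remains readable.
with Pool(4) as pool:
    out_par = pool.map(lambda x: x * x, data)
```

Both produce identical output, which is what makes this class of problem the easy target for compiler-level parallelisation.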

    I have a little knowledge of OCCAM and we did toy with the idea of getting one of many expansion boards sporting 4 or 10 transputers. That would have been neat.

    I have not yet tried my hand at using functional programming for our parallel work, but it really merits attention. Ultimately, I think we will be using a mixture of methods: we must teach parallel programming techniques to all programmers, in a variety of ways, so they can choose the most suitable approach for each particular problem. For some, staying at the surface will be best; others will either want or need to dig deeper.

  20. John Savard Silver badge

    One comment:

    The sentence "Both from a practical and a philosophical perspective, as the industry must agree not just to take action on programming for parallel systems, but also decide what is the best approach to take." in the article seems not quite right to me.

    First, someone needs to find an approach to programming for parallel systems that works well. Given that, the agreement will be forthcoming.

  21. Eddie Edwards

    Silicon doesn't care what you wish for

    The hardware manufacturers will deliver what they can, not what computer programmers wish for. What programmers wish for is a single 100GHz thread and it just ain't gonna happen. After that, opinions may vary on what the "best" MP architecture is, but these opinions are hardly relevant. Hardware has to go where it can, and that means you won't see 1,024 processors sharing the same DIMM. Multithreading in the "shared memory" sense is already dead. As Ken says right above, the future is (probably) clusters, whose design parallels that of actual massively parallel algorithms (by which I mean algorithms designed for 100s of processors, not algorithms retro-fitted to handle 3 or 4 threads). The Cell is a cluster on a chip. So is any modern GPU.

    Following this, software will follow. And not because the current experts will make it happen - to a man, they have spoken out against where hardware is going, rather than suggesting ways to approach it. (Their actual approach tends to be to shoe-horn existing code into 2 or 3 threads.)

    No, the next generation of software will follow from younger people to whom this hardware seems natural and obvious and who have no investment in the status quo.

    What we are seeing here, IMHO, is the death of a generation of software developers. Good riddance to Gates but I do hope Knuth stays along for the ride ...

  22. Graham Bartlett

    Transputer failure

    It's worth remembering that the Inmos Transputer didn't fail because it was technically bad or out-of-touch with industry requirements. It failed because Inmos said "right, we've designed this - now let's sit back and watch the money roll in". Or perhaps more likely, they couldn't *afford* to follow up on the initial success - remember that this was a British company and British venture capitalists will *never* invest in engineering companies. So while AMD and Intel were busy cranking the MHz, Inmos sat there and watched the world pass it by.

    Yeah, the Occam language was different - but it needed to be, if it was going to do parallelism effectively. Compare and contrast to the hoops you need to jump through for making threads/processes talk to each other.

    Now that single-core is well and truly buried, I can see the Occam principles making a comeback. Probably not the language - that's too far gone. But I can certainly envisage a time when the C and C++ languages incorporate "par" and "wait_for" statements (for example) which allow the compiler and/or OS to figure out what should run on each processor, hiding all details of threading and inter-process comms from the coder.

  23. Anonymous Coward

    @Ken Hagan

    There's most definitely a place for both. Multicore with memory sharing (if done properly) is very fast. Just right for jobs that need CPUs in the low powers of two with much unavoidable data shuffling, and there are plenty of those.

    Not that that detracts from your point, which is valid in many jobs.

  24. Nicholas walton

    Software and hardware should be separate

    Have to agree with the problem of making good use of multi-threading being language-based. While we stick with the likes of C++ we will never go far, the reason being that C++ and its relatives require hardware in the form of memory, so code is tied to hardware. Until software can float between machines under its own control as it wishes, multi-threading will always be limited. The mention of Erlang is welcome, but interesting work has also been done in the Gambit Scheme dialect using the Termite package.

    Maxing out 8 cores on my desktop Mac required only around fifty lines of code or fewer, and no special consideration of inter-process communication, process locking, etc. etc.

  25. Stern Fenster


    The Transputer didn't "fail". INMOS (a government-funded research outfit) was knocked on the head by Thatcher, who "didn't see the point of blue-sky research without immediate return". 'S what happens when you let a grocer run the place.

  26. Mister Cheese


    Windows can't even multitask properly - no wonder He can't understand the benefit of actually performing tasks simultaneously...

    Multi-threaded work has been going on for centuries:

    Arch-Deacon: Morning monks. Could you all please scribe copies of these bibles?

    Monks: (vow of silence, obviously).

    Point being that all the threads (monks) can get on with their own task irrespective of the progress of the other monks. Sure, they may have to share resource allocation (ie paper and ink) but they can probably do it in a polite and efficient way. Unless of course they're Trappist monks in which case they'd have more important brewing tasks to perform... is it Friday at last?

  27. Wolf

    Serial problems can't be parallelised

    It's all very well to talk about GPUs and problems that by their nature can be easily divided (such as video processing). But many common problems are by their nature serial--the next step requires the previous step to have been *completed* and can't be done until it is.

    Those kind of problems will never benefit from multithreading--except in the sense you can make the application run on one processor while the OS runs on another. Thread, core, CPU, call it what you want. You're talking about independent processes when you talk about multithreading.

    Of course a certain amount of multithreading makes sense. No point in waiting for the entire dataset to load if your serial processing will never catch up to the loading process. But if the loading process is *slower* than the serial process, well, you're done.

    Consider video streaming, for example. It makes perfect sense to have a pair of independent processes, one for playback and another for loading the cache. But when the bandwidth is such that the cache is emptied faster than it's filled, you might as well have a single thread, yes?
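
[The playback/cache pair described above is a classic producer-consumer arrangement; a minimal Python sketch (frame numbers stand in for real video data):]

```python
import queue
import threading

cache = queue.Queue(maxsize=8)  # the bounded playback cache
DONE = object()                 # end-of-stream sentinel

def loader(frames, ch):
    # Loading thread: fills the cache independently of playback,
    # blocking only when the cache is already full.
    for f in frames:
        ch.put(f)
    ch.put(DONE)

played = []

def player(ch):
    # Playback thread: drains the cache at its own rate.
    while (f := ch.get()) is not DONE:
        played.append(f)  # "render" the frame

t1 = threading.Thread(target=loader, args=(list(range(20)), cache))
t2 = threading.Thread(target=player, args=(cache,))
t1.start(); t2.start()
t1.join(); t2.join()
```

If the loader can't keep the queue non-empty, the player simply blocks on `get()`, which is the commenter's point: the two threads then degenerate into one serial pipeline.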

    Multithreading is not the performance panacea that its proponents claim, and never will be. *Some* tasks lend themselves to parallelisation, others most emphatically do not.

  28. Brand Hilton
    Dead Vulture

    "Itanium", not "Titanium"

    The interview with Knuth is here:

    It's "Itanium", not "Titanium". "Titanium" makes absolutely no sense.

  29. E


    ...existing serial code can be very tough.

    Writing from the ground up to exploit parallelism or disjoint chunks of a program is not that difficult. If one writes stateless functions or methods (data operated on is passed in; the function does not carry info between invocations) and separates the data out into structs or classes that do not do computation, then much of the difficulty goes away.
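
[A small sketch of the separation described above: passive data that does no computation, and a stateless function that receives everything it needs. Names are invented for illustration.]

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sample:
    # Passive data: holds values, does no computation of its own.
    x: float
    weight: float

def weighted(s: Sample) -> float:
    # Stateless: everything it needs is passed in and nothing is carried
    # between invocations, so any number of threads can call it at once.
    return s.x * s.weight

samples = [Sample(x=float(i), weight=0.5) for i in range(4)]
results = [weighted(s) for s in samples]  # a trivially parallelisable map
```

Because `weighted` has no hidden state, turning the final list comprehension into a thread- or process-pool `map` changes nothing about its correctness.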

    I know we can't rewrite everything. But we ought not conflate the problem of making existing code multi-threaded with the problem of writing multi-threaded code.

  30. E

    @Nicholas Walton

    That's an interesting point about software under its own control. But might it not be too high-level a solution: the CPUs the code migrates to might still have multiple cores or 8 SPUs or what have you?

    Could you elaborate on your idea?

  31. William Old

    @Mister Cheese

    You should write more El Reg comments... you have a great sense of humour!

    And I enjoyed all of the discussion about single and multi-threading code complexity whilst thinking of the bag of spanners that is Windows... technically awful software, unreliable memory management, no consistent security model, and no true multi-user capability.

    Maybe Bill G should have bought all *nix derivatives, rebadged them as Windows 9, and done his usual superb post-branding marketing wheeze? *

    * I'd just like to make it crystal clear that this is a joke, not legally possible because of the way Linux is GPL'd, and that I'd rather pluck my eyes out with a blunt spoon than install "Windows Linux"... :-(

  32. Louis Savain

    Occam, Transputer & Academia

    Occam and the transputer failed because academics have a way of taking the simplest and most beautiful concepts and turning them into complex and ugly monsters. Sorry, Hoare. I always tell it like I see it. Besides, the transputer was a multithreaded computer, AFAIK. The ideal parallel computing model already exists in nature. It is called the brain. The brain is a pulsed (signal-based) neural network. It is a reactive parallel system consisting of sensors and effectors. Our programs should likewise consist of sensors and effectors.

    Programmers have been simulating parallel systems like neural networks and cellular automata for decades. And they do it without using threads, mind you. All it takes is the following: a collection of elementary objects to be processed, two buffers and a loop. That is all. It is not rocket science. It is the most natural and effective way to implement parallelism. Two buffers are used to prevent the signal racing that would otherwise occur. It suffices to take this model down to the instruction level and incorporate the buffers and the loop in the processor hardware. The programmer should build parallel programs with parallel elements, not sequential elements. He or she should not even have to think about such things as the number of cores, scaling or load balancing. They should be automatic and transparent.
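
[The "collection of elementary objects, two buffers and a loop" recipe above, sketched as a toy one-dimensional cellular automaton in Python (the update rule is an invented example). The second buffer prevents the signal racing the commenter mentions: no cell ever reads a value written during the current step.]

```python
def step(read_buf):
    # Update every cell from the "read" buffer into a fresh "write" buffer.
    n = len(read_buf)
    write_buf = [0] * n
    for i in range(n):
        left = read_buf[(i - 1) % n]
        right = read_buf[(i + 1) % n]
        write_buf[i] = left ^ right  # toy rule (elementary rule 90)
    return write_buf

state = [0, 0, 0, 1, 0, 0, 0, 0]
for _ in range(3):
    state = step(state)  # every cell updated "in parallel" - no threads
```

Each generation is deterministic regardless of the order cells are visited in, which is the property the two-buffer scheme buys you.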

    If we had been using the correct programming model from the start, we would not be in the mess that we are in and you would not be reading this article and these comments. Transitioning from single core to multiple parallel cores would have been a relatively simple engineering problem, not the paradigm-shifting crisis that it is. Multithreading (and its daddy, single threading) is a hideous and evil monster that was concocted in the halls of academia. Academics may jump up and down and foam at the mouth when I say this but it is true: The computer academic community has shot computing in the foot. It is time for programmers and processor manufacturers to stop listening to the failed ideas of academia and do the right thing. We want fast, cheap, rock-solid supercomputing on our desktops, laptops, and even our cell (mobile) phones. And we want it yesterday.

  33. This post has been deleted by its author

  34. Anonymous Coward

    I hear the battle cry of 'there has to be a better way' too often

    Every few years there's a new 'movement' that promises to revolutionise programming. Mostly these revolve around a less engineering-based process and the flavour of the month language, most of which come with some kind of holy war over the fine details of imperative and/or OO programming and some kind of suggestion that any idiot can produce useful output. This is handy for the most part, as the market for 'learn X in Y days' or 'X for synthetic nipples' is lucrative to say the least. Unfortunately we're neglecting the underlying problem: we're endlessly simplifying the simple end of things and neglecting the useful end. There are a lot of 'better ways' that are actually 'better' in the sense that they allow people who know what they're doing to concentrate on the hard problems.

    Functional programming of the lazy and pure variety is a hard thing to get your head round, but in some cases can offer massive gains in both productivity and performance. But there's a price: for the ten lines of code that took 10 minutes to write and *probably never needed debugging*, equivalent to several days of swearing at a C compiler, maybe 5% of people who claim to know 'how to program' have any exposure to that completely unimperative way of doing things. It's 'hard' to deal with arrays and meshes with sparse changes efficiently, but some day soon someone a bit smarter than me will crack that particular set of problems, and the other 95% of us will be left behind.
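    As a loose illustration of the lazy, pure style the poster means (in Python rather than Haskell, so only an approximation): a conceptually infinite sequence is defined once, with no mutation visible to the caller, and is evaluated only as far as it is demanded.

```python
# Lazy evaluation sketch: an infinite Fibonacci stream, consumed on
# demand. The generator does the bookkeeping internally; the caller
# sees only a pure "take n elements" operation.
from itertools import islice

def fibs():
    a, b = 0, 1
    while True:
        yield a          # produce the next value only when asked
        a, b = b, a + b

first_ten = list(islice(fibs(), 10))
```

    Nothing is computed beyond the ten values actually requested, which is the property that lets lazy functional programs describe structures far larger than could ever be materialised.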

    If we take the {possible/improbable/whatever/allegedly inevitable} future as being FP there's just one worrying thing. The top producer of interesting papers on the subject appears to be our friends at Microsoft. There's going to be a breakthrough, but it'll never see the light of day unless the way in which microsoft sells software and manages releases of mass-market software changes beyond all recognition.

    And the penguin-loving cynics might suggest that Microsoft already have the answer and are sitting on it until they can find a way to monetise it effectively. How could they market something that's smaller, better performing and quicker to develop than NT6 and still convince people it's worth the money? I don't know. But then I live in a world where God rants about DOS 3 being the best OS ever and anyone who suggests using a Mac gets replaced by robochrist.

  35. Anonymous Coward
    Anonymous Coward

    @Louis Savain

    What the hell are you talking about? CSP is an elegant and reasonably minimal mathematical formalism which has been implemented. I've looked on your blog and you have nothing similar that's working. Your COSA prototype of quicksorting <> looks appalling - have you got it running? How does it perform?

    Be constructive. Your page <> just badmouths everything.

    I don't like being rude but I'll tell it like I see it: Put up or stop slagging off other people's work.

    And this piece of blather does you no favours either <>.

    Lawd, the curse of the enthusiastic amateur.

  36. Nicholas walton


    See the paper I quoted on Termite, which is open source BTW. The abstract is below.

    Termite Scheme is a variant of Scheme intended for distributed computing. It offers a simple and powerful concurrency model, inspired by the Erlang programming language, which is based on a message-passing model of concurrency.

    Our system is well suited for building custom protocols and abstractions for distributed computation. Its open network model allows for the building of non-centralized distributed applications. The possibility of failure is reflected in the model, and ways to handle failure are available in the language. We exploit the existence of first class continuations in order to allow the expression of high-level concepts such as process migration.

    The real trick is threefold: first, being functional, Termite does not need a store, only an environment; secondly, everything in Gambit can be serialised and sent over a network connection, including running code; and finally, continuations (or, put simply, what is going to happen next) are first-class objects, so you can stop a program, serialise it, transmit it to another location and continue running, even across different architectures.

  37. James Gibbons

    Multi-Core is Good!

    Almost any real-time processing can benefit from multi-core CPUs. I specify quad cores whenever I come across a difficult process control problem that needs to respond quickly. There is a little threading problem called priority inversion that can cause delays in processing on single core systems; multiple cores ease it by allowing the lower-priority thread that holds the lock to run alongside the higher-priority thread, instead of the two contending for a single core. Multiple cores can also process interrupts more quickly and schedule independent threads to run in parallel.

    As for the suitability of a certain language for multi-core work, I find that C++ can work as well as most any other. C# and Java have a little problem called garbage collection that tends to kick in and cause problems with thread execution. Microsoft has gone through many iterations of their garbage collector to try to fix this. C++ also has problems (memory allocation from the heap usually causes a global lock) but garbage collection can shut down multi-threading for long periods.

    The Intel C++ thread libraries, OpenMP and the new Qt 4.4 QtConcurrent module make multi-core work in C++ quite easy. The hard part is designing the program to make efficient use of the threading library. Holding onto critical sections during heavy computing periods will make any system run slow.
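    The parallel-map pattern that QtConcurrent and OpenMP provide can be sketched with a thread pool (shown here in Python with the standard library; `heavy` is just an illustrative stand-in for real per-element work). The key property the poster notes carries over: each element is processed independently, so there are no critical sections held during the heavy computation.

```python
# Parallel map over independent work items using a thread pool.
# Results come back in input order, like QtConcurrent::mapped.
from concurrent.futures import ThreadPoolExecutor

def heavy(x):
    return x * x  # stand-in for real per-element computation

def parallel_map(fn, items, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, items))
```

    The hard part, as the post says, is not the library call but carving the program into independent items in the first place; once that is done the mapping itself is one line.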

  38. Louis Savain

    @Anonymous Coward

    Coward writes: "Be constructive".

    I am being constructive. I have put together a comprehensive alternative to multithreading. Problem is, the kind of paradigm shift that I am promoting must be built on top of the ruins of the current programming model. There are no two ways about it. The current programming model simply sucks and someone has to say it. I don't mind doing it. Unlike some people I know, I ain't a coward. CSP is a failure because it is too hard to learn and it is not intuitive. It's a nerd language.

    As far as implementing COSA is concerned, unfortunately, I was not born with a silver spoon in my mouth, nor do I have a sponsor with deep pockets (this may change soon enough though). However, you can always write me a check for a few million bucks and I'll deliver a COSA-compliant multicore CPU (that will blow everything out there out of the water), a COSA OS, a full COSA desktop computer and a set of drag-and-drop development tools for you to play with. It should not take more than two years. There is no reason that a multicore processor cannot do fine grain, deterministic parallel processing and handle anything you can throw at it with equal ease. There is no reason to have specialized GPUs or heterogeneous processors. One processor should be able to handle everything if it is designed properly. A fast universal multicore processor is what the market wants.

    In the meantime, I will continue to bash everything and everybody that needs to be bashed, including CSP and Hoare. If you think this is bad, you haven't seen me bash the functional programming crowd (e.g., Erlang and the rest). LOL. BTW, if you are offended by my Bible stuff, don't read what I write and stay away from my blog. It's not meant for you.

  39. Destroy All Monsters Silver badge

    Threads are Terrorism!

    "Threads aren't hard provided one thinks about using them from the outset. Trying to apply threads to an existing app is a hiding to nothing. The mechanisms are dead simple - all you need is thread spawn/join, pipes and semaphores."

    Well, NO. The primitives are reasonable for the thread model (but the thread model is, well, ...bletch). Still, getting a multithreaded app to work happens either if the threads do not communicate all that much (as in, one main processing thread and UI/socket worker threads) or they communicate over a nice message-passing abstraction or synchronizing data structure - like a database. Otherwise, the program is unmaintainable and most probably wrong the moment you lift the fingers from the keyboard (experience, experience, and pain, lots of pain...). And no, NO-ONE WANTS TO USE ANY TOOLS (even if good) TO WRITE MULTITHREADED PROGRAMS, thank you very much.
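    One minimal form of the "nice message-passing abstraction" mentioned above is a pair of thread-safe queues: the worker shares no mutable state with its caller and communicates only through messages. This is a sketch, not any particular framework's API; the doubling is just placeholder work.

```python
# Message-passing between threads: all communication goes through
# thread-safe queues, so no locks appear in user code at all.
import threading
import queue

def worker(inbox, outbox):
    while True:
        msg = inbox.get()
        if msg is None:        # sentinel message: shut down cleanly
            break
        outbox.put(msg * 2)    # stand-in for real work on the message

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
for i in range(3):
    inbox.put(i)
inbox.put(None)                # tell the worker to stop
t.join()
results = sorted(outbox.get() for _ in range(3))
```

    Because the only shared objects are the queues, the usual multithreading failure modes (races on shared state, forgotten unlocks) simply have nowhere to occur, which is the point being made.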

    Anyway, threads are a Bad Idea on the road to nowhere as they are the wrong abstraction for a nonexistent problem on shared-memory multiprocessors. Notice how frameworks like Java Enterprise hide the threads, leaving users to basically write only listeners?

    And anyway, this has nothing to do with multicore CPUs or message-passing CPU clusters, which are always welcome as long as the programming language on top of it is able to use the underlying performance.

    Threads non merci:

  40. phat shantz
    Paris Hilton

    We need to build the better programmer

    Fortunately for me, and unfortunately for the industry, there exists no ogre at the gate leading to the programming kingdom. The guard post is empty and they let anyone in, hoping that some of them are actually programmers and can do what they say.

    Multi-core, parallelism, multi-threading, and the whole battalion of integrating hardware with software (or is that integrating software with hardware) may very well be that gatekeeper. Those who can learn this complex and inter-related algorithmic ballet will be knighted as Programmers of the Multi-Thread. Those who cannot will be relegated to other kingdoms or as serfs who toady to the heroes of the kingdom.

    Realizing that there exists only a certain percentage of the population who can comprehend the complexities of these issues, the two camps are revealed as nothing more than self-aggrandizing or self-protection groups.

    The group who thinks they understand the complexity of multi-threading, parallel processing, non-serial architecture, and all future issues of these trivialities bloviate about the underlying simplicity of such obvious matters. This is hardly the truth as these are very complex engineering, mathematical, philosophical, physical, artistic, and architectural challenges. Even the folks who think they understand these questions do not; otherwise there would already be a solution to the mis-matches in concepts.

    Those who demand that the software tend to the complexities and let programmers "do something useful," are demanding that the software itself do the useful part. If I have a machine that can do your job, why do I need you?

    And the cheerleaders and sycophants who declare their idol as the "one and true language," are nothing more than one more group to vilify in the future. Everyone's language fails at doing what it was never designed to do. There are still instruments running on 25-year-old chips that many condescend to say are irrelevant to us. Yet, these chips and their ancient languages are running environmental systems, watering lawns, and monitoring the engines in motorcars. One plucky fellow actually got a Babbage engine to work. Who figured?

    The truth is that a very small group of people will be able to understand the problem. A much smaller group (perhaps as few as two or three) will be able to construct a workable solution. This is the nature of invention.

    Once the solution is known, there will be a very small community who know, teach, and use these very complex solutions. The solution to this problem, too, is not "THE" solution to every "Problem Universal." It will solve a small set of problems from within a very large queue.

    A workable solution will lead to other questions and the presentation of a workable solution will not automatically guarantee a universal translation to everyone's platform or task. Not everything needs multi-threaded or multi-core solutions. Some things are just simple.

    Some things won't even have an answer in multi-threading, parallelism, or multi-core hardware. Some algorithmic questions are even more difficult, misunderstood to this day, or just don't lend themselves to our physics.

    Paris is one such problem. I do not propose to solve that with computers, but by turning off the TV. (I believe some hard problems have easy answers.)

    For many, this may be another solution without a real problem. Some things take time. Growing a tomato takes time. No parallel multi-core algorithm will hasten that. (Has anyone here actually read the Mythical Man Month?) Some problems are only those created by folks who want to host a money-making enterprise by re-creating the glass-room mainframe theology -- with high-speed.

    As soon as someone comes up with a solution, all sorts of real problems that don't yet have answers will be found. (The laser was invented in the '50s but was not very impressive because no one knew what problems it might solve. Foresight is not a natural trait of the inventor.) Until that happens, the current solutions will be used for our current set of problems and the captains of industry will just have to be content that we don't still have rooms with 300 data entry clerks and typists, sitting at Orwellianly-similar desks, banging out the work that can be done in the same amount of time on an Osborne.

    The real limit on our industry is the bell-curve. Some programmers are just naturally smarter than others. Yet, the industry needs programmers who are smarter than the general population. So does every industry. Doctors and lawyers and engineers and physicists and plumbers and auto-mechanics and green-grocers and almost everyone except Paris needs to be smarter than the general population in order for our society to advance. Paris proves that there can be the occasional outlier, but we won't be treated kindly by history if our contribution to society and culture is buying shoes.

    The smarter programmers (those who are smarter than the smart ones who are already smarter than the general population) will migrate to the problems of multi-processor, parallel processing, multi-threading, multi-core, multi-multi-multi that is expected to solve the problems of bad management, poor planning, bad architecture, flagging business models, antiquated products, constricting global markets, and the eventual super-nova of Sol.

    The regular programmers (who are smarter than the general population but are not smarter than the average of the programming population) will continue to plug away as "developers" and "producers" and "do something useful," as they call it -- in order to justify the fact that they aren't any smarter than the programming population, although they are smarter than the general population.

    These programmers will bemoan the fact that there may actually be an ogre at the gate of some computing kingdoms. These programmers will have to make do with their own kingdom, of their own making. We'll call it the "Useful Kingdom."

    These programmers will have to wait until someone solves the problem and then someone else packages it so they can use it in a way that separates them from trying to figure out when and where they should use the solution for their "useful" products.

    Until then, these articles and the ensuing dust-clouds they create are my form of entertainment; having eschewed the likes of Paris.

    Here's to you kid. We'll always have Paris.

  41. Anonymous Coward
    Anonymous Coward

    Enough talk Louis

    I don't think you even understand what CSP actually is. Start here <>.

    Putting together 'a comprehensive alternative to multithreading' is not the same as producing a deliverable. You say that the current programming model simply sucks, and I agree, and I'm working on a small project of my own. In other words, a deliverable. When and if I get something fit to release then I will shout about it, not before.

    Stop bashing everything based on your lack of experience. You make a basic logic mistake by disliking Erlang and extending your dislike to the entire world of functional programming. An instance of the concept is not the entire concept.

    You excuse yourself from doing any work on the grounds that you were not born with a silver spoon in your mouth. Well, surprise, neither was I, but I'm quite capable of putting together a workable compiler at short notice. We don't need an OS built on a brand new CPU architecture, just a working language so we can get a flavour of your system; if it's as good as you think, then it will be the best invitation for other people to join your project. You can write a compiler, can't you?

    Lest you think I'm being destructive in my criticism, your COSA bears a striking prima facie resemblance to a computing model being developed by some colleagues of mine who have decades of academic, mathematical and industrial experience, so take heart.

    But, FFS, stop talking and put something on the table. I've wasted enough of my life on people who can talk but can't (or can't be arsed to) do.

  42. Destroy All Monsters Silver badge


    "Besides, the transputer was a multithreaded computer, AFAIK"

    I think you are confused. I would think threads show up at the level of the _operating system_, not of the _hardware_. They may or may not be supported by some special hardware tricks of course. In the last ten years for example we have seen the rise of "Simultaneous Multithreading", whereby some registers are duplicated to enable the CPU to run two instruction streams in the same program context. Citing from the abstract of "A commercial multithreaded RISC processor", Feb 1998:

    "Hardware multithreading is a technique for tolerating memory latency by utilizing otherwise idle cycles in the CPU. This requires the replication of the processor architecture registers for each thread."

    Looks like the T9000 had just bog-standard multiprocessing support and MMU, as was the custom in those times. A paper I got here, "The T9000 Transputer", dated 1992, just says that it has an "improved process model with per-process error handling", which sounds about right. Interrupt, switch context, innit?

  43. Anonymous Coward

    @ Simon

    "Parallelism is the way forward - without it you cannot hope to compete with biological systems on a performance front, the human brain is after all a massively parallel system with millions of neurons that are fully independent computers."

    A living mind isn't a computer, you poor thing.

    The idea that hardware and software could give rise to anything even approaching simple self-awareness died at least 20 years ago.

    AI has moved on quite apace. No one seriously expects to do it using digital computers. That's gone. A philosophical dead-end.

    What we know (or think we know) about the organisation of even a primitive brain is fatally flawed and seriously, hopelessly wrong. Your statement proves this time and again.

    Biological systems alone give rise to the qualities you mention, purely as a survival requirement. Engineered biological systems *will* give rise to true, strong AI — in a few centuries. That's the current way forward, but we're woefully ill-equipped to even begin the long hike to that goal.

    Multi-core processors and software never will.

  44. Anonymous Coward

    My two cents (or farthings)...

    Everybody's going about this the wrong way. First, you build the fastest, speediest OS possible. THEN you make a language that will handle every situation as quickly as possible. If the OS is slow, so too will be the code. If the OS is fast, the code is only as slow as the programmer makes it.

    Since Microsoft is charging so much these days, they can afford to build a new OS from the ground up. They should scrap Windows completely, eschew Unix/Linux completely, and write a new OS that's worthy of this day and age. And include compilers for all this multi-core/parallel crap in the OS, too (instead of charging for it as an add-on).

    That's one cent (or farthing) per paragraph...except for this one. Oh damn.

  45. Louis Savain

    @Anonymous Coward

    Yo, Coward,

    COSA is based on a well-known technique that has been used by programmers for decades to emulate deterministic parallelism in such apps as neural networks, cellular automata, video games, simulations, and even VHDL. It's not rocket science. It requires nothing more than a collection of elementary objects to be processed, two buffers and an endless loop. Nobody can lay claim to having invented this technique since it is pretty much obvious and has been around from day one. So don't tell me that your academic buddies invented it. That is simply bull. Besides, I want neither help nor approval from academia. COSA is dear to me but if its success must depend on academic approval, I would rather see it fail.

    Again, the technique I mentioned above is not rocket science. I simply took it down to the instruction level and added some bells and whistles and a reactive envelope in order to turn it into a programming model. However, since current processors are designed and optimized for the algorithm, COSA would be way too slow because COSA is non-algorithmic. It must be implemented on its own COSA-compliant processor for performance reasons. Even though I could demonstrate the reliability aspect of COSA on an existing machine, it turns out that nobody really cares about anything in the computer business other than performance, productivity and low power usage. I found that out the hard way. COSA would be way too slow if implemented in software. However, given a specially designed processor, it would blow everything out of the water in terms of speed and universality. It would make both thread-based multicore processors and GPUs obsolete. You will not believe how many times the engineering folks at Intel, NVidia, AMD, Texas Instruments, Berkeley, Stanford, UIUC, etc... (and even financial houses like JP Morgan and Morgan Stanley) have visited my blog and the COSA site lately. I keep a record. These guys know what I got and they know I'm right but they're scared to death. I also make enemies because I tell it like I see it so I'm sure I get badmouthed a lot. It's funny but I don't care.

    So again, write me a fat check and I will deliver. Unless I see some real cash, I ain't lifting a finger because cash tells me two things: 1) My idea is good enough to attract a sponsor and 2) the sponsor/investor is serious. If the industry is not willing to invest in COSA, it does not deserve it. After all, several universities in the US and abroad are getting tens of millions of dollars from the industry to solve the parallel programming problem, and they don't even have a viable solution, just the same old crap. Heck, the computer science community is the reason that we are in the mess that we are in. If they can attract money, so can I, because I do have something that they don't have: a freaking solution to their freaking problem. Until then, I'll just keep writing stuff on my blog until the pain gets to be so unbearable for Intel and the rest that they'll just have to come knocking on my door or acknowledge that I'm right. I can wait. I have learned to be very patient. Besides, it's a lot of fun.

  46. christopher

    Quantum Processing.

    We are nearly there. Quantum computing actually solves the linear processing problem that hinders traditional Multithreading.

    Just hope someone from the quantum world is reading this & can give us an estimate as to when we will see a prototype cell.

  47. cloudberry


    Either you're a very, very good windup artist, or a common crank.


    And in a 1998 UseNet post, the mathematician John Baez humorously proposed a "checklist", the Crackpot index, intended to "diagnose" cranky beliefs regarding contemporary physics.[2]

    According to these authors, virtually universal characteristics of cranks include:

    1. Cranks overestimate their own knowledge and ability, and underestimate that of acknowledged experts.

    2. Cranks insist that their alleged discoveries are urgently important.

    3. Cranks rarely if ever acknowledge any error, no matter how trivial.

    4. Cranks love to talk about their own beliefs, often in inappropriate social situations, but they tend to be bad listeners, and often appear to be uninterested in anyone else's experience or opinions.


    Some cranks claim vast knowledge of any relevant literature, while others claim that familiarity with previous work is entirely unnecessary; regardless, cranks inevitably reveal that whether or not they believe themselves to be knowledgeable concerning relevant matters of fact, mainstream opinion, or previous work, they are not in fact well-informed concerning the topic of their belief.

    In addition, many cranks

    1. seriously misunderstand the mainstream opinion to which they believe that they are objecting,

    2. stress that they have been working out their ideas for many decades, and claim that this fact alone entails that their belief cannot be dismissed as resting upon some simple error,

    3. compare themselves with Galileo or Copernicus, implying that the mere unpopularity of some belief is in itself evidence of plausibility,

    4. claim that their ideas are being suppressed by secret intelligence organizations, mainstream science, powerful business interests, or other groups which, they allege, are terrified by the possibility of their allegedly revolutionary insights becoming widely known,

    5. appear to regard themselves as persons of unique historical importance.

  48. Louis Savain

    I Am a Crank and Proud of It

    Yo, Cloudberry,

    I am a crank and a crackpot. I say so on my blog. Click on the link, "Who Am I?".

  49. Anonymous Coward
    Anonymous Coward

    Horses for courses

    Even in the 80s there were many ways to make parallel program tasks.

    On the hardware side people were building grids/nets/arrays of processors, typically using shared memory or message passing. The 8086 had atomic exchange instructions, for example. CISC chips had pipelined instruction decoding. People considered having a processor for each pixel (or small group of pixels) on a graphics card.

    On the software side, you had:

    pipelines: each stage of a problem ran on a different process,

    data-flow: like a spread-sheet intermediate results could be calculated in parallel: consider (a + b) * (c + d) - 'a + b' and 'c + d' can be done in parallel,

    monitors: protection of a shared resource with queues,

    computer graphic films/movies had server farms to render the images to be recorded on photographic film - NeXT Zilla.
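    The data-flow item in the list above is easy to make concrete: in (a + b) * (c + d), the two additions have no dependency on each other, so they can be evaluated in parallel and the multiply waits on both intermediate results. A minimal sketch (the thread pool is just one way to express it):

```python
# Data-flow evaluation of (a + b) * (c + d): the two independent
# sub-expressions run concurrently; the multiply joins on both.
from concurrent.futures import ThreadPoolExecutor

def dataflow(a, b, c, d):
    with ThreadPoolExecutor(max_workers=2) as pool:
        left = pool.submit(lambda: a + b)    # runs in parallel...
        right = pool.submit(lambda: c + d)   # ...with this one
        return left.result() * right.result()
```

    A spreadsheet works the same way at a larger scale: every cell whose inputs are ready can be recalculated at once.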

    The low-level semaphores always seemed to have problems with deadlock/deadly-embrace - the programmer would lock something but forget to unlock it, or two processes would need the same two resources but one would lock A first and wait for B while the other would lock B first and wait for A.
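    The A-then-B / B-then-A embrace described above has a standard cure: impose one global acquisition order, so the circular wait can never form. A small sketch (illustrative names; if one thread took B first inside `task`, the two threads could wait on each other forever):

```python
# Lock-ordering discipline: every thread acquires lock_a before
# lock_b, so the circular wait that causes deadlock cannot form.
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
counter = 0

def task():
    global counter
    with lock_a:          # always A first...
        with lock_b:      # ...then B, in every thread
            counter += 1

threads = [threading.Thread(target=task) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

    The forgotten-unlock half of the complaint is what scoped acquisition (`with` here, RAII lock guards in C++) was invented to fix: the release happens on every exit path automatically.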

This topic is closed for new posts.

Biting the hand that feeds IT © 1998–2019