American computer boffins say they have developed new software which makes programming of multi-processor machines much easier. "With older, single-processor systems, computers behave exactly the same way as long as you give the same commands. Today's computers are non-deterministic," says Luis Ceze, computer science and …
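The non-determinism the article describes is easy to demonstrate; here is a minimal Go sketch (my own illustration, not from the article) where even correctly locked code can print a different result from run to run, because the scheduler decides the interleaving:

```go
package main

import (
	"fmt"
	"sync"
)

// Two goroutines append to a shared slice under a lock. The lock keeps
// the slice consistent, but the *order* of the two appends still depends
// on how the scheduler interleaves the goroutines -- same program, same
// input, possibly different output.
func interleave() []string {
	var mu sync.Mutex
	var out []string
	var wg sync.WaitGroup
	for _, name := range []string{"A", "B"} {
		wg.Add(1)
		go func(n string) {
			defer wg.Done()
			mu.Lock()
			out = append(out, n)
			mu.Unlock()
		}(name)
	}
	wg.Wait()
	return out // either [A B] or [B A]
}

func main() {
	fmt.Println(interleave())
}
```

Run it a few hundred times and you'll see both orders; that's "same commands, different result" with no bug in sight.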
So it's fuzz testing then.
Not news, people.
My guess is that they invented a mechanism of randomly delaying the execution of a thread or the delivery of a message. If it is that, it is an obvious idea. Far from proving correctness, though.
But maybe that is all we need to make robust multithreaded applications.....
"American computer boffins say they have developed new software which makes programming of multi-processor machines much easier."
"Even if you give the same set of commands, you might get a different result."
I have found this with Windows since version 1.0, even with only one processor :)
Takes me back a few years. Along with Amdahl's law.
I wonder if there is any mileage in a super-duper new software development tool using some BS description of it but which in reality:-
1) Re-writes the FORTRAN/C/C++ program into Occam
2) Runs relevant optimization rules
3) Reads a description of the processor array
4) Distributes the relevant bits as communicating sequential processes.
Who'd-a-thunk it, eh?
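Step 4 maps pretty directly onto Go's channels, which descend from the same CSP ideas Occam was built on. A toy pipeline, purely illustrative:

```go
package main

import "fmt"

// Each stage is a sequential process; channels are the only sharing.
// generate emits 1..n, square transforms, main consumes -- no locks,
// no shared mutable state, just communication.
func generate(n int) <-chan int {
	out := make(chan int)
	go func() {
		for i := 1; i <= n; i++ {
			out <- i
		}
		close(out)
	}()
	return out
}

func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		for v := range in {
			out <- v * v
		}
		close(out)
	}()
	return out
}

func main() {
	sum := 0
	for v := range square(generate(4)) {
		sum += v
	}
	fmt.Println(sum) // 1 + 4 + 9 + 16 = 30
}
```

Distributing those stages across a processor array is of course the hard part the joke glosses over.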
...we're inventing new phrases for race hazards then? :)
I would have thought the hazards of talking about race in our politically correct country would be obvious to anyone.
Perhaps we should simply force all multi-core programmers to actually read a book about threading rather than develop frameworks that deal with threads "behind the scenes" and hope it works.
<insert "teach a man to fish"-type comment here and mutter something about understanding a concept>
I say we just give them a 100 core machine with no OS and tell them to "go to town." They can keep the machine as long as they forgo internet access until they solve <insert task> using all cores. They will either get it, give up, or go insane.
give a man a fire and you'll keep him warm for the night, light a man on fire and you'll keep him warm for the rest of his life....
Teach a man to fish and you feed him for life
Set a man on fire and you can have all the fish he caught and someone to cook them for you.
Always amazed at what profs think makes for a viable start-up company.
Locks, mutexes, etc., have existed for over 40 years to deal with EXACTLY these problems; in some systems of the past, the processors did not even have to run at the same speed. Wire temperature causing bugs? No! That's poor programming: you CANNOT rely on threads completing in any particular order or length of time, even if one is very short-running and the other very long. If you do, your code is buggy, period.
Of course, these boffins in the article know this -- they seem to be developing tools to test for and find these types of faults. Which I actually think is a good thing. But really it's more important to either 1) Know how to write multithreaded code. Or 2) DON'T WRITE IT!!
There are tasks that parallelize naturally -- particularly the general "crunching through a big array of numbers" type of problems. However, for these, knowing where your lock(s) have to go is a fairly natural process. Using locks is easy, but really, it's better to start with lower-performing code until you prove you DON'T need some extra locks than to leave them out and have random faults in your code.

Seriously. If it's really hard to tell what data structures need locks, please PLEASE consider option 2 and DON'T thread it just for fun!

If I buy a quad core, it's NOT so someone can write an extra-bloated multithreaded word processor (or whatever single app) that kills all 4 cores -- it's so I can run a bunch of apps at the same time. Of course I don't think I'll have this problem -- I'm running Ubuntu and Gentoo, the whole "thread everything!" view seems to be a Windows thing from what I can tell.
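The "crunch a big array, lock in the obvious place" pattern looks something like this Go sketch (the names and chunking scheme are my own, just to illustrate the point above):

```go
package main

import (
	"fmt"
	"sync"
)

// parallelSum splits the array across workers. Each worker sums its
// own chunk with no sharing at all, then takes one coarse lock, once,
// to fold its local result into the total. Start correct and simple
// like this; optimize the locking only after you've measured.
func parallelSum(data []int, workers int) int {
	var mu sync.Mutex
	var wg sync.WaitGroup
	total := 0
	chunk := (len(data) + workers - 1) / workers
	for w := 0; w < workers; w++ {
		lo := w * chunk
		if lo >= len(data) {
			break
		}
		hi := lo + chunk
		if hi > len(data) {
			hi = len(data)
		}
		wg.Add(1)
		go func(part []int) {
			defer wg.Done()
			local := 0
			for _, v := range part { // no shared state in the hot loop
				local += v
			}
			mu.Lock() // lock held once per worker, briefly
			total += local
			mu.Unlock()
		}(data[lo:hi])
	}
	wg.Wait()
	return total
}

func main() {
	data := make([]int, 100)
	for i := range data {
		data[i] = i + 1
	}
	fmt.Println(parallelSum(data, 4)) // 1+2+...+100 = 5050
}
```

One lock per worker instead of one lock per element is exactly the "fairly natural" placement the comment means: the answer is the same either way, but this version barely contends at all.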
It's all old technology that wasn't invented there, but it works, and can be proven to do so.
University of Washington has a very strong CS and computer architecture program, so I'm sure this group is doing something cool, but there's absolutely no detail in this article about what it is. We all know that multi-threaded code is hard, and we all know that writing it correctly is becoming more important as we move towards tens of cores on a chip. What exactly is it that they are doing to help detect concurrency bugs? This is a hard problem, and any novel approaches are certainly interesting.