Moore's Law, I need hardly remind a top-notch industry professional like you, states that as the density of silicon circuitry doubles, the probability of you not being able to find some sensibly-priced extra memory to fit your old lappy approaches 1.0. In recent times it has become generally admitted that, if this well-known …
Oh, it's a parody!
I got most of the way through page one thinking this was an unconventional description of actual computer programming, and not a bizarre waste of time. I twigged at the point where we were all sharing one lavatory cubicle. Well done! You had me going! Are you a professional programmer who doesn't want competition for jobs? This isn't just non-information, it's un-information; I am less knowledgeable than I was before I read it, and if I ever want to get into thread programming myself, I will have to un-e-learn this material first.
How to stop when you're behind.
So, there you are: you've just made a dumb mistake, totally failed to recognize an obvious spoof when it was as plain as the nose on your face, and read half-way through a humorous article without once getting the joke. You're feeling embarrassed and humiliated by your own stupid mistake. What's the first thing you do?
If you said "Post a whiny comment about how dare they waste your valuable time", so that this previously-private embarrassment is exposed to the entire world and you humiliate yourself even further, this time in public, then you need to rethink your strategy for failing at life.
In short: no, we're not going to put <joke> tags on the entire internet just for the benefit of whiny dumbasses like you.
like drowning kittens in a bucket of water.
not just a keyboard, but a screen, two phones and a mouse charger too...
You, sir, are a humourless arse
Verity, as usual, an excellent piece, as all of yours have been for the many years that Robert Carnegie has apparently not been reading them.
His loss (along with his sense of humour).
Loved the Erlang bit. That should be given to every first year functional programming student to see how long they take to spot the problem. Those that can't see what's wrong should immediately change to a pure maths course to save the rest of us from their code.
Oh No it's Not! (parody)
Panto season started early this year Mr Carnegie?
Having spent twenty-five years with threading and multi-processor programming, I'd say that this is probably the finest introduction to the subject that you'll ever come across.
This was obviously a fun article, but it also reminded me to get off my ass and brush up on my locking. The second link on mutexes and semaphores (feabhas.com) is the first in a fantastic three part series, and I wouldn't have found it otherwise. Thanks Verity, I learned something.
Stob it! It's all too much
"..and obtaining a deadlock holiday, .."
I actually anticipated that one. I feel so proud of myself :) ... btw, well done!
Stob congratulating yourself!
Yes, well done on "anticipating" that line from the article. Now what was the title of the piece again....?
Your "anticipation" may have been quite well prompted by that, you know.
I'm shocked that Verity appears to be new to you.
Brings back memories of ripping them out of a project where someone had found they could pass a larger table of values to a third-party library for processing without crashing, if using fibres. The user could only tell whether the system was still running or had crashed after 30 minutes (the GUI stayed the same either way): if Task Manager stayed at 100%, it was still working.
And they hadn't heard of Mutexes or such
Looked less funny when I got a class about it.... Verity rules!
When it comes to programming, if Donald Knuth is the daddy then Verity Stob is the mummy.
As for Robert Carnegie - I'll actually be using the toilet cubicle analogy in any future explanations to juniors about just what mutexes are for.
I don't like threading- I love it!
% Exploit the fact that ∞! = ∞;
inf with nan behaviour?? Aaargh, you actually got me checking it!
Some outstanding stobbery <http://james-iry.blogspot.com/2009/05/brief-incomplete-and-mostly-wrong.html>.
Also thanks soooo much for the mention of desktop tower defence. I had to find out what it was, lost weeks.
Thanks for that. Someone owes me a keyboard, but I'm not sure whether it's the bloke wot wrote it or you for bringing it to my attention.
Threads? All a bit 90s.
With copy-on-write and the other efficiencies implemented in Unix process forking these days, not to mention all the shiny new POSIX semaphores and message queues, who on earth would bother with the hassle of race conditions and deadlocks by using pthreads? I know Windows coders have no choice, since that lame OS still hasn't implemented fork() or any kind of complex process control, but on Unix there's really no excuse for creating some dog's dinner of multi-threaded code any more.
to go to the toilet now....
volatile is not really useful for multithreading, and never has been.
Where volatile shines, is when dealing with memory-mapped I/O. It essentially tells the compiler, "Don't optimize references to this variable -- it can change out from under you". This is not necessarily thread-safe, but makes writing device drivers a heck of a lot easier.
Boom shang erlang
Do we have to wear flares to write multi-threaded programs?
why a title?
No, just tartan
I've never heard mutex described with the uses of toilet cubicles before, but I shall be using that simile in future! Great stuff...
The title is required, and must contain letters and/or digits.
"save all the tedious mucking about" - pure Douglas Adams.
If you want more information, try "Concurrent systems" by Jean Bacon (try for the first edition... my favourite book).
She goes into the subject in much greater detail, with not a full toilet in sight, and the only drowning that happens is of your brain cells in information.
Personally I prefer my threads to be given the data to process, then told to naff off and get on with it, and only report back if there's a problem.
But real tech heads don't bother with multi-threading, because interrupt-driven assembly code is far more fun to play with.
Especially when you are down to comparing the number of clock cycles used to run each route through the code.
I guess it's nice to see a proper technical subject dealt with by El Reg, rather than the obsession with sharks that have lasers mounted on their heads that's been occurring lately.
>"Whereas it used to take just one running instance of Access 2000 to bring your CPU usage meter to 100%, it now takes two, four or possibly 128."
<clicks the link>
"This behavior does not occur in Microsoft Access 2000."
Even funnier, that statement is followed by a section entitled "Steps to reproduce the behaviour" with instructions on how to make it do what it doesn't.
You should have scrolled down a bit further.
Err, no. It says that the behaviour does not occur in *Access 2000*. You falsely inferred from that that it doesn't happen at all, when it merely means that it happens with /other/ versions of Access; note that the KB is titled "MS Access Shows 100% cpu", not "MS Access 2000 Shows...".
Hence why it was funny that VS specifically said "Access 2000", instead of just "Access". Obviously checking to see if we're paying proper attention!
1. The behaviour is not observable because the sales team are all well-drilled in how to run a staged demo of the system to a client, and will distract them at the crucial moment by saying "Oh, what's that over there" and pointing behind them, or otherwise distracting them or making them blink. By the time the client finally discovers the system isn't thread-safe, the ink will long since have dried on the contract - and THAT's when we hit them for support!
2. Trendy is as trendy does. If we're asking from the perspective of a retro-futurist 70s revival, then clearly the Transputer has sandals, beads, beards, corduroy flares, and all that Open University zeitgeist. The GPU, on the other hand, is fatally contaminated with the aura of hardcore gaming nerds, and so will never be any more trendy than a trainspotter's notebook; it may be full of facts and numbers, but it is essentially dull. And slightly distasteful.
This isn't relevant at all, but I want to share.
I bought a couple of drinks from a hotel bar today. The till was running Windows XP, so it crashed. It seems it didn't like the lemonade I wanted in my Pernod.
I'm a PC and I... what was I saying again?
What, no mention of that Apple thingy with the odd Ruby-like blocks plonked into C?
...I have no idea what La Stob was on about in that piece, but it didn't stop me laughing like a laughing thing. Yay for VS!
Toilets? In India?
Imagine the same scenario at some highway stop in India.
You ask for directions to the toilet and the desk (?) points you to the wide open fields behind. What do you lock? How do you protect?
This is what is known as the "lockless" protocol.
And yes, it frequently does result in processes crapping all over each other. The metaphor is perfect!
Modula-2 and OccamPi
Unlike C and C++, Modula-2 supported co-routines, stack frames, mutexes, and semaphores -- AS PART OF THE LANGUAGE -- over 25 years ago, without any OS support. I have Z80, 16-bit x86 and 32-bit x86 compilers that produce very nice multitasking code for CP/M, DOS, Win16, OS/2 and Win32. In theory the design was to support multiple processors too, but my compilers don't have that.
I played a bit with Occam back in 1984, but it was more of a demo toy then. This is worth a look if you want an OS properly designed for multicore/multi-CPU/multithread: http://rmox.net/ using OccamPi
Additionally (which Apple tries to solve with Grand Central) it's easier to design for a known number of cores/processors rather than an unknown.
It's a pain when the co-routine inside the Mutex doesn't flush the toilet before leaving. Or even goes with poor aim, without sitting down.
Back when I was a puppy, and Kevlin was just a kitten, we used to attempt to shave time off tasks using Occam's razor.
Then Gene Amdahl explained that there's a tradeoff between the speed gain of parallelisation and the overhead of breaking the task into little bits and reassembling the answers.
I suspect this is the reason behind the apparently rather hefty recommended grainsize in a parallel_for loop.