Clichés like ‘can’t measure, can’t manage’ stick around precisely because they are so difficult to achieve. Providing business managers with information about what their staff are up to is something that should be a fundamental use of IT, and indeed we have all spent happy hours tapping away our timesheets and capturing our …
We need a bullshit-o-meter
Ah yes, the madness of quantifying productivity.
I have not had a single complaint about the quality or quantity of my work in over a decade (explainable exceptions excluded). Not because I am so very good, but because with a little experience the work isn't all that difficult.
But now we need to put our signatures on a sheet with some random aspects of what the job entails and this goes through a computer and that produces a spreadsheet and now my manager knows what I do all day. Ain't technology wonderful??
Now my dilemma. If I'm a good, obedient worker and do what's on the list, I do things that don't need to be done and don't do things that do need to be done (the list isn't complete, you know). If I work like I used to, and thus achieve the results expected of me, I can't sign off all the little things on the list and my manager will get notified of this. But worse still, it'll be on record!
My co-workers' approach is: sign the list blindly but go about your business as usual.
But that, my friend, is falsehood in writing.
And there's the heart of the problem. If you can't measure productivity because you can't quantify the tasks that come with the job in measurable units, then a silly spreadsheet has no place in measuring performance. It's just nonsense really.
The irony is of course that in the old days you found yourself on the receiving end of a 'good conversation' for not doing your job properly. But not anymore, oh no! We workers get our boss angry at us for not signing the list. And that takes the pleasure out of my easy, low-paying but fun job.
About a year ago they installed a help desk app. I'm not supposed to do anything other than routine stuff without a help desk request. Most of the time the help desk is more of a problem than the actual problem.
Users have to pick a category, then a sub-category. They may not have a clue what the actual problem is, so they don't pick the correct category and it goes to the wrong person. It can take hours for a 5-minute problem to get fixed. Most of our users are not stupid, so they skip the help desk and call me directly. The only time I make them do a help desk request is if it requires the OK from their manager (installing non-standard software is a big one) or it's something I can't fix (a bigger mailbox is the big one). I don't catch any grief over this since my manager knows what a big waste of time it is.
If I detect a problem myself (say I get an alert from the photocopier that the toner overflow is full) I'm supposed to fill out a help desk request to myself... fix the problem, then complete the help desk ticket and note the time spent. If I'm not doing much at the time I'll fill out the help desk request just so I can put: Time: change toner bottle, 1 minute. Fill out help desk request, 10 minutes. Even then I cheat and fix the problem first, then do the help desk. :)
"Providing business managers with information about what their staff are up to is something that should be a fundamental use of IT". I don't think so, that's what CCTV is for isn't it?
Really measuring, to really manage
You make a string of great points, Jon, so I'm just going to take them in the order you make them.
Yes, we manage what we measure, what counts gets counted, etc. Attention is selective and we focus on what's salient. But what are we managing if we're not really measuring anything? Well, then, we're probably just allowing emotions, prejudices, biases, politics, or what have you to drive us, which is of course not what we intend.
So we wind up with flavor-of-the-week trends in accountability and quality improvement: big bursts of energy that fade with no effect until the next whirlwind blows up. And as you say, the main result is the compilation of historical databases gathering dust.
But you're right, there ought to be some way to situate measurement right in the work flow, with information generated and used on the fly. What we need are tools that give us useful information when and where it is needed, not in some report that arrives six months later. That is not to say that the data won't be stored for use in multiple applications. And making sure that the person who has it knows what to do with it is vital, of course.
And this is where we get at the crux of the problem. Most of the numbers we think of as quantities and measures are in actual fact neither. Just because a number is called a measure does not mean that it stands for something that adds up the way numbers do. Real measures are read off calibrated instruments, where some science has gone into determining whether the thing supposed to be measured actually varies in a way that can be mapped on a number line.
You brought up the GIGO principle, and that is exactly what applies here. Unless the things that are counted up actually represent some one thing that consistently varies from more to less across the particulars of the sample counted, the person counting, the time, the place, etc., then all we have is Garbage In and Garbage Out.
Knowing what to measure has to be guided, of course, by people who know the job and its demands, and the process has to fit into the workflow, absolutely. But meeting these requirements is not sufficient to the task of measuring. Knowing what to measure also has to be guided by the basic mathematics of what makes things add up the way numbers do. Without that, we are stuck with numbers that add up just fine (they always do!) but which then fool us into thinking we're dealing with something real, when all we're really doing is chasing our tails.
And then, Jon, you very perceptively speak to the difference between what's possible in principle, and what's usefully achievable in practice. The Danish mathematician, Georg Rasch, inventor of a very special class of probabilistic measurement models, wrote that models are not meant to be true, but to be useful. His work informs computerized measurement methods that put calibrated tools into the hands of end users.
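For readers unfamiliar with Rasch's work: in its simplest (dichotomous) form, the model expresses the probability of a successful response as a logistic function of the difference between a person's ability and an item's difficulty, both on the same log-odds scale. A minimal sketch in Python (the function name is my own):

```python
import math

def rasch_probability(ability: float, difficulty: float) -> float:
    """Dichotomous Rasch model:
    P(X = 1) = exp(ability - difficulty) / (1 + exp(ability - difficulty)),
    written here in the numerically equivalent logistic form."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# When ability exactly matches difficulty, the odds are even:
print(rasch_probability(0.0, 0.0))  # 0.5
# A person well above the item's difficulty almost always succeeds:
print(rasch_probability(3.0, 0.0))
```

The point of the model's simplicity is exactly the separability Rasch prized: person and item parameters enter only through their difference, which is what lets instruments be calibrated independently of the particular sample measured.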
For more information on Rasch's models for unidimensional measurement, including software, full-text articles, consultants, meetings, etc., see www.Rasch.org. For a leading example of what measurement based in counts, ratings, tests, surveys, assessments, checklists, rankings, etc., looks like, go to www.lexile.com. For my particular take on the relationship between measurement and capital, see www.LivingCapitalMetrics.com.