Re: Nobody wants...
> I think that one of the problems, oddly enough, is the move away from the waterfall model, with its stages of system test -> integration test -> user acceptance test.
For what it's worth, if anyone can bear to listen to an old fart... years ago when I developed 4GL systems we would code and unit test our own work, then *demonstrate* it[1] to a peer, who then did the testing on it.
This had two key benefits. Firstly, it was amazing how often something mysteriously failed when it came to the demonstration [2]. Secondly, if the test needed some fiddly data to be set up (think in terms of a bank account that needs a small overdraft; then two failed direct debits; then a small payment received; in that order, before you can test that the right customer letter is sent out) you could show the tester what was required. This saved numerous 'failures' that were down to the tester not understanding the prerequisites rather than any actual error in the code.
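To make the "fiddly prerequisites" point concrete, here's a tiny sketch of that bank-account scenario. All the names and business rules here are invented for illustration (the original 4GL system obviously looked nothing like this); the point is just that the interesting branch is unreachable unless the history is built up in exactly the right order:

```python
class Account:
    """Toy account that tracks balance and failed direct debits."""

    def __init__(self, balance=0.0):
        self.balance = balance
        self.failed_direct_debits = 0

    def withdraw(self, amount):
        self.balance -= amount  # may go overdrawn

    def fail_direct_debit(self):
        self.failed_direct_debits += 1

    def receive_payment(self, amount):
        self.balance += amount


def letter_for(account):
    """Pick which customer letter to send, based on account state.

    Hypothetical rules: two or more failed direct debits while
    overdrawn gets a gentle reminder; merely overdrawn gets a
    standard overdraft notice; otherwise no letter.
    """
    if account.balance < 0 and account.failed_direct_debits >= 2:
        return "gentle-reminder"
    if account.balance < 0:
        return "overdraft-notice"
    return None


# The demonstration path: every step below must happen, in this
# order, before the branch under test is even reachable. Skip one
# and the tester sees a 'failure' that is really a setup mistake.
account = Account(balance=10.0)
account.withdraw(30.0)         # small overdraft
account.fail_direct_debit()    # first failed direct debit
account.fail_direct_debit()    # second failed direct debit
account.receive_payment(5.0)   # small payment received

print(letter_for(account))     # → gentle-reminder
```

A tester who starts from a fresh account (or does the steps out of order) lands in the `overdraft-notice` or no-letter branch and reports a bug that isn't there, which is exactly what the demonstration step headed off.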
If the demonstration worked with no obvious errors (just a simple path through - it's not the full test, remember) then the peer would take over to properly test and code review. (The code review was to ensure coding standards were being followed.) Only if it passed that would it get marked as complete.
I don't see any reason why agile teams can't adopt a similar approach. Everyone does some coding. Everyone does some testing. You all get to see each other's work which is good for learning - both from the good examples and from the bad.
(NB This was not the only testing, of course: there were still formal test scripts in order to show coverage, etc.)
[1] 'It' in this case is the equivalent of a user story.
[2] Which puts the emphasis on simple, reliable code that works first time. The risk of embarrassment when demonstrating is a powerful incentive to be thorough.