Almost no-one disagrees with the idea of testing, writes David Norfolk, but many people fail to follow an uncompromising test-centric process. Recently, I had the chance to ask Richard Collins, Development Strategist at a specialist software vendor, why he believes that test-driven development is the way to build better software …
Testing first I think...
"Should testing drives development or development drive testing?"
Judging by the typo in the above sentence, I suggest that testing might be worth considering as the driver. ;-)
Testing is mind-numbingly boring
As a PL/SQL analyst developer and former tester, I can tell you that one of the biggest problems with software is not the errors which inevitably occur with any large piece of software, but the lack of adequate error handling.
Developers all too often regard error handling as an afterthought rather than an integral part of the code. Dumping an error code and a string of technical jargon is of no practical use to a user. Capturing foreseeable events and giving appropriate messages is.
Developers must always work on the principle:
"If the user CAN do it, then, however unlikely, at some point they WILL do it."
The other major issue with testing is boredom. End-user testing is one of the most tedious jobs I have ever done, although unit testing comes a close second.
The reason for this is not the tests themselves, but the ridiculous mountains of paperwork demanded for audit trails. Nearly ninety per cent of my unit testing time is taken up writing test scenarios, expected results and so on. This, in an unhelpful twist, encourages developers to cut corners when testing, to avoid getting bogged down in bureaucracy.
Testing should be about ensuring that software is fit for purpose, not protecting managers from failing a Gartner audit.
Ooops, yes, bad typo in the circumstances - but it is now fixed and documented (thus making the comment into an "error" or at least an anachronism).
Finnbar makes some good points - especially about error handling. And even if you have some, how thoroughly is it tested? Not as huge a problem in the old days of ACID transactions, perhaps; but I think that missing or untested "compensating transactions" in the SOA world may well be a killer...
As for his last point - yes weight of documentation is a rotten compliance metric, and "documents" poor process. It is usually easiest and cheapest to do the job properly - with automated tools that don't produce piles of paper but link tests through to requirements, possibly?
Really, neither should drive development.
The REQUIREMENTS should drive development.
If the resulting system becomes hard to develop because of what it's required to do, well, tough.
If the resulting system becomes hard to test because of what it's required to do, well, double tough.
So that you understand that I do support thorough testing, I should point out that 'able to function without a significant number of defects' (interpret 'significant number' according to taste: user/engineer/microsoft engineer/manager/microsoft manager) is always a requirement, even if so many people pretend it isn't...
Also, if either of the above cases is true, maybe you've done your requirements badly. Of course, getting a correct set of requirements is pretty much the holy grail of development (i.e. highly desirable, but unlikely to turn up any time soon...), but my point stands: it's not about making it work for the guys in the middle, it's about getting it right for the guys at the end.
Test Driven Development for Agile software projects
I am a strong proponent of Agile software development methodology. I have used variations of it in several software projects and have been amazed at how well it works.
Test Driven Development is an Agile concept where a developer will design an object, and then develop unit tests for each interface the object presents. The developer then starts developing the code to implement the object and continually runs the unit tests. The development of the object is done when all unit tests pass.
The unit tests are easily added to a suite of tests and can be easily run any time a developer chooses. There are test frameworks available for C++, C#, Java, and many other languages.
The main benefits of Test Driven Development are:
A definition of Done: The coding of an object is not done until all tests pass.
A definition of the object interfaces. The unit tests define how the object works and what it should do. This can be used by a developer to learn about the object and how to use it. If the tests were designed properly, they may even serve as sample code for a developer who wants to use the object.
A definition of what still works. Many times, fixing a bug in one area can introduce a bug in another. If all objects have unit tests, the unit tests will break in this situation and one can see immediately the consequences of the bug fix.
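A hedged sketch of the cycle and benefits described above, using Python's built-in unittest framework (the Stack class and its tests are invented for illustration): in test-driven style the tests are written first, define the object's interface, and then act as a permanent regression suite.

```python
import unittest

class Stack:
    """Object under test. In TDD, TestStack below is written first
    and this implementation is grown until the whole suite passes."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class TestStack(unittest.TestCase):
    # Each test doubles as documentation of one interface behaviour.
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_pop_on_empty_stack_raises(self):
        s = Stack()
        with self.assertRaises(IndexError):
            s.pop()
```

Running the suite (e.g. with `python -m unittest`) after every change gives the "definition of Done" and, later, the early warning that a bug fix has broken something elsewhere.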
Test Driven Development does not reduce the need for SQA and final testing, but it does go a long way towards reducing the number of bugs before the software leaves the developer's workstation.
For more information on Test Driven Development or Agile methodology, see http://www.agilealliance.org/
> A definition of Done: The coding of an object is not done until all tests pass.
Well, yes; and that's lots better than some of the definitions used in the non-agile world ("it's done when we reach the unrealistic deadline the boss negotiated months ago, in return for his latest promotion/bonus").
But it does imply that you have a complete set of tests and how do you define that? You can't really say simply that having some tests for every interface is enough...
As someone else has pointed out, you have to go back to requirements and (I think) a requirements model that can be mathematically tested for completeness and consistency, and validated by the users in business terms.
So, while test driven development is good, requirements driven development is, ultimately, king!
Tough and double tough? The problems with Complex code
While Hobbes is correct in saying that software should be designed for the end user and not for the benefit of the development team, let us not forget that complex code can impact the very users we are trying to help. If code becomes too complex to test properly, then the quality of the software cannot be confirmed. The end result is that end users will, in all probability, be unhappy because the software has bugs in it.
Writing quality code from day one makes everyone's job easier and the project less costly. Why is software testing so often inadequately done? Because of the costs associated with it: we run out of time, money and resources.
But here lies the rub. We can throw more testers at the code to improve quality throughout the development lifecycle but this becomes costly very quickly. If automated source code analysis is used from early in the code-writing stage, the testers can get involved at the right time, focusing on the right things not simply picking up on bad code.