Perhaps one of the most interesting things about TDD is not the specification-oriented and design-centred role in which testing is employed, but the amount of explanation it requires as a term. And I don't just mean expanding the abbreviation to Test-Driven Development or Test-Driven Design, as opposed to, say, …
It's called Design For Test. DFT has been around in the semiconductor world for close to 20 years now. Before a single line of BIOS is executed, a microprocessor will run its own built-in self-test (BIST). Software is a long way away from being as sophisticated as the hardware it runs on.
The BIST is not enough
It's late, and the bottle is dangerously close to half-empty so let's rant.
You *can* get software with "built-in self tests", if you lay down the dollars, but what do these do? I have never done hardware design, but I do think that hardware has two advantages over software: a far smaller state space and far higher locality (I'm not talking about fine-tuning impedances and propagation delays here). So you can probably test the various hardware subelements in the same way that you would run test cases against a library of mostly side-effect-free functions. But no BIST procedure is going to give you any assurance about a program that interacts with the user, slurps in third-party classes, runs on a flaky OS, talks over the network, gets stuff from a potentially bad database, etc. What might save your program, apart from clean interfaces and clean code, is: well-thought-out exception handling, software rejuvenation (throwing away your run-time data structures and starting from scratch if an error occurs), and lots of runtime precondition and postcondition checking (through 'assert', for those who don't write in Eiffel). This will keep you on a meaningful trajectory through those parts of the state space where users are not being surprised.
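To make the assert point concrete, here is a minimal sketch in Python of what such precondition and postcondition checking might look like. The `withdraw` function and its invariants are hypothetical, purely for illustration:

```python
def withdraw(balance, amount):
    """Deduct amount from balance, guarded by contract-style asserts."""
    # Preconditions: fail loudly at the boundary instead of drifting
    # into a meaningless region of the state space.
    assert amount > 0, "amount must be positive"
    assert balance >= amount, "insufficient funds"

    new_balance = balance - amount

    # Postcondition: the invariant the caller is entitled to rely on.
    assert new_balance >= 0, "balance must never go negative"
    return new_balance
```

A bad call such as `withdraw(10, 20)` now dies immediately with an `AssertionError` naming the violated assumption, rather than silently propagating a negative balance through the rest of the program.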
Regarding software testing, I never could make friends with "writing tests before the actual function". I have noticed that if you write the test code _alongside_ (not before, not after) your actual code, you really think differently about the interfaces your classes present to the outer world. You try to keep them neat and clean so that you understand them yourself. You leave out any unneeded complexity and generality, because you _will_ have to test it, which is a pain. You write in a fine-grained, modular fashion, as otherwise there is no way your code and your tests can properly interact. And last but not least, you put yourself in the place of the caller of your code, which quickly shows where confusions, unstated assumptions and ambiguous requirements lurk. Which makes you add more asserts.
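As a small illustration of what "code and tests side by side" might look like in practice, here is a toy Python class with its unit test in the same file. The `Stack` example is invented for this sketch; the point is that the interface stays small because every method has to be exercised from the caller's seat:

```python
import unittest


class Stack:
    """Toy stack; the interface stayed minimal because every method is tested."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        # Precondition check, in the spirit of the asserts argued for above.
        assert self._items, "pop on empty stack"
        return self._items.pop()

    def __len__(self):
        return len(self._items)


class StackTest(unittest.TestCase):
    """Written alongside Stack, from the caller's point of view."""

    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)
        self.assertEqual(len(s), 1)


# Run the case programmatically here; normally you'd use `python -m unittest`.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(StackTest))
```

Writing `StackTest` in the same sitting as `Stack` is what surfaces the awkward questions early: what should `pop` do on an empty stack, and does the caller actually need anything more than these three operations?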
Testing first, then TDD
"One of the common misinterpretations of TDD is that it is no more than getting developers involved in testing."
That may be so, but I believe it's necessary to get developers involved in testing -- writing tests (unit tests and otherwise), running them, and so on -- and let them get to a certain level of skill and comfort with it, before going all TDD.
A lot of developers' idea of "testing" is still: code a bunch of stuff, then run it and eyeball what happens; if it crashes or looks weird, go debug. That's not really testing at all in the sense meant in the context of TDD. To jump from "no testing at all" to "the tests drive your work" seems highly unrealistic -- you're asking them to let the tests *drive* before they're even familiar with what you mean by "tests." You've skipped that step.
Just introducing real testing already brings major benefits. Once testing is established, you can talk about making the tests the driving force in development.