Test-Driven Development

Introduction

Test-Driven Development (TDD) is a software development technique consisting of short iterations where new test cases covering the desired improvement or new functionality are written first, then the production code necessary to pass the tests is implemented, and finally the software is refactored to accommodate changes. The availability of tests before actual development ensures rapid feedback after any change. Practitioners emphasize that test-driven development is a method of designing software, not merely a method of testing.

Test-Driven Development is related to the test-first programming concepts of Extreme Programming, which emerged in the late 1990s, but it has more recently attracted broader interest in its own right.

Along with other techniques, the concept can also be applied to improving legacy code that was not developed in this way and to removing software defects from it.

Test-Driven Development Cycle

1. Add a test
In TDD, each new feature begins with writing a test. This test must inevitably fail because it is written before the feature has been implemented. In order to write a test, the developer must understand the specification and the requirements of the feature clearly. This may be accomplished through use cases and user stories to cover the requirements and exception conditions. This could also imply an invariant, or modification of an existing test. This is a differentiating feature of test-driven development versus writing unit tests after the code is written: it makes you focus on the requirements before writing the code, a subtle but important difference.
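As an illustration, a first test for a hypothetical temperature-conversion feature might look like the following sketch, written in Python with the standard unittest module. The module name temperature and the function to_fahrenheit are invented for this example; neither exists yet, so running the test fails immediately, which is exactly what this step expects.

import unittest

from temperature import to_fahrenheit  # hypothetical module; does not exist yet, so the test run fails


class ToFahrenheitTest(unittest.TestCase):
    def test_boiling_point_of_water(self):
        # 100 degrees Celsius is 212 degrees Fahrenheit
        self.assertEqual(212, to_fahrenheit(100))


if __name__ == "__main__":
    unittest.main()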

2. Run all tests and see the new one fail
This validates that the test harness is working correctly and that the new test does not mistakenly pass without requiring any new code.
The new test should also fail for the expected reason. This step tests the test itself, in the negative. A "negative test" is something familiar to testers: it makes sure a feature fails when it should fail (e.g. bad input data). It typically follows or is "paired" with one or more "positive tests" that make sure things work as expected (e.g. good input data). ("Make sure it works, then change one thing to make it break and make sure it breaks.") The entire suite of unit tests acts to serve this need, the tests cross-checking each other to make sure "negative tests" fail for the expected reasons.

This technique avoids the syndrome of writing tests that always pass, and therefore aren't worth much. Running the new test to see it fail the first time is a vital "sanity check".
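Continuing the hypothetical example from step 1, a negative test can be paired with the positive test: the positive case checks a correct conversion for good input, while the negative case checks that the function fails in the expected way for bad input.

import unittest

from temperature import to_fahrenheit  # hypothetical module from the sketch in step 1


class ToFahrenheitTest(unittest.TestCase):
    def test_boiling_point_of_water(self):
        # positive test: good input data must produce the expected result
        self.assertEqual(212, to_fahrenheit(100))

    def test_rejects_non_numeric_input(self):
        # negative test: bad input data must fail, and fail for the expected reason
        with self.assertRaises(TypeError):
            to_fahrenheit("one hundred")


if __name__ == "__main__":
    unittest.main()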

3. Write some code
The next step is to write some code that will cause the test to pass. The new code written at this stage will not be perfect and may, for example, pass the test in an inelegant way. That is acceptable because later steps will improve and hone it.

It is important that the code written is only designed to pass the test; no further (and therefore untested) functionality should be predicted and 'allowed for' at any stage.
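For the hypothetical example above, the minimal production code might be little more than a hard-coded value plus the input check demanded by the negative test; the general conversion formula is not yet needed to make the current tests pass, and deliberately deferring it is acceptable at this stage.

# temperature.py (hypothetical module) - just enough code to make the current tests pass
def to_fahrenheit(celsius):
    if not isinstance(celsius, (int, float)):
        raise TypeError("celsius must be a number")
    return 212  # deliberately inelegant: hard-codes the value expected by the test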

4. Run the automated tests and see them succeed
If all test cases now pass, the programmer can be confident that the code meets all the tested requirements. This is a good point from which to begin the final step of the cycle.
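With the standard unittest runner assumed in the sketches above, the whole suite can be executed from the project root with a single command:

python -m unittest discover

A run in which every test passes marks a safe point from which to begin refactoring.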

5. Refactor code
Now the code can be cleaned up as necessary. By re-running the test cases, the developer can be confident that refactoring is not damaging any existing functionality. The concept of removing duplication is an important aspect of any software design. In this case, however, it also applies to removing any duplication between the test code and the production code — for example magic numbers or strings that were repeated in both, in order to make the test pass in step 3.
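In the hypothetical example, the value 212 now appears in both the test code and the production code. Refactoring removes that duplication by replacing the hard-coded value with the general conversion formula; re-running the tests afterwards confirms that the observable behaviour is unchanged.

# temperature.py after refactoring: the magic number 212, duplicated between
# test and production code, is replaced by the general conversion formula
def to_fahrenheit(celsius):
    if not isinstance(celsius, (int, float)):
        raise TypeError("celsius must be a number")
    return celsius * 9 / 5 + 32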

Further links:
http://c2.com/cgi/wiki?TestDrivenDevelopment
http://msdn.microsoft.com/en-us/library/aa730844(VS.80).aspx
Unit Testing
Refactoring
