Paul Watson, of CodeProject fame, recently asked:
Part of an agile development* process is the test first[^] mindset: You write code which tests code you have yet to write. The test aims to fail the code and you evolve your code until it passes. Testing is normally done via automated unit tests.
Is anyone here using this methodology? How do you find it to be in practice?
Also, do you have any good, practical examples of it? The descriptions are all spiffy and whatnot, but useful examples are hard to come by.
Check out the replies here.
My response was as follows:
I've tried it, and I have three problems. The first is with the way I think. I don't think in terms of test first. I think in terms of object graphs, and so that's what I code first. If I try to write tests first, I haven't a clue what the tests should be, because I haven't designed the object graph yet.
And there's the corollary--I don't design on paper, or in UML, or whatever. I design by coding. Now, 20 years ago, sure, I would have designed on paper first. But not only is my experience vastly greater now, the tools are so advanced that design tweaks are quite quick.
And the second corollary is that the architectural problems that do surface usually wouldn't have surfaced during an up-front design phase anyway, because the architecture is either 1) simple enough that it doesn't need to be refactored or 2) easily refactored. The third case, a complicated architecture that needs refactoring, doesn't really surface until later in the development cycle. The reality of development is that software is not 100% defined up front.
So, the second problem is that unit tests add a maintenance burden. I personally prefer to add unit tests after I have done the major architectural refactoring; otherwise the unit tests need to be rewritten as well. So I write the unit tests "in the middle" of the software development process, when the architecture is mostly solid but the internal implementation might still be in flux.
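To make that concrete, here's a minimal sketch of the kind of test I'd add at that "middle" stage, once an interface has stabilized. It uses Python's `unittest` as a stand-in for NUnit, and `normalize_name` is a hypothetical function invented purely for illustration:

```python
import unittest

# Hypothetical example: a small, stabilized piece of the system.
def normalize_name(name: str) -> str:
    """Collapse internal whitespace and title-case a person's name."""
    return " ".join(name.split()).title()

class NormalizeNameTests(unittest.TestCase):
    def test_collapses_whitespace(self):
        self.assertEqual(normalize_name("  ada   lovelace "), "Ada Lovelace")

    def test_clean_input_is_unchanged(self):
        self.assertEqual(normalize_name("Alan Turing"), "Alan Turing")
```

Because the interface (`str` in, `str` out) is no longer in flux, these tests won't need rewriting when the internals change.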
The third problem is that for complex software, NUnit doesn't cut it. For example, I just wrote a test that validates a complex client-server interaction. While I can unit test the client stuff and unit test the server stuff, I need ALL the pieces working IN SEQUENCE to test the entire workflow. Ironically, having written the unit tests for each of the pieces, it was only when I tested the entire workflow that I discovered I had a nasty interaction between two of the pieces that were supposedly separate. Thus, I use my AUT[^] tool (shameless plug), even though it's missing many of the niceties, like command line execution, that NUnit has. And writing tests that depend on large pieces of the architecture being implemented and working--well, first off, they're not exactly unit tests, more like workflow tests, but they're equally, if not more, important. Many, many times, it's the specific sequence of actions that breaks the software, which individual unit testing can't catch, because unit tests simply do not cover 100% of the use cases.
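The distinction between unit tests and workflow tests can be sketched in a few lines. This is a toy example in Python, not AUT or NUnit; `Session`, `queue`, and `flush` are invented names standing in for two "supposedly separate" pieces that share state:

```python
# Toy illustration: two pieces that each pass their own unit tests,
# where only a full-sequence (workflow) test exercises their interaction.

class Session:
    def __init__(self):
        self.pending = []

    def queue(self, msg):
        # "Client" piece: stage a message for sending.
        self.pending.append(msg)

    def flush(self):
        # "Server" piece: send staged messages and clear the queue.
        sent = list(self.pending)
        self.pending.clear()
        return sent

# Unit tests: each piece, exercised in isolation, passes.
s = Session()
s.queue("a")
assert s.pending == ["a"]

s = Session()
s.pending = ["x"]
assert s.flush() == ["x"]

# Workflow test: the full queue -> flush -> queue -> flush sequence.
# A bug in the interaction (say, flush forgetting to clear the queue,
# so "a" is re-sent) would surface only here, not in the unit tests.
s = Session()
s.queue("a")
s.flush()
s.queue("b")
assert s.flush() == ["b"]
```

The point is structural: the isolated tests can't, even in principle, observe what happens across the boundary between the pieces, so the sequence itself has to be a test subject.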