April 15, 2005
After years of building ad-hoc test harnesses, I finally adopted formal unit testing on a recent project of mine using NUnit and TestRunner. It was gratifyingly simple to get my first unit tests up and running:
Public Class UnitTests

    Private _TargetString As String
    Private _TargetData As Encryption.Data

    <SetUp()> _
    Public Sub Setup()
        _TargetString = "an enigma wrapped in a mystery slathered in secret sauce"
        _TargetData = New Encryption.Data(_TargetString)
    End Sub

    <Test(), Category("Symmetric")> _
    Public Sub MyTest()
        Dim s As New Encryption.Symmetric(Encryption.Symmetric.Providers.DES)
        Dim encryptedData As Encryption.Data
        Dim decryptedData As Encryption.Data
        encryptedData = s.Encrypt(_TargetData)
        decryptedData = s.Decrypt(encryptedData)
        Assert.AreEqual(_TargetString, decryptedData.ToString())
    End Sub

End Class
It's a great system because I can tell what it does and how it works just by looking at it. You can't knock simplicity. The problem with unit testing, then, is not the implementation. It's determining what to test. And how to test it. Or, more philosophically, what makes a good test?
You'll get no argument from me on the fundamental value of unit testing. Even the most trivially basic unit test, as shown in the code sample above, is a huge step up from the testing most developers perform-- which is to say, most developers don't test at all! They key in a few values at random and click a few buttons. If they don't get any unhandled exceptions, that code is ready for QA!
The real value of unit testing is that it forces you to stop and think about testing. Instead of a willy-nilly ad-hoc process, it becomes a series of hard, unavoidable questions about the code you've just written:
- How do I test this?
- What kinds of tests should I run?
- What is the common, expected case?
- What are some possible unusual cases?
- How many external dependencies do I have?
- What system failures could I reasonably encounter here?
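To make those questions concrete, here's a minimal sketch, in Python's unittest rather than the NUnit/VB.NET of the sample above, and with a hypothetical `parse_age` function standing in for real code. One test per question: the expected case, an unusual case, and a reasonable failure.

```python
import unittest

def parse_age(text):
    """Hypothetical function under test: parse a non-negative age from a string."""
    value = int(text.strip())
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

class ParseAgeTests(unittest.TestCase):
    def test_common_case(self):
        # The common, expected case
        self.assertEqual(parse_age("42"), 42)

    def test_unusual_case(self):
        # An unusual but legal input: surrounding whitespace
        self.assertEqual(parse_age("  7 "), 7)

    def test_failure_case(self):
        # A failure we could reasonably encounter, handled deliberately
        with self.assertRaises(ValueError):
            parse_age("-1")
```

Run with `python -m unittest` against the file; the point is that each question from the list maps to at least one test method.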
Unit tests don't guarantee correct functioning of a program. I think it's unreasonable to expect them to. But writing unit tests does guarantee that the developer has considered, however briefly, these truly difficult testing questions. And that's clearly a step in the right direction.
One of the other things that struck me about unit testing was the challenge of balancing unit testing with the massive refactoring all of my projects tend to go through in their early stages of development. And, as Unicode Andy points out, I'm not the only developer with this concern:
My main problem at the moment with unit tests is when I change a design I get a stack of failing tests. This means I'm either going to write less tests or make fewer big design changes. Both of which are bad things.
To avoid this problem, I'm tempted to take the old-school position that tests should be coded later rather than sooner, which runs counter to the hippest theories of test-first development. How do you balance the need to write unit tests with the need to aggressively refactor your code? Does test-first reduce the refactoring burden, or do you add unit tests after your design has solidified?
Posted by Jeff Atwood
I noticed that I am instinctively doing tests a bit later, when things stabilize to a certain point. I did not plan it to be that way but it feels right.
The problem you are experiencing with test-first versus design flexibility is one of the more puzzling and contradictory aspects of 'Extreme Programming': it is supposed to be opposed to top-down, up-front design, but without such a design, there is no way to write the tests it also calls for. The solution I would propose is to add a step that should have been in the design cycle of most products to begin with: writing a fully working, but disposable, prototype (or at least component prototypes for the major parts of the design), which is then used to set the final design.
The prototype should be seen as exploratory, and thus should not be held to strict design guidelines; it can, and indeed should, be written in a different language from the final version. This would not only ensure that the prototype code does not slip into the product, it would also allow the programmers to use a more flexible, dynamic language (Perl, Python, Lisp) for prototyping, then re-cast it into a stricter language with static typing, etc.
Of course, prototyping is not often done, especially for relatively conventional programs; I believe this is a mistake. While many would argue that their project does not have time to develop a full prototype, this argument fails for the same reason similar arguments against testing, code reviews, etc. fail: the time spent writing the prototype is almost always made up for by the time it saves in final production. As usual, the more that is done to refine the design ahead of time, and the more that is done to minimize errors, the less time is wasted fixing mistakes later - and debugging is invariably more expensive than preventing the bugs in the first place, especially for bugs that get into release.
"it is supposed to be opposed to top-down, up-front design, but without such a design, there is no way to write the tests which it also calls for."
XP and other agile methodologies are generally opposed to top-down, up-front design of the *entire application*. This is different from deriving your domain model piece by piece as you need it. TDD is also very helpful when your design needs to change, for obvious reasons.
The key here is to remember that the TDD advocates aren't saying you should write your tests for the entire application and then go and fill in the blanks later. You move in short sprints dealing with limited chunks of functionality at a time, writing a test for each piece as you go.
I would think that TDD would fall apart if you tried to use it any other way. It's part of a package, and while the other parts of that package are open to interpretation, a lot of other bits must fall into place before it can be useful to you. I sometimes find it hard to wrap my head around it all, but I have seen Scrum and XP work on real-life projects. It isn't just hype, by any means, although the key is obviously to derive a development process that works best for you and your organization.
Oh, and one more thing, Jeff. In your post you say this: "it forces you to stop and think about testing."
I'd change that a bit to say "it forces you to think up-front about what you want your API to actually *do*". The testing part of TDD is great, but I think the benefits are greater than simply gaining a set of unit tests. It's the entire thought process that is the key, and having a well-tested application is just a sweet side effect. ;)
The testing part of TDD is great, but I think the benefits are greater than simply gaining a set of unit tests
I didn't go into the details, but I agree-- it's the entire "stop and think about how this will be used" thought process of testing that is so helpful. Based on the unit tests I wrote, I ended up changing my API! In a way, the phrase "unit test" is kind of misleading-- it's really more akin to forcing yourself to use your own code the way another developer or user would, and taking off those developer blinders for a moment.
Bear in mind, too, that unit tests are just a part of an overall strategy. That's basically what Unicode Andy was saying in his post, although I only quoted the part dealing with unit testing.
"My main problem at the moment with unit tests is when I change a design I get a stack of failing tests. This means I'm either going to write less tests or make fewer big design changes. Both of which are bad things."
That quote right there tells me that Andy is probably not doing test-first development. I think the general idea of writing your tests first is that it causes you to really think about the design up front from an API point of view, and then deal with the implementation details later. Your refactorings shouldn't cause massive test failures unless you're changing the API, in which case you've already thought about the design changes and rewritten your tests to reflect them *before* you've actually changed the code.
Also, what's wrong with a stack of failing tests? That right there tells me what I have to change, and the likelihood of me forgetting to change something goes down because the big red bar is right there in front of me. Isn't that the whole point?
Yeah, testing up front can be a pain in the ass sometimes. I think that once you get the hang of it, you learn to make small design changes and test them as you go, instead of making sweeping changes all at once. TDD always makes me think I'm going slower, but the reality is usually the opposite, and the benefits are so huge.
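The small-steps rhythm described above is often summarized as red/green/refactor. As a sketch in Python (with a hypothetical `slugify` function, not anything from the post), a single step might look like:

```python
# Step 1 (red): write the test first, before slugify works.
def test_slugify_lowercases_and_joins():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): the simplest implementation that passes.
def slugify(text):
    # Lowercase, split on whitespace, rejoin with hyphens.
    return "-".join(text.lower().split())

# Step 3 (refactor): improve the internals at will; the test above
# stays put and flags any regression in behaviour immediately.
test_slugify_lowercases_and_joins()
```

Each design change is one short loop through this cycle, rather than one sweeping change that breaks a pile of tests at once.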
So to answer your question, TDD does reduce the refactoring burden for me, anyways.
I think this comes with experience. Ron Jeffries has a series of articles demonstrating several designs that reuse the same tests. How does he do that? It seems to me his tests are more stable because he tests behaviour rather than implementation. That is, he tests what should happen, not how it is done. http://www.xprogramming.com
My tests improved dramatically after I applied this approach. It implies you end up with a front-end object, which is the API you test against, and a set of objects without direct tests, whose existence is a result of refactoring.
If you are refactoring (aggressively or not) using the strict definition of the term, you should get relatively few tests failing at any point, since by definition you are not changing the behaviour, only the structure.
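One way to see the behaviour-versus-implementation distinction is a sketch like the following (Python, with a hypothetical `Cart` class): the behaviour test survives a refactoring of the internal storage; the implementation test does not.

```python
class Cart:
    """Hypothetical class whose internal storage we might refactor later."""
    def __init__(self):
        self._items = []          # internal detail, free to change

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

# Implementation test: coupled to internals; breaks the moment
# _items becomes a dict, even though the class still works.
def test_implementation():
    cart = Cart()
    cart.add("tea", 3)
    assert cart._items == [("tea", 3)]

# Behaviour test: coupled only to the public API; still passes
# after the refactoring, because the observable result is unchanged.
def test_behaviour():
    cart = Cart()
    cart.add("tea", 3)
    cart.add("jam", 2)
    assert cart.total() == 5
```

Testing what should happen, not how it is done, is what lets the same tests ride through several different designs.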
Sometimes you also need to refactor the tests, but that's a good thing, isn't it?
Maybe you're just changing too many things between running unit tests :-)
I had similar experiences with my first attempts at TDD. Sometimes you really do have to make that sweeping design change and break all of those maddeningly simple tests you wrote (mine numbered in the thousands). That's a nice, painful way to be turned off from test-first. I've since run into a partially satisfying solution on Len Holgate's blog:
He calls it just-in-time testing. Get your test framework in place for all your classes, and write the tests you feel will help you with the design. If something's too simple to bother with, don't bother. When it breaks (as it always does), revisit the tests and improve them. I don't feel very safe this way, and tend to err toward more tests, but it's a nice way to retain the ability to add tests quickly as you need them, or as the design fleshes out, without necessarily writing a failing test for every new line of code.
Another big help is moving object creation (other than in the creation tests themselves) to factories. I feel factories are generally overused, but for tests it's nice to be able to change a constructor or add some attributes without having to go back to 20 tests and add an irrelevant argument or method.
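For instance, here's a Python sketch (with hypothetical `User` and `make_user` names) of that factory idea: creation is routed through one helper, so a constructor change touches one function instead of twenty tests.

```python
class User:
    # Imagine this constructor recently grew a 'locale' argument.
    def __init__(self, name, email, locale="en"):
        self.name = name
        self.email = email
        self.locale = locale

def make_user(name="alice", email="alice@example.com", **overrides):
    """Test factory: the one place that absorbs constructor changes."""
    return User(name, email, **overrides)

def test_user_has_email():
    user = make_user()             # no irrelevant arguments cluttering the test
    assert user.email == "alice@example.com"

def test_user_locale_override():
    user = make_user(locale="fr")  # override only what this test cares about
    assert user.locale == "fr"
```

Each test names only the attributes it actually exercises; everything else is the factory's problem.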