When TDD goes red

This article was first published on August 9, 2007, at Dotnet @ Kape ni LaTtEX

The past four days have been particularly grueling: I grossly underestimated the singleton abuse I've been fixing in the company application. In the end I fixed more than 500 compile errors. There are 330 compile errors left, but I've given up on them.

Why?

Two reasons: the remaining errors were all in the unit tests, and those tests had themselves abused the very singleton I was fighting against.

Facing this, I started to think that the use of Test-Driven Development has, to some extent, failed in this project. I hope I don't offend any previous project members (no offense is intended), but here are my insights as to why TDD failed here:

TDD was treated as a task, when it should have been treated as an approach. Enforcing TDD on those who haven't heard of it or find it preposterous and expensive makes the affair an uphill battle, and once the enforcement stops (e.g., its proponents leave), the other team members regress into easier, test-less or test-last coding.

There was also a failure to recognize that TDD is not about tests; it's about design. The rampant singleton abuse in the unit tests made this obvious: instead of asking "WTF are these singleton = value; statements doing in my tests?", the test writers just propagated the singleton into the tests. 330 times.
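To illustrate the pattern (this is a minimal sketch, not the actual project code -- the AppSettings and OrderRepository names are made up, and I'm assuming NUnit), the tests kept doing something like this:

```csharp
using NUnit.Framework;

// Hypothetical singleton, standing in for the real one in the project.
public sealed class AppSettings
{
    private static AppSettings current = new AppSettings();

    public static AppSettings Current
    {
        get { return current; }
        set { current = value; }
    }

    public string ConnectionString { get; set; }
}

// Hypothetical class under test that reaches straight into the singleton.
public class OrderRepository
{
    public string Describe()
    {
        return "Orders from " + AppSettings.Current.ConnectionString;
    }
}

[TestFixture]
public class OrderRepositoryTests
{
    [Test]
    public void Describe_UsesConfiguredConnection()
    {
        // The smell: the test mutates global state instead of the pain
        // prompting a design change so the dependency can be passed in.
        AppSettings.Current = new AppSettings { ConnectionString = "TestDb" };

        OrderRepository repository = new OrderRepository();

        Assert.AreEqual("Orders from TestDb", repository.Describe());
    }
}
```

If the design had been allowed to respond to the tests, that singleton = value; line would have turned into a constructor parameter instead of global state shared by 330 tests.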

The unfortunate consequence is that the build-server-enforced tests were made to pass, whatever it took. As I went through the test classes, many tests were either commented out or marked with an Ignore attribute. We had "Deadline Driven Development" written all over the comments, literally, often as the justification for each Ignore. Unfortunately, we don't know what to do with these tests anymore, because we have no idea whether they are up to date with our current requirements.
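The pattern looked roughly like this (the test name and comment are representative, not copied from the code base, and again I'm assuming NUnit's Ignore attribute):

```csharp
using NUnit.Framework;

[TestFixture]
public class OrderTests
{
    // Representative of what we found: the test is silenced rather than
    // brought in line with the current requirements.
    [Test]
    [Ignore("Deadline Driven Development")]
    public void Order_Total_Includes_Tax()
    {
        // assertions commented out or left stale
    }
}
```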

However, I'm not giving up on unit-testing entirely.

I'm proposing to our lead that we start with a clean slate with regard to the tests, and move towards a Behavior-Driven Development (BDD)-like approach, wherein we write tests to check whether we fulfill our requirements. That is, we will enforce writing tests that check whether the code does what the system requirements say it should. The nitty-gritty won't matter for the time being -- no need to test each and every method. A rough sketch of what I mean follows below.
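As a sketch of the idea (the Invoice class and the discount requirement are invented for illustration, and the test again assumes NUnit), a requirement-level test would look like this rather than one test per method:

```csharp
using NUnit.Framework;

// Minimal stand-in for whatever domain class fulfills the requirement.
public class Invoice
{
    private readonly decimal subtotal;

    public Invoice(decimal subtotal)
    {
        this.subtotal = subtotal;
    }

    public decimal Total
    {
        get { return subtotal > 1000m ? subtotal * 0.95m : subtotal; }
    }
}

[TestFixture]
public class InvoiceRequirements
{
    // Hypothetical requirement: "An invoice over 1,000 gets a 5% discount."
    [Test]
    public void Invoice_Over_One_Thousand_Receives_Five_Percent_Discount()
    {
        Invoice invoice = new Invoice(2000m);

        Assert.AreEqual(1900m, invoice.Total);
    }
}
```

The point is that the test is named and scoped after the requirement, so anyone reading it can tell whether it is still relevant -- exactly what we can no longer tell about the tests we have now.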

This is definitely not genuine BDD (we don't exactly have user stories -- system requirements can never be equated with user stories), but I hope it will make it easier for the team to appreciate the tests.

I also hope that it is a step in the right direction.

Jon Limjap

Microsoft MVP for Visual Studio and Development Technologies | Technical Advisor at PageUp | Philippine .NET User Group Lead | Photographer | Scale Modeler

Manila, Philippines