Test Driven Development
Lately, here and there, I've been hearing a lot about the so-called Test Driven Development (TDD) methodology.
Let me explain how it works. The methodology is based on having the tests drive the development of the application (hence the name). So our first step is, as usual, to plan what the application needs to do, and then to write a test for each of the functions we are going to develop, so that we end up with a test for every function (don't get too excited about that...)
The next step is, obviously, to develop the functionality that passes the test, and just like that, the requirement is done and tested...
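To make the cycle concrete, here is a minimal sketch in Python using the standard `unittest` module (the function name `parse_version` and its behaviour are made up for illustration): first the test is written against code that doesn't exist yet, then just enough code is written to make it pass.

```python
import unittest

# Step 1: write the test first. At this point parse_version does not
# exist yet, so running the suite fails -- the "red" phase of TDD.
class TestParseVersion(unittest.TestCase):
    def test_splits_dotted_version_string(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

# Step 2: write just enough code to make the test pass -- the "green"
# phase. (A hypothetical example, not code from any real library.)
def parse_version(text):
    return tuple(int(part) for part in text.split("."))
```

Run with `python -m unittest` and the test passes; under TDD you would now refactor and repeat for the next requirement.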
The full description of the technique includes a few more steps and guidelines on how development should be carried out at each stage. The Wikipedia page has more information for those interested (hopefully not many of you after reading this article).
Around this methodology a lot of frameworks have emerged: unit testing frameworks, mocking frameworks (which let us create stand-ins for functionality that hasn't been developed yet), and inversion of control frameworks (which intercept certain method calls and let us insert our code at certain points).
These frameworks are, each in its own way, extremely helpful for testing our classes and our libraries. They let us develop tests easily, automatically verify that certain things are working and, in general, help the developer a lot.
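As an example of the mocking idea, here is a sketch in Python with the standard `unittest.mock` module (the class names `ReportService` and the `load_reports` method are hypothetical): the mock stands in for a storage layer that hasn't been written yet.

```python
import unittest
from unittest.mock import Mock

# Hypothetical class under test: it depends on a storage backend
# that doesn't exist yet.
class ReportService:
    def __init__(self, storage):
        self.storage = storage

    def latest_title(self):
        reports = self.storage.load_reports()
        return reports[-1]["title"] if reports else None

class TestReportService(unittest.TestCase):
    def test_returns_title_of_last_report(self):
        # The mock plays the role of the undeveloped storage layer.
        storage = Mock()
        storage.load_reports.return_value = [{"title": "A"}, {"title": "B"}]
        service = ReportService(storage)
        self.assertEqual(service.latest_title(), "B")
        storage.load_reports.assert_called_once()
```

The mock records how it was called, so the test can also verify that the dependency was used exactly once.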
But back to the test driven development...
If you stop a minute to think about the name itself, just the name, a red warning light should switch on. "Test Driven" development? Really?
A test is something you use to confirm that something works as it should. When in doubt, we test. That's the purpose of a test.
So, how can development be driven by tests? If we really think about it, we are saying something like:
I have to program a class in charge of compressing a data stream. Instead of doing that, I'm going to develop a test first, and only then will I develop the class itself.
Some things, when said fast enough, can sound perfectly right, but I can't stop hearing the old saying: "if you work for a living, why do you kill yourself working?"
The first problem with TDD is mentioned on the Wikipedia page itself: "Test-Driven Development is difficult to use in situations where full functional tests are required to determine success or failure. Examples of these are user interfaces, programs that work with databases, and some that depend on specific network configurations". Well, most of today's applications, from the smallest to the largest, work against a database, so that's a pretty big restriction.
We find the second problem when we start working in the real world, not in the fictional world where some academics seem to live. In the real world we have milestones, assigned resources and, above all, money involved.
One of the most important lessons I've learned from the business world is that development, this thing we work on, is a business: things should be done as well as possible, but never forgetting that this is for money. That, among other things, means that if there's no product, you don't get paid, so developing a thousand tests in pursuit of the perfect product with the perfect methodology will get you nowhere if the product isn't finished by the milestone.
Another fact about the development world is that deadlines tend to be squeezed as much as possible (sometimes too much). The sales people negotiating with the client usually don't have a deep understanding of the low-level details of what they're selling; they can't estimate exactly (nor are they expected to) how long the project is going to take, and even if they could, or had perfect information from the developers... the important thing is closing the sale. If you have a two-million-dollar contract, for example, you're not going to throw it away just because the client wants it two months earlier; you sell it first and then we'll see. After all, in most cases the client will rather give you another two months to finish than throw away two years of development.
Given that, having to spend precious time writing tests before even starting to develop makes no sense. If something that should take 50 hours takes 65 because we spend 15 writing tests, that may not look like a big deal, but on a 7,000-hour project that same ratio means 2,100 hours writing tests; that is, a man-year spent on tests.
And be careful: even if we implement unit tests for everything, that doesn't mean we can skip the rest of the testing, that is, integration tests, system tests, etc. And it doesn't guarantee that everything will be OK either.
On the other hand, developing tests, no matter how much help we get from this or that framework, is prone to errors too; after all, programming is programming, no matter what you are programming. Should we write tests for the tests as well? By the way, have I mentioned how extremely hard (almost impossible) it is to write unit tests for multithreaded applications?
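A small sketch of why multithreaded code resists unit testing, using Python's standard `threading` module (the `Counter` class is invented for illustration): the unsynchronised version has a race condition, but a test may pass or fail depending on how the threads happen to interleave on each run.

```python
import threading

class Counter:
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def unsafe_increment(self):
        # Read-modify-write without a lock: two threads can read the
        # same value and both write value + 1, losing an increment.
        self.value += 1

    def safe_increment(self):
        # The lock serialises the read-modify-write.
        with self._lock:
            self.value += 1

def run(increment, threads=8, iterations=10_000):
    # Hammer the given increment callable from several threads.
    workers = [
        threading.Thread(target=lambda: [increment() for _ in range(iterations)])
        for _ in range(threads)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```

A test asserting that `safe_increment` yields `threads * iterations` always passes; the same assertion against `unsafe_increment` may pass on one run and fail on the next, which is exactly the problem: the test's verdict depends on scheduling, not on the code.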
Of course, I can only speak from my own experience, and I still haven't found anybody successfully using this methodology in a big project. In my experience, tests are written to fix what is failing or to check things you're in doubt about; that is, you write a test when you need it, not as a general rule. And for that, frameworks like nUnit are really useful.
In conclusion, the most innovative thing is not always the best thing to do; having an entry on Wikipedia doesn't mean it works, and neither does the number of books published about it. Write tests for the things that need to be tested. I usually follow these guidelines (and my common sense):
- There is an error and you know which class is raising it, or which section of code it's in (or so you think).
- You're not so sure about the correctness of a section of code, and writing a test is less work than visually inspecting the code or debugging it.
- You are looking for a memory leak in the code.
- You're developing a class library. In this case it's extremely useful to write tests for the external interfaces of the library. Class libraries are optimized and modified often, so it's very useful to be able to check easily that the programs using it will keep working after the changes; that is, to make sure we haven't messed anything up with the update.
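The last guideline can be sketched like this (a hypothetical `BoundedCache` library class, again with Python's `unittest`): the test pins down only the public contract, so the internals can be optimised or rewritten and the test still tells us whether callers are safe.

```python
import unittest

# Hypothetical library class: the internals change often, but the public
# contract (put/get, eviction of the oldest entry when full) must not.
class BoundedCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = {}  # dicts preserve insertion order in modern Python

    def put(self, key, value):
        if key in self._data:
            del self._data[key]
        elif len(self._data) >= self.capacity:
            oldest = next(iter(self._data))  # evict oldest inserted entry
            del self._data[oldest]
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

# The test touches only the external interface, never the internals, so
# it survives an internal rewrite and catches contract-breaking changes.
class TestBoundedCacheContract(unittest.TestCase):
    def test_evicts_oldest_when_full(self):
        cache = BoundedCache(capacity=2)
        cache.put("a", 1)
        cache.put("b", 2)
        cache.put("c", 3)  # "a" should be evicted
        self.assertIsNone(cache.get("a"))
        self.assertEqual(cache.get("b"), 2)
        self.assertEqual(cache.get("c"), 3)
```

If a later optimisation swaps the dict for some other structure, this test keeps verifying the one thing callers depend on.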
In the real world it's unfeasible to write tests for everything; it takes too long and it's just unrealistic.