“TDD ignores design” is a frequently stated misconception. Many people get this idea from TDD’s code focus. TDD does not call for the creation of any non-executable design documentation, so the questioning developer concludes that there is no design. But I say, “yes there is”.
TDD’s lack of narrative text or UML documentation should not fool you into believing that there is no design in TDD. There is plenty of design; design is continuous in TDD; design is not an event that has an end (unless you’ve stopped supporting the product). It is true that skilled test-driven developers generally do not create a lot of detailed up-front design. But they do create plenty of detailed design documentation: the tests and the well-structured code. The working example usages found in test cases are an excellent form of detailed documentation.
The activity of TDD is not passive about design; it detects design problems, and it guides you toward potential solutions. In this article I’ll describe how TDD is design rot radar. In the next couple of posts, I’ll show how TDD is a guiding beacon for good design, and how TDD relates to the big picture, or design vision.
Design Rot Radar
Design is essential. It is so important that TDD practitioners don’t just do design at the beginning of a development effort. They do design every day. TDD encourages good design at the detailed level by encouraging loosely coupled modules with high cohesion. These are a natural outcome when you test-drive your code. Let’s see how.
Your “well designed” systems that have spawned thousand-line functions could have benefited from TDD’s code-centric view, with its design-rot early warning system. Remember back when those functions and classes were reasonably sized?
As your code began to grow, you would have noticed that candidate code changes were hard or impossible to test. This would happen long before the code was carelessly transformed into out-of-control thousand-line functions. The “I can’t test this” blip on the radar is a warning of code problems to come.
Unlike the Not Invented Here feedback from the late-cycle review, test-driving gives you unambiguous design feedback in real time, well before it is too late to do something about it. You just have to listen to the code and the tests.
You see exactly where you could put the code for your current change, but you cannot get the test to exercise it. You have an urge to access the code’s private parts. It’s time to extract a new module or class.
As dependencies grow, tests become harder to write and you find odd relationships staring at you from the test cases. This test-perspective is telling you of unmanaged dependencies or growing responsibilities. The skilled test-driven developer would refactor the design. Maybe they wall off some dependencies, creating a new interface, a more abstract design. Testing becomes simpler and the design more modular. Also, the test-perspective makes you consider the ease of use of an interface, long before a specific implementation can pollute the API with low level details.
Notice I said the skilled test-driven developer. It takes time to learn how to design the tests and listen to what the tests and code are telling you. You can wean yourself off dependence on too much design up-front. Do your design up-front, but then test-drive its implementation. Choose specific usage scenarios or test cases to build a walking skeleton of the design. Keep track of up-front design decisions that you later had to toss out or revise. Next time try a little less up front design. See if you can convince yourself of what many test-driven developers have convinced themselves of.
It is natural to be concerned that if you don’t get it right the first time, changing will be difficult and dangerous. Given the high cost of manual retest, you would be right to think it difficult, dangerous and irresponsible. But TDD changes that. TDD is designed to support code evolution. The parts that don’t need to change are kept from accidentally changing by the safety-net made of tests.
But make no mistake about it: if you can’t get your code into a unit test, it’s not well designed.
In my (often wrong) opinion, TDD is about design and communication. I use examples (aka tests) to drive my design and when I complete a feature those examples expand my automated verification harness.
Something I find interesting is that TDD helped me change the way I code into a learning exercise. When I implement a feature, I code 2-3 different prototypes, driven from examples (very often hacking the production code, but I revert all the changes after each prototype). After learning from those prototypes, I do a final revert and do the real thing. Each prototype might take from 10 minutes to a couple of hours, but the final version usually takes 1/2 the time of the prototype and is usually way more flexible and well structured than any of the prototypes.
This sounds quite sensible, Augusto.
“You have an urge to access the code’s private parts. It’s time to extract a new module or class.”
If TDD really leads us to write smaller collaborating classes, that would be great. Although I have also seen lots of subclasses in test classes that open up a class under test by delegating method calls to access internal/protected methods.
Should we consider that bad practice?
Hello Melle
That is a good question. Subclassing for test is certainly OK in a legacy code situation, as well as for abstract classes. I am more in favor of testing through public interfaces and not letting tests know the internals directly. Private internal parts are usually created for code readability. Given this, they are covered through the public interface. When this starts to break down, in the form of needing access to private parts, the code is asking to be refactored.
I am reluctant to say never do it, as any decision is a judgement call. Others may disagree. But I do take the need to access private parts as a sign of growing complexity and a likely violation of single responsibility. In the legacy code situation, don’t forget the last step of a legacy code refactoring is to refactor. Permanent subclassing for test is a code smell, whereas temporary is fine. Let me add that having the choice between no tests and tests with smells, I’ll take the latter.
Thanks for the question, James
It should not be surprising that testability is a design requirement. Virtually every other industry has already discovered that in order to ship products those products must be designed to be tested. Ours is one of the last hold-outs.
A module that is hard to test is poorly designed because testability is a design requirement. Creating systems that are hard to test is irresponsible at best.
James
Great blog. I consider myself some way down the road of TDD, but haven’t yet reached enlightenment. My current feeling about design in relation to TDD is most easily described through example.
Imagine you need to write a service that takes a string arithmetic expression and returns the calculated result. It’s trivial to write a test for that, but instinctively, in code terms, you know you’ve got some hoops to jump through. Now it’s here that I don’t see the point in allowing TDD to drive the design – yet.
In order to solve this problem, in a clean flexible way, you will want to apply some compiler theory, a bit of lexing, parsing, construction of a tree, and finally the calculation. I want to do that design process outside of a TDD cycle – I can’t see how starting with a trivial test, and making it pass straight away, gives me anything useful (an end to end test, on the other hand yes, absolutely). I need some design, some theory, algorithms, a high level view of where I’m going, rather than flying straight into a test hoping for the best. How would TDD ever drive out the visitor pattern? Or a precedence climbing parser?
The TDD cycle would start for me with the lexer, then moving component by component, until my trivial end-to-end test passes. A kind of interweaving of on-paper high level design and lower level TDD to fill out the code. You seem to talk about using TDD from the very highest levels of design all the way down to the details.
What am I missing? Am I simply not ‘trusting the tests’ enough? Do you kind of get what I’m driving at?
Matt
I see you have a vision of the design. That is what I do as well, though I think I would have a tokenizer, an operand stack, and a value stack. I’m no compiler writer though. I pick some place to start; I think of it as inside-out, ignoring the UI and breaking dependencies on outputs so they can be intercepted.
I would write a test list. My likely first two tests would evaluate a lone number and then a single addition.
Those two tests let me get the API to where I liked it. There would be one class/module, with little code.
Further tests would make me generalize the idea of an operator. Other tests would make me introduce multiple operations with the same precedence. Still more tests would make me introduce operator precedence.
As the core of the expression evaluator became more capable, you might have extracted a stack (or maybe you were using an already-tested stack from a library).
There would be other tests as well.
I am not sure that this example illustrates what you are looking for. If you look at the LightScheduler example from my book, it walks from design vision to first test and beyond.
One of my points in these two articles is that when you test-drive, the tests may start telling you of design problems you did not anticipate. For example, operand stacks and value stacks are just stacks, but it is useful to ask the operand stack for the precedence of the top operand when deciding if a new operand should be pushed on or evaluated.
It is not that the tests and code design themselves. Rather, they give signs that reveal problems that should be addressed (the design rot radar). If you address them using duplication reduction and good naming, then your design should become more SOLID.