This is the second article addressing the misconception that TDD ignores design. In the previous article, I explained how TDD acts as a design rot radar. In this article, I’ll explain why I think TDD also acts as a homing beacon for well-structured code.
Here’s my claim: TDD guides you to improved designs.
The solution to code rot is continual refactoring and the application of good design techniques, embodied by the SOLID design principles. When doing TDD, you can’t avoid applying the SOLID principles, knowingly or not. The Single Responsibility Principle suggests keeping related responsibilities together, which comes naturally when writing tests and code that match. As a module grows in responsibilities, the tests show it: single actions have many side effects, tests grow in complexity, setup becomes more difficult, and so does checking that the right things are happening.
Let’s say you have code that does its own date manipulation buried in some business logic. For thoroughly tested code, the date combinations multiply with the business combinations, increasing the number of scenarios that must be tested. If the date manipulation were extracted into its own module, it could be tested directly and simply. Tests for the date-dependent business logic are also simplified once the embedded responsibility is removed, because there are fewer combinations. The tests for the date module cover only date manipulation, and the business logic becomes more focused on its own responsibility. Consequently, both modules end up focused on a single responsibility.
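To make the extraction concrete, here is a minimal sketch. The names (`add_business_days`, `is_overdue`, the grace-period rule) are hypothetical, invented for illustration; the point is that once the date math lives in its own function, the business-rule tests no longer need to enumerate weekend and rollover edge cases.

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Date module: advance a date by N days, skipping weekends.
    Weekend and month-rollover edge cases are tested here, once."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            days -= 1
    return current

def is_overdue(invoice_date: date, grace_days: int, today: date) -> bool:
    """Business module: delegates the date math instead of embedding it,
    so its tests only cover the overdue rule itself."""
    return today > add_business_days(invoice_date, grace_days)
```

With the date logic separated, a test of `is_overdue` needs only a couple of scenarios (on time, overdue), rather than every business case crossed with every calendar quirk.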
When you find code that has a problem dependency, the discipline (or addiction) to write a test means that you need to break that dependency with a test-double. When you swap test-doubles in and out, you are using the Open-Closed, Liskov Substitution, and Dependency Inversion Principles. The natural result is loosely coupled code.
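A short sketch of that substitution, with hypothetical names (`PaymentGateway`, `FakeGateway`, `checkout` are invented for illustration): the code under test depends on an abstraction it owns (DIP), a test-double substitutes for the real implementation (LSP), and the code under test never changes (OCP).

```python
class PaymentGateway:
    """Abstraction owned by the business code (DIP)."""
    def charge(self, cents: int) -> bool:
        raise NotImplementedError

class FakeGateway(PaymentGateway):
    """Test-double: substitutable for the real gateway (LSP),
    and it records calls so tests can check what happened."""
    def __init__(self, succeed: bool):
        self.succeed = succeed
        self.charged = []

    def charge(self, cents: int) -> bool:
        self.charged.append(cents)
        return self.succeed

def checkout(gateway: PaymentGateway, cents: int) -> str:
    """Code under test: unchanged whether given a real or fake gateway (OCP)."""
    return "paid" if gateway.charge(cents) else "declined"
```

The test drives `checkout` through the fake, so no network, vendor account, or slow setup is needed.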
When your code depends on some third-party code, or code from some other subsystem or group that makes testing inconvenient, you have a choice to make: should I create a test-double for the problem dependency, or write an abstraction layer above it? For new code, the abstraction layer, or adapter, is usually the better choice. It limits knowledge of the fat interface and gives you more freedom to choose a different third party in the future. Your thin layer is a form of Interface Segregation. You can also view the layer as an application of DIP, shielding the higher-level code from the details of the third-party code. It also provides a flexibility point for applying OCP and LSP, both during tests and when you later decide to choose another third party. TDD guides the design.
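Here is a minimal adapter sketch, assuming a made-up vendor class. `ThirdPartyEmailer` stands in for a vendor library with a fat interface; the thin `Notifier` adapter exposes only the one narrow operation the application needs (ISP) and hides the vendor details (DIP). All names here are hypothetical.

```python
class ThirdPartyEmailer:
    """Stand-in for a vendor library with a fat interface
    (imagine dozens of methods for headers, MIME, attachments...)."""
    def send_mime(self, to, subject, body, attachments=None):
        return {"status": "sent", "to": to}

class Notifier:
    """Thin adapter layer: the one narrow method our application
    actually needs, keeping the fat interface out of our code."""
    def __init__(self, emailer: ThirdPartyEmailer):
        self._emailer = emailer

    def notify(self, to: str, message: str) -> bool:
        result = self._emailer.send_mime(to, "Notification", message)
        return result["status"] == "sent"
```

Swapping vendors later means rewriting only `Notifier`; during tests, a fake that implements `notify` (or a fake emailer) slots in at the same seam.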
This cause-and-effect diagram shows the relationships, and how TDD leads to better designs, if you listen to the code and do the refactoring.
All the SOLID principles are in play, whether you were schooled in them or not. This happens when you insist on having well-structured code and tests.
You don’t get this if you don’t refactor, but then you are not doing TDD. Refactoring includes the conscious step of looking for code smells and removing them. Refactoring centers on removing duplication and improving names. Duplication removal starts with creating useful helper functions, but often leads to extracting new modules with their own responsibilities, again applying SRP. Tested code has substitutable parts, giving it the attributes of a design using OCP, LSP, ISP, and DIP.
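The helper-extraction step can be sketched in a few lines. This is a hypothetical example (`format_money`, `receipt_line`, `total_line` are invented): the money-formatting expression was duplicated inline in both callers, and pulling it into a helper removes the duplication; the next refactoring step might move it to its own module.

```python
def format_money(cents: int) -> str:
    """Helper extracted from formatting logic that was
    duplicated inline in the two functions below."""
    return f"${cents // 100}.{cents % 100:02d}"

def receipt_line(name: str, cents: int) -> str:
    return f"{name}: {format_money(cents)}"

def total_line(cents: int) -> str:
    return f"Total: {format_money(cents)}"
```

Because both callers now share one helper, a formatting change (say, thousands separators) happens in one place, and the helper gets its own focused tests.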
My premise through these first two articles is that TDD is a radar for approaching code rot, as well as a guiding beacon toward modular design. TDD does have a code focus, so you may be wondering about the big picture. I’ll talk about that in the next article.