I agree with Robert Glass when he says, “100% test coverage is insufficient. 35% of the faults are missing logic paths.” It’s not a controversial claim, but I’d like to give my perspective on it.
If you have an automated unit test suite, low code coverage is an indication that you need more tests. Unfortunately, high code coverage does not tell you whether you have enough tests, or the right tests. Adding to Robert Glass’s observation: executed code is not necessarily tested code. Imagine a test case that runs through many lines of code but never checks that they are doing the right thing. At best, this is the “I don’t have any bad pointers” test.
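To make that concrete, here is a sketch of such a test. The names (parse_config, Config) are hypothetical stand-ins for any code under test:

```c
typedef struct
{
    int port;
    char host[64];
} Config;

/* Hypothetical function under test, defined elsewhere. */
void parse_config(const char *path, Config *config);

/* This "test" drives parse_config() through many lines of code, so a
   coverage tool reports those lines as covered -- but it never checks
   a single result. */
void test_parse_config_executes(void)
{
    Config config;
    parse_config("network.cfg", &config);
    /* No assertions. If we get here without crashing, the test
       "passes" -- the "I don't have any bad pointers" test. */
}
```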
Some silicon vendors extend the C language so that programmers can easily interact with the silicon. Using these extensions ties production code to the silicon vendor’s compiler, and consequently the code can only run on the target system. This is not a problem for production, but it is a problem for off-target unit testing.
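For example, the Keil C51 compiler extends C with sfr and sbit keywords for declaring memory-mapped registers (the addresses below are made up for illustration):

```c
/* Keil C51-style declarations -- convenient on-target, but a host
   compiler such as gcc rejects them outright. */
sfr  P1  = 0x90;     /* special function register at address 0x90 */
sbit LED = P1 ^ 0;   /* bit 0 of that register */

void led_on(void)
{
    LED = 1;         /* this line cannot even compile off-target */
}
```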
The good news is that we may be able to get around this problem without changing production code, which is one of our goals when adding tests to legacy code.
It’s day one of adding tests to your legacy C code. You get stopped dead when the compiler announces that the code you are coaxing into the test harness can’t be compiled on this machine. You are stuck on the “Make it compile” step of Crash to Pass.
Moving your embedded legacy C code (embedded C code without tests) into a test harness can be a challenge. The legacy C code is likely to be tightly bound to the target processor. This might not be a problem for production, but for off-target unit testing, it is a big problem.
In C we have a limited set of mechanisms for breaking dependencies. In my book, I describe link-time and function pointer substitution at length, but only touch on preprocessor stubbing.
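To make those two techniques concrete, here is a minimal sketch of each, using a hypothetical flash_write() dependency. With link-time substitution, the test build links a fake implementation in place of the production file; with function pointer substitution, production code calls through a pointer a test can repoint:

```c
/* flash.h -- the interface production code depends on */
int flash_write(unsigned addr, unsigned char value);

/* Link-time substitution: fake_flash.c is linked into the test build
   instead of the production flash.c. The linker resolves flash_write()
   to this fake; production code does not change. */
static unsigned last_addr;
static unsigned char last_value;

int flash_write(unsigned addr, unsigned char value)
{
    last_addr = addr;       /* record the call so a test can inspect it */
    last_value = value;
    return 0;               /* report success */
}

/* Function pointer substitution: production code calls through this
   pointer, initialized to the real function. A test's setup repoints
   it at a fake, and teardown restores it. */
int flash_write_impl(unsigned addr, unsigned char value);
int (*flash_write_fn)(unsigned, unsigned char) = flash_write_impl;
```

Both work without touching the production logic, but each has a cost: link-time substitution is all-or-nothing for a whole test executable, and the function pointer adds indirection to the production build.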
In this article we’ll look at the #include Test-Double as a way to break dependencies on a problem #include file.
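As a preview, here is the shape of the technique with a hypothetical vendor header, vendor_io.h. Production code keeps its ordinary #include; the test build simply puts a directory of test doubles first on the include path, so the preprocessor finds a harness-friendly fake instead of the real header:

```c
/* production source, unchanged: */
#include "vendor_io.h"   /* the vendor header full of compiler extensions */

/* mocks/vendor_io.h -- the #include Test-Double. Same name and include
   guard, but plain C that a host compiler can digest. */
#ifndef VENDOR_IO_H
#define VENDOR_IO_H

extern unsigned char fake_P1;   /* stand-in for a memory-mapped register */
#define P1 fake_P1

#endif
```

The test makefile then compiles with something like gcc -Imocks -Isrc, so the mocks directory wins the include search, provided the real vendor header does not sit in the same directory as the source file that includes it.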