During my TDD session at the Embedded Systems Conference yesterday, I did a demo. Before the demo, I made the case for TDD as a way to prevent bugs (see Physics of TDD). For the live demo I usually code on my Mac and run the tests there as well. The question always comes up: “You are running tests on your PC, can you run them on the target?” or maybe “Sure, you can TDD on a PC, but what about the real hardware?”
Usually this question from the crowd comes at exactly the right time for me to introduce the topic of the Target Hardware Bottleneck. Running code in the target is expensive, and that cost may run from priceless (before there is hardware) to just an insidious waste of time. Here are some of the problems and wastes associated with unit testing, manual or automated, in the target. In your development efforts, you may experience some of these realities.
- There is no hardware, it’s being concurrently developed.
- The hardware is expensive, so team members have to share it.
- Debugging on Hardware (DOH!) is slow, with tools that may be less capable than the development system native tools.
- The hardware has bugs; as if DOH! weren’t slow enough, buggy hardware makes it extra difficult.
- Building code for the target is slow.
- Downloading to the target is slow.
- The embedded processor is probably slow too.
Usually, I just wave my hands and talk about these problems, appealing to people’s common sense and a plausible chain of logic. I did the hand waving as usual, but added something to make it real. I brought my whole embedded test kit with me: my old Sony laptop, an ARM9 evaluation board, a J-Link/JTAG hardware debug probe, and the necessary cables.
(Before the start of my session I had 15 minutes to set up, and I needed every second of it. The AV tech is trying to wire me with a mike; I’m plugging everything in. At T-2 minutes the output window from the ARM9 target and my Mac terminal window are not syncing. Starting to worry… but at T+1 minute I was ready to go.)
After the question about testing on the hardware, I started my demo, first showing the feedback cycle on the development system: a simple change to the code, then the chord Apple-Shift-S to trigger the make+test. Fast feedback: all tests passing in less time than a deep breath, about two seconds. You can see from the output below that the tests themselves run in 2 ms.
.................................................. ................ OK (56 tests, 56 ran, 61 checks, 0 ignored, 0 filtered out, 2 ms)
Now for the embedded cross-platform tool chain. The rodent is needed, so it’s hands off the keys: click – wait 16 seconds – click again – wait – tests completed about 16 seconds later.
Run Tests .................................................. ...... OK (56 tests, 56 ran, 161 checks, 0 ignored, 0 filtered out, 15550 ms)
32 seconds for feedback might not seem like that long to wait, but it will only get worse; 56 tests are not a lot. Even if build time and load time stay constant and fast enough, the 7000:1 growth in test execution time should be a concern. Granted, your mileage will vary: my PC is old and slow, my target and tools are what they are, and my Mac is not the latest. You should measure for yourself. But 32 seconds seems like an eternity when you’ve grown accustomed to two. You’ll stop running the tests as often, and then you will find yourself debugging.
TddCycleTime = BuildTime + LoadTime + TestTime.
The BLT time needs to be under 10 seconds. At least for my environment, target-only TDD is not practical; the development system is needed to get BLT time down and developer efficiency up. What is your BL (build plus load) time while manually debugging? Are you wasting time? Seconds, minutes, or hours you will never reclaim!
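Plugging the demo’s rough numbers into the formula makes the gap concrete (an assumption on my part: the first ~16-second wait is mostly the cross build, and the second wait is dominated by the 15.5-second test run shown in the output, leaving well under a second for the download):

```
Target: BuildTime + LoadTime + TestTime ≈ 16 s + <1 s    + 15.5 s  ≈ 32 s
Native: BuildTime + LoadTime + TestTime ≈  2 s +  0 s    + 0.002 s ≈  2 s
```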
The moral of the story: make code testable on your development machine. But you must still test on the target periodically to mitigate the risks of development-system testing. These risks include:
- Compiler feature differences
- Compiler bugs
- Library differences
- Library bugs
- Endian issues
- Structure packing
When you run on the hardware, you don’t want to be finding problems that unit tests could have found. Save your time for tracking down the hardware integration problems.