In the last article, I added tests to existing code. So I did not really do Test Driven Development; I did Test After Development. Let’s do some TDD now and design the block erase function. I’ll go from the spec, to the test, to the code.
The data sheet for the device describes block erase like this:
Block Erase Command
The Block Erase command can be used to erase a block. It sets all the bits within the selected block to ‘1’. All previous data in the block is lost. If the block is protected then the Erase operation will abort, the data in the block will not be changed and the Status Register will output the error.

Two Bus Write cycles are required to issue the command.

- The first bus cycle sets up the Erase command.
- The second latches the block address in the internal state machine and starts the Program/Erase Controller.

If the second bus cycle is not Write Erase Confirm (D0h), Status Register bits b4 and b5 are set and the command aborts.

Erase aborts if Reset turns to VIL. As data integrity cannot be guaranteed when the Erase operation is aborted, the block must be erased again.

During Erase operations the memory will accept the Read Status Register command and the Program/Erase Suspend command, all other commands will be ignored.

Typical Erase times are given in Table 7, Program, Erase Times and Program/Erase Endurance Cycles.

See APPENDIX C, Figure 20, Erase Flowchart and Pseudo Code, for a suggested flowchart for using the Erase command.
*** Thanks to STMicroelectronics for use of this chart***
From the flow chart it looks like we will need these tests (sketched as an empty test list right after this list):
- Normal case where everything works fine (a.k.a. the happy path)
- Invalid programming voltage
- Invalid command sequence
- Erase error
- Erase a protected block error
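Here is one way that list might start life as an empty CppUTest test list. The TEST_GROUP boilerplate and the names of the last two tests are my guesses; only the first three test names actually appear later in this article.

TEST_GROUP(Flash)
{
    void setup()
    {
        /* reset the fake IO port expectations before each test */
    }
    void teardown()
    {
        /* a good place to verify no expectation was left unmet */
    }
};

TEST(Flash, EraseBlockHappyPath) {}
TEST(Flash, EraseBlockVppError) {}
TEST(Flash, EraseBlockCommandSequenceErrorDetected) {}
TEST(Flash, EraseBlockEraseError) {}              /* assumed name */
TEST(Flash, EraseBlockProtectedBlockError) {}     /* assumed name */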
Here are a few words on how the read status register works.
Read Status Register Command
The Status Register indicates when a program or erase operation is complete and the success or failure of the operation itself. Issue a Read Status Register command to read the Status Register’s contents. Subsequent Bus Read operations read the Status Register at any address, until another command is issued.

The Read Status Register command may be issued at any time, even during a Program/Erase operation. Any Read attempt during a Program/Erase operation will automatically output the content of the Status Register.
The status register table in the data sheet is a helpful reference on how the device is operated.
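As a stand-in for that table, here is a summary of the status bits as this article’s code ends up using them. It is derived from the enums at the end of the post, not copied from the data sheet, so check the data sheet for the authoritative bit assignments.

enum
{
    flashDone             = 1<<7,         /* Program/Erase Controller ready            */
    commandSequenceError  = 1<<4 | 1<<5,  /* both bits set: invalid command sequence   */
    eraseError            = 1<<5,         /* erase failed                              */
    vppError              = 1<<3,         /* Vpp out of range                          */
    writeToProtectedBlock = 1,            /* operation attempted on a protected block  */
};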
Here is the initial happy path test. To keep the test short, we can pretend that the block erase operation is complete after three polls.
enum
{
    eraseBlock = 0x20,
    blockNumber = 4,
    flashDone = 1<<7,
    flashNotDone = 0,
    vppError = 1<<3,
    clearStatusWord = 0x50,
};

TEST(Flash, EraseBlockHappyPath)
{
    Expect_FlashWrite(0x0, eraseBlock);
    Expect_FlashWrite(BlockOffset[blockNumber], 0xD0);
    Expect_FlashRead(0x0, flashNotDone);
    Expect_FlashRead(0x0, flashNotDone);
    Expect_FlashRead(0x0, flashDone);
    Expect_FlashWrite(0x0, clearStatusWord);

    ReturnType result = MyFlashBlockErase(blockNumber);

    LONGS_EQUAL(Flash_Success, result);
    Check_FlashWrite_Expectations();
}
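Expect_FlashWrite(), Expect_FlashRead() and Check_FlashWrite_Expectations() come from the fake IO port built in the last article. That code is not reproduced here, but a minimal sketch of the idea looks something like this; the types, the fixed-size expectation array, and the use of assert() for failure reporting are all assumptions for illustration.

/* Not the fake IO port from the previous article -- just a minimal sketch
 * of how expectation-style fakes like Expect_FlashWrite() can work. */
#include <assert.h>

typedef unsigned long  FlashAddress;   /* stand-in for the driver's address type          */
typedef unsigned short uCPUBusType;    /* stand-in; the real width comes from the driver  */

typedef enum { IO_WRITE, IO_READ } IoOperation;

typedef struct
{
    IoOperation op;
    FlashAddress address;
    uCPUBusType data;   /* value expected on a write, or value returned on a read */
} IoExpectation;

enum { MAX_EXPECTATIONS = 32 };
static IoExpectation expectations[MAX_EXPECTATIONS];
static int recorded;
static int used;

void Expect_FlashWrite(FlashAddress address, uCPUBusType data)
{
    IoExpectation e = { IO_WRITE, address, data };
    expectations[recorded++] = e;
}

void Expect_FlashRead(FlashAddress address, uCPUBusType dataToReturn)
{
    IoExpectation e = { IO_READ, address, dataToReturn };
    expectations[recorded++] = e;
}

/* The production code calls these; each call is checked against the
 * next recorded expectation, so any unexpected read or write fails. */
void FlashWrite(FlashAddress address, uCPUBusType data)
{
    assert(used < recorded);
    assert(expectations[used].op == IO_WRITE);
    assert(expectations[used].address == address);
    assert(expectations[used].data == data);
    used++;
}

uCPUBusType FlashRead(FlashAddress address)
{
    assert(used < recorded);
    assert(expectations[used].op == IO_READ);
    assert(expectations[used].address == address);
    return expectations[used++].data;
}

void Check_FlashWrite_Expectations(void)
{
    assert(used == recorded);   /* every expected operation actually happened */
    recorded = used = 0;        /* reset for the next test                    */
}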
BTW, I’m ignoring the timeout possibility in this article. The code to make that test pass looks like this:
ReturnType MyFlashBlockErase(uBlockType blockNumber)
{
    FlashWrite(ANY_ADDR, 0x20);
    FlashWrite(BlockOffset[blockNumber], 0xD0);

    while ((FlashRead(ANY_ADDR) & 0x80) != 0x80)
        ;

    FlashWrite(ANY_ADDR, 0x50);
    return Flash_Success;
}
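Since the timeout case is being ignored here, this is roughly what a bounded version of that polling loop might look like. MAX_ERASE_POLLS and the Flash_OperationTimeOut return value are assumptions, not part of the driver shown in this article.

enum { MAX_ERASE_POLLS = 100000 };   /* assumed limit; tune for the device's erase time */

ReturnType MyFlashBlockErase_WithTimeout(uBlockType blockNumber)
{
    long polls = 0;

    FlashWrite(ANY_ADDR, 0x20);
    FlashWrite(BlockOffset[blockNumber], 0xD0);

    while ((FlashRead(ANY_ADDR) & 0x80) != 0x80)
    {
        if (++polls >= MAX_ERASE_POLLS)
            return Flash_OperationTimeOut;   /* assumed error code */
    }

    FlashWrite(ANY_ADDR, 0x50);
    return Flash_Success;
}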
The next decision in the flow chart is about detecting and reporting
incorrect programming voltage during block erase. Here is the test:
TEST(Flash, EraseBlockVppError)
{
    Expect_FlashWrite(0x0, eraseBlock);
    Expect_FlashWrite(BlockOffset[blockNumber], writeEraseConfirm);
    Expect_FlashRead(0x0, flashDone | vppError);
    Expect_FlashWrite(0x0, clearStatusWord);

    ReturnType result = MyFlashBlockErase(blockNumber);

    LONGS_EQUAL(Flash_VppInvalid, result);
    Check_FlashWrite_Expectations();
}
If I were worried about how many reads it took to detect the Vpp problem, I could add some more Expect_FlashRead(0x0, flashNotDone) calls. Knowing the implementation of the driver function, I am not worried about it.
I made some bit-twiddling mistakes while trying to get this test working; happily, the tests caught my mistakes.
ReturnType MyFlashBlockErase(uBlockType blockNumber)
{
    FlashWrite(ANY_ADDR, 0x20);
    FlashWrite(BlockOffset[blockNumber], 0xD0);

    uCPUBusType status;
    do
    {
        status = FlashRead(ANY_ADDR);
    } while ((status & 0x80) != 0x80);

    FlashWrite(ANY_ADDR, 0x50);

    status &= ~0x80;
    if ((status & 0x08) == 0x08)
        return Flash_VppInvalid;

    return Flash_Success;
}
According to the spec, the device can generate a command sequence error. This test does not try to make an actual sequence error; it just makes the fake IO port return the command sequence error bits and makes sure that the driver detects it.
TEST(Flash, EraseBlockCommandSequenceErrorDetected)
{
    Expect_FlashWrite(0x0, eraseBlock);
    Expect_FlashWrite(BlockOffset[blockNumber], writeEraseConfirm);
    Expect_FlashRead(0x0, flashDone | commandSequenceError);
    Expect_FlashWrite(0x0, clearStatusWord);

    ReturnType result = MyFlashBlockErase(blockNumber);

    LONGS_EQUAL(Flash_BlockEraseFailed, result);
    Check_FlashWrite_Expectations();
}
The device error test and the erase-a-protected-block test are basically the same as the prior test, so I’ll leave them to your imagination. Here is my finished block erase function.
ReturnType MyFlashBlockErase(uBlockType blockNumber)
{
    FlashWrite(ANY_ADDR, 0x20);
    FlashWrite(BlockOffset[blockNumber], 0xD0);

    uCPUBusType status;
    do
    {
        status = FlashRead(ANY_ADDR);
    } while ((status & 0x80) != 0x80);

    FlashWrite(ANY_ADDR, 0x50);

    status &= ~0x80;
    if ((status & 0x08) != 0)
        return Flash_VppInvalid;
    else if ((status & 0x30) != 0)
        return Flash_BlockEraseFailed;
    else if ((status & 1) != 0)
        return Flash_BlockProtected;

    return Flash_Success;
}
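For the record, here is a sketch of the erase-a-protected-block test that was left to the imagination above. It follows the same pattern as the other error tests; only the test name is my invention.

TEST(Flash, EraseBlockProtectedBlockError)
{
    Expect_FlashWrite(0x0, eraseBlock);
    Expect_FlashWrite(BlockOffset[blockNumber], writeEraseConfirm);
    Expect_FlashRead(0x0, flashDone | writeToProtectedBlock);
    Expect_FlashWrite(0x0, clearStatusWord);

    ReturnType result = MyFlashBlockErase(blockNumber);

    LONGS_EQUAL(Flash_BlockProtected, result);
    Check_FlashWrite_Expectations();
}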
I have no idea if this will work on real hardware. But I do know the driver is doing what I expect it to do. I made some coding mistakes while I worked that I would otherwise have had to Debug On the Hardware (DOH!). I avoided DOH! for those mistakes. I expect that alone has paid for my effort to write the tests.
Next time we’ll compare my block erase function with the one put together by the pros and see what mistakes I made.
Oh yeah, here is the final set of enums too.
enum {
eraseBlock = 0x20,
writeEraseConfirm = 0xD0,
blockNumber = 4,
flashDone = 1<<7,
flashNotDone = 0,
vppError = 1<<3,
commandSequenceError = 1<<4 | 1<<5,
eraseError = 1<<5,
writeToProtectedBlock = 1,
clearStatusWord = 0x50,
};
I must admit, although I am doubtful about software-only testing of hardware interaction layers, that this post shows that it is possible to test a piece of hardware-interacting software code. It shows testing program flow (functional) and even incorporating the state of the device (next blog, called “Who says you can’t test drive a device driver?”).
One of the things to keep in mind here is how wide a hardware interface is and, consequently, the amount of work needed to simulate the interface. There is an economic break-even point for when to change from full software-only testing to unit testing on hardware.
Also, I often find that the exact hardware behavior is not always known, due to lacking specifications or to “obscure” drivers in between.
Other aspects I am curious to see in your next blogs are how you deal with multithreading and synchronization. Another common problem is related to timing and the fact that hardware tends to show variations, whereas the software layer is quite “rigid”.
Please keep up the good work in finding answers to the questions common to TDD/Unit testing of embedded software.
Hi Gernot,
I don’t mean this to be the only testing done of the hardware interaction layer. I would use this to test drive the code in the friendly confines of my development system. In this example, I worked from the manufacturer’s specification. You definitely want to run this code on the real hardware at the first chance. No doubt there would be surprises. When I compared my implementation with the reference implementation from the manufacturer, I found reads and writes that were not in the spec.
Getting parts of the driver working to my understanding of the spec narrows the problems that I might have when my code meets the hardware. I probably won’t have many silly coding mistakes. The code will do what the tests confirm it does. But I do expect to find mistakes in my understanding of how the software interacts with the hardware. I think it’s pretty cool: unit tests (programmer tests) tell me that my code does what I think it does. Testing in the hardware tells me that my code meets the requirements.
When changes are made to make the code work in the hardware, the development system tests are changed to match the needs of the hardware, locking in the good behavior. As maintenance continues, there is a good chance that problems will be introduced. But the development system based tests, if they are any good, will catch those side effect defects. Changing driver function A should not change driver function B.
I’ll get something on concurrency in a later post. But unit testing is not really geared to finding threading and synchronization problems. You can’t prove with unit tests that there are no concurrency problems. The first part of the advice is to separate the threading and synchronization logic from the application logic. Test the application logic for correct application behavior. A higher level load test is the most helpful at detecting and finding threading and synchronization problems.
James
Hi James,
Thanks for the clarification. If I understand correctly, the TDD tests are defined for verification of requirements that can be tested in the scope of the development machine. That helps with finding bugs early. The way we use TDD in our in-product software is that we use the tests to think about how we will implement the production code.
You also show this in your article. However, what I think is that multi-threading, synchronization, timing and such have to be designed in and implemented right from the start. This would suggest I want to deal with these issues in TDD.
A totally different aspect I experience is that the software interacts with hardware via a driver that is delivered with the hardware. Typically these drivers have a very wide interface. Preferably you wouldn’t want to write mocks for such a wide interface. Can you give hints on ways to deal with this?
Gernot
Regarding concurrency
I understand the desire to get threading built in from the start. That said, the first concern should be to keep your design modular. It is possible to keep separate the concerns of the application from the concerns of concurrency. Modularity is the key.
If you keep things modular and then you find that some aspect of the threading model is not correct, it will be easier to change. Mix application logic and threading too tightly together, and adjusting the concurrency model gets a lot harder.
Look to designs that rely less on shared data and more on message passing, or queued access to shared resources. This helps to isolate the parts of the code that care about the concurrency primitives, increasing the odds of getting it right. Doug Schmidt did a lot of good work on this in C++ for the ACE framework. I would use TDD to test my threading and synchronization constructs, but I would keep threading and synchronization as separate as possible from application logic. The app is complicated and concurrency is complicated; put them together and you multiply the complexity, keep them separate and the overall complexity is additive.
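To make the message passing idea concrete, here is a sketch in that spirit. The queue API and all the names are hypothetical, invented for illustration; the point is that the application code builds a message and hands it off, and only the queue implementation touches the concurrency primitives.

/* Hypothetical queue interface; the real one would come from your RTOS
   or from a small wrapper around it. */
typedef struct MessageQueue MessageQueue;
void MessageQueue_Send(MessageQueue * queue, const void * message, int size);

extern MessageQueue * sensorReadingQueue;

typedef struct
{
    int sensorId;
    int reading;
} SensorMessage;

void Application_ReportReading(int sensorId, int reading)
{
    SensorMessage message = { sensorId, reading };

    /* No locks in the application logic; synchronization lives inside
       the queue implementation. */
    MessageQueue_Send(sensorReadingQueue, &message, sizeof(message));
}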
On development system unit tests you should probably mock out the concurrency calls. Say your code wants to lock some resource using a mutex. With a mock, you can test that the right number of locks and unlocks happen, and you could test the order to guard against deadlock. But for the most part the tests are single threaded. There will be a chapter in my book on this.
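A tiny sketch of that kind of mock, assuming the production code locks through a thin wrapper of its own; the names here are hypothetical, not from any real library.

/* Test double for the lock wrapper; it only counts calls. */
static int lockCount;
static int unlockCount;

void MutexLock(void)   { lockCount++; }
void MutexUnlock(void) { unlockCount++; }

TEST(SomeDriver, LocksAndUnlocksBalance)
{
    lockCount = unlockCount = 0;

    SomeDriver_DoProtectedWork();        /* hypothetical code under test      */

    CHECK(lockCount > 0);                /* the critical section was entered  */
    LONGS_EQUAL(lockCount, unlockCount); /* and every lock was released       */
}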
Regarding mocking wide interfaces
How wide is the interface? Can you provide an example? Does the application need the whole interface?
My friends at Atomic Object have a tool called CMock for generating mocks. You could go that way.
I would look for design solutions as well. A layered approach could be used to provide a higher level, easier to use, interface for the application to access the hardware services. It will be easier to mock a higher level interface. To connect the higher level interface to the driver you write an adapter that takes your higher level calls and translates them into what the driver wants. This has all kinds of advantages. Better modularity, protection in application investment from hardware changes, portability…
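A rough sketch of that kind of adapter follows; every name here (the VendorDrv_* calls and the FlashStorage interface) is hypothetical, invented to show the shape. The application and its tests depend only on the narrow interface, and only the adapter knows about the wide vendor driver.

/* The narrow interface the application (and its mocks) depend on. */
typedef enum { STORAGE_OK, STORAGE_ERROR } StorageResult;
StorageResult FlashStorage_EraseRegion(unsigned int region);

/* Hypothetical wide vendor driver -- only the pieces the adapter needs. */
typedef int VendorDrvHandle;
enum { VENDOR_DRV_DEFAULT_DEVICE = 0, VENDOR_DRV_BLOCKING = 1 };
VendorDrvHandle VendorDrv_Open(int device);
int  VendorDrv_EraseSector(VendorDrvHandle handle, unsigned int sector, int mode);
void VendorDrv_Close(VendorDrvHandle handle);

/* The adapter translates the narrow call into whatever the vendor wants. */
StorageResult FlashStorage_EraseRegion(unsigned int region)
{
    VendorDrvHandle handle = VendorDrv_Open(VENDOR_DRV_DEFAULT_DEVICE);
    int status = VendorDrv_EraseSector(handle, region, VENDOR_DRV_BLOCKING);
    VendorDrv_Close(handle);

    return (status == 0) ? STORAGE_OK : STORAGE_ERROR;
}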
For example, a file system is a higher level interface than having applications directly accessing sectors on a hard drive. If there were no file system, there would be a lot more complexity in code that needed to save and retrieve information on the hard drive. If the apps used a lower level interface there would be increased complexity, leading to more difficulty in testing, leading to tightly coupled designs. Each new hard drive might require application changes.
The adapter might get you right back to the same problem of testing the code that uses the driver. Maybe the code using the driver can be thin enough to only need manual tests. More likely I would use an approach like the flash driver where you set the expectations on the IO read and write operations. You might not know exactly how the driver works, but you can discover the workings. Any unexpected read or write will cause a test failure. Once you see what write the driver does, in response to some call to the driver interface, you can add an expectation. This captures the de facto behavior in the test.
Hi James,
Sorry for the late response (holidays).
About the interfaces: these are fairly wide, like 200-plus functions or more. Besides that, as you already hinted, the behavior of the interface is never fully known. We often use hardware where we get drivers built by the vendor. What indeed seems the way to go (I guess) is the “in between layer”. I agree it should be small, but it is the same approach I am hearing from colleagues in in-product-software development.
Furthermore, I support your statement that the architecture should take care of modular designs. It indeed offers benefits for how you do your tests, the amount of modeling, reliability, etc. It also reduces complexity, and that’s another reason why I like TDD. My experience is that using TDD forces you to limit the number of dependencies in the code, because you will have to provide a mock/stub for each dependency (thus extra work). However, after applying TDD for some time (large-scale development projects, approx. 1 MLOC) we see that the TDD code (and xUnit code) grows to a substantial amount (40 – 60 % of the total code produced). This automatically brings a maintenance burden. What is wise to do then? Should some of the TDD tests be pruned, e.g. the ones that focus on classes in the “middle” (no interfaces with external code)? This is not dedicated to TDD development for tech apps of course, but perhaps you have a hunch for me anyway. Do you know of such large-scale developments that use TDD for several years?
I appreciate your answers.
Kind regards,
Gernot Eggen