In the last article, the OSSemPend() test-double was coded to handle a specific application and test need. The semaphore was being used to signal when there is a message to process. It was the first need for an OSSemPend() test-double, so it was quickly developed. As more RTOS-dependent code is brought under test, a more general solution will be needed.
In this article, we’ll look at a test double that can be customized for each application.
It’s hard to anticipate what every application that blocks on a semaphore is waiting for; I think it is safe to say it is impossible. So each test case that uses the test-double needs to be able to customize it.
A few generic capabilities that users of the test-double may want are: to inspect the parameters passed, to simulate output results, and to count the number of calls. That’s all easy. Let’s look at this easy stuff first, then we’ll look at how to customize the test-double to establish application-specific conditions that would normally be created asynchronously by a separate thread or ISR posting to the semaphore.
Here are the tests that document how the generic behavior of the test-double works:
```cpp
#include "CppUTest/TestHarness.h"

extern "C"
{
#include "ucos_ii.h"
#include "uCosIITestDouble.h"
}

TEST_GROUP(uCosIITestDouble)
{
    OS_EVENT event;
    INT8U error;

    void setup()
    {
        error = -1;
        OSSemPend_Fake_Reset();
    }

    void teardown()
    {
    }
};

TEST(uCosIITestDouble, OSSemPend_sets_error_to_zero_by_default)
{
    OSSemPend(&event, 0, &error);
    LONGS_EQUAL(0, error);
}

TEST(uCosIITestDouble, OSSemPend_counts_calls)
{
    OSSemPend(&event, 0, &error);
    LONGS_EQUAL(1, OSSemPend_fake.call_count);
}

TEST(uCosIITestDouble, OSSemPend_remembers_parameters)
{
    OSSemPend(&event, 1000, &error);
    POINTERS_EQUAL(&event, OSSemPend_fake.event);
    LONGS_EQUAL(1000, OSSemPend_fake.timeout);
    POINTERS_EQUAL(&error, OSSemPend_fake.error);
}

TEST(uCosIITestDouble, OSSemPend_returns_what_i_tell_it_to)
{
    OSSemPend_fake.return_this = 4;
    OSSemPend(&event, 0, &error);
    LONGS_EQUAL(4, error);
}
```
Here is the implementation of the OSSemPend() test-double that remembers parameters, counts calls, and lets the caller control the returned result.
```c
#include "uCosIITestDouble.h"
#include <string.h>

OSSemPend_Fake OSSemPend_fake;

void OSSemPend(OS_EVENT *event, INT32U timeout, INT8U *error)
{
    OSSemPend_fake.event = event;
    OSSemPend_fake.timeout = timeout;
    OSSemPend_fake.error = error;
    *error = OSSemPend_fake.return_this;
    OSSemPend_fake.call_count++;
}

void OSSemPend_Fake_Reset(void)
{
    memset(&OSSemPend_fake, 0, sizeof(OSSemPend_fake));
}
```
The header file looks like this:
```c
#ifndef uCosIITestDouble_H_
#define uCosIITestDouble_H_

#include "ucos_ii.h"

typedef struct OSSemPend_Fake
{
    OS_EVENT *event;
    INT32U timeout;
    INT8U *error;
    int return_this;
    int call_count;
} OSSemPend_Fake;

extern OSSemPend_Fake OSSemPend_fake;

void OSSemPend_Fake_Reset(void);

#endif /* uCosIITestDouble_H_ */
```
There is nothing too surprising about this test-double implementation; it is simple and can be written in minutes. Notice that the test-double includes the production code header file ucos_ii.h, which provides all the production-code defines and function declarations.
If OSSemPend() is called more than once in a test case, we’ll need to capture a series of parameters and be able to provide a series of return results too. This may be a bit tedious to set up, but it’s just arrays and indexes. Let’s agree we may need those things, but not now.
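To show what “just arrays and indexes” means, here is a sketch of how the fake could be extended to record a series of calls and script a series of results. The capacity, the plural-field layout, and the stand-in type definitions are my assumptions; they are not from the article.

```c
/* Sketch: extending the OSSemPend() fake to record a series of calls.
   The stand-in typedefs below replace ucos_ii.h for this self-contained
   example; a real version would include the production header. */

typedef unsigned char INT8U;
typedef unsigned int  INT32U;
typedef struct OS_EVENT { int dummy; } OS_EVENT;

#define OSSEMPEND_FAKE_MAX_CALLS 10

typedef struct OSSemPend_Fake
{
    OS_EVENT *event[OSSEMPEND_FAKE_MAX_CALLS];    /* one slot per call */
    INT32U timeout[OSSEMPEND_FAKE_MAX_CALLS];
    INT8U return_this[OSSEMPEND_FAKE_MAX_CALLS];  /* scripted results */
    int call_count;
} OSSemPend_Fake;

OSSemPend_Fake OSSemPend_fake;

void OSSemPend(OS_EVENT *event, INT32U timeout, INT8U *error)
{
    int i = OSSemPend_fake.call_count;
    if (i < OSSEMPEND_FAKE_MAX_CALLS)
    {
        OSSemPend_fake.event[i] = event;       /* capture the i-th call */
        OSSemPend_fake.timeout[i] = timeout;
        *error = OSSemPend_fake.return_this[i];
    }
    OSSemPend_fake.call_count++;
}
```

A test would script `return_this[0]`, `return_this[1]`, and so on before exercising the code under test, then check the captured `event` and `timeout` slots afterward.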
Let’s add the hooks to the test-double that allow it to do anything while it is pretending to wait for some concurrent activity to OSSemPost() its OS_EVENT. Let’s see the test:
```cpp
extern "C"
{
#include "ucos_ii.h"
#include "uCosIITestDouble.h"

static int my_custom_OSSemPend_called;

void my_custom_OSSemPend(OS_EVENT *pevent, INT32U timeout, INT8U *perr)
{
    my_custom_OSSemPend_called++;
}
}

TEST_GROUP(uCosIITestDouble)
{
    OS_EVENT event;
    INT8U error;

    void setup()
    {
        error = -1;
        OSSemPend_Fake_Reset();
        my_custom_OSSemPend_called = 0;
    }

    void teardown()
    {
    }
};

TEST(uCosIITestDouble, OSSemPend_custom_action)
{
    OSSemPend_fake.custom_action = my_custom_OSSemPend;
    OSSemPend(&event, 0, &error);
    LONGS_EQUAL(1, my_custom_OSSemPend_called);
}
```
The custom_action is a function pointer with the same signature as OSSemPend(). When provided, custom_action is called by the test-double implementation of OSSemPend(). This means the test-double can delegate to a function that does anything, allowing it to simulate things happening concurrently.
The header now looks like this:
```c
#ifndef uCosIITestDouble_H_
#define uCosIITestDouble_H_

#include "ucos_ii.h"

typedef void (*OSSemPend_FPointer)(OS_EVENT *pevent, INT32U timeout, INT8U *perr);

typedef struct OSSemPend_Fake
{
    OS_EVENT *event;
    INT32U timeout;
    INT8U *error;
    int return_this;
    int call_count;
    OSSemPend_FPointer custom_action;
} OSSemPend_Fake;

extern OSSemPend_Fake OSSemPend_fake;

void OSSemPend_Fake_Reset(void);

#endif /* uCosIITestDouble_H_ */
```
The test-double implementation looks like this:
```c
#include "uCosIITestDouble.h"
#include <string.h>

OSSemPend_Fake OSSemPend_fake;

void OSSemPend(OS_EVENT *event, INT32U timeout, INT8U *error)
{
    OSSemPend_fake.event = event;
    OSSemPend_fake.timeout = timeout;
    OSSemPend_fake.error = error;
    *error = OSSemPend_fake.return_this;
    OSSemPend_fake.call_count++;

    if (OSSemPend_fake.custom_action)
        OSSemPend_fake.custom_action(event, timeout, error);
}

void OSSemPend_Fake_Reset(void)
{
    memset(&OSSemPend_fake, 0, sizeof(OSSemPend_fake));
}
```
When a test case does not need the custom behavior, the test leaves custom_action set to zero and no custom action is called.
Let’s see how this would be used to test the MessageProcessor in a home automation system that turns lights on and off at scheduled times.
```cpp
TEST(MessageProcessor, schedule_a_light)
{
    input = "sched light 5 turnon Monday 20:00";
    MessageProcessor_ProcessNextMessage();
    CHECK_TRUE(LightScheduler_Cancel(5, MONDAY, 1200));
}
```
The input message asks to schedule light number 5. MessageProcessor_ProcessNextMessage() waits at the semaphore for a message, then interprets it and causes the light to be scheduled. The last line of the test tries to cancel the schedule. If light 5 is scheduled for the 1200th minute of Monday (20:00), LightScheduler_Cancel returns TRUE; if it’s not in the schedule, it returns FALSE.
We’re testing RTOS dependent code off the target in a repeatable and productive manner!
Here is the test fixture:
```cpp
extern "C"
{
#include "ucos_ii.h"
#include "uCosIITestDouble.h"
#include "MessageProcessor.h"
#include "LightScheduler.h"

void InputQueue_Put(char);
int MessageProcessor_ProcessNextMessage(void);

const char *input;

void fake_populates_InputQueue(OS_EVENT *pevent, INT32U timeout, INT8U *perr)
{
    while (*input)
    {
        InputQueue_Put(*input);
        input++;
    }
}
}

TEST_GROUP(MessageProcessor)
{
    OS_EVENT event;
    INT8U error;

    void setup()
    {
        error = -1;
        OSSemPend_Fake_Reset();
        input = "";
        OSSemPend_fake.custom_action = fake_populates_InputQueue;
        LightScheduler_Create();
        MessageProcessor_Create();
    }

    void teardown()
    {
    }
};
```
I’ve left out some of the application details, but you should have a good view of this really useful RTOS test-double function.
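To fill in the shape of those details, here is a self-contained, runnable sketch of the whole trick, using stand-in types and a toy input queue. Only OSSemPend_fake, custom_action, InputQueue_Put, input, and fake_populates_InputQueue come from the article; the rest is my guess at what the application code might look like, not the real production code.

```c
#include <string.h>

/* Stand-in types replacing ucos_ii.h for this self-contained sketch. */
typedef unsigned char INT8U;
typedef unsigned int  INT32U;
typedef struct OS_EVENT { int dummy; } OS_EVENT;
typedef void (*OSSemPend_FPointer)(OS_EVENT *, INT32U, INT8U *);

/* The test-double, reduced to the fields this sketch needs. */
struct { OSSemPend_FPointer custom_action; int call_count; } OSSemPend_fake;

void OSSemPend(OS_EVENT *event, INT32U timeout, INT8U *error)
{
    *error = 0;
    OSSemPend_fake.call_count++;
    if (OSSemPend_fake.custom_action)            /* simulate concurrency */
        OSSemPend_fake.custom_action(event, timeout, error);
}

/* A toy input queue standing in for the application's queue. */
static char queue[64];
static int head, tail;
void InputQueue_Put(char c) { queue[tail++] = c; }
static int InputQueue_Get(void) { return head < tail ? queue[head++] : -1; }

static OS_EVENT messageAvailable;  /* created with OSSemCreate() in production */
char lastMessage[64];

/* A guess at the code under test: block, then drain and handle a message. */
void MessageProcessor_ProcessNextMessage(void)
{
    INT8U error;
    int c, i = 0;
    OSSemPend(&messageAvailable, 0, &error);  /* blocks on target */
    while ((c = InputQueue_Get()) != -1)      /* drain the message */
        lastMessage[i++] = (char)c;
    lastMessage[i] = '\0';
    /* ...interpret lastMessage and schedule the light... */
}

/* The fake's custom action fills the queue while OSSemPend() "blocks". */
const char *input;
void fake_populates_InputQueue(OS_EVENT *pevent, INT32U timeout, INT8U *perr)
{
    (void)pevent; (void)timeout; (void)perr;
    while (*input)
        InputQueue_Put(*input++);
}
```

By the time MessageProcessor_ProcessNextMessage() reads the queue, the custom action has already put a complete message there, so the code under test is tricked into thinking its asynchronous event happened.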
It is rather tedious to build, though, and I need to do this for each RTOS function. Should I show you an easier way?
Hi James,
I think your implementation may be defined as a behavioral pattern. Personally, I don’t much like having behavioral patterns, because when developing using BDD the behavior emerges while writing the code.
Semaphore pending is probably at a lower level than the application behavior, so it fits better in a unit test written following BDD, but this is just a general impression.
In practice, when I test situations like this, I use a mock to force normal and abnormal situations, and I provide a simple log mechanism so the production code can report what really happens during normal execution. I raise higher-value log messages when the logger finds a critical or unexpected situation (maybe too many tasks waiting on the same semaphore, and so on). But this situation is so variable that I haven’t established a pattern for testing it.
Hi James,
As this post is about TDD for RTOS, I have one general RTOS-related question that has always been on my mind, but especially when I read Chapter 11 (SOLID Designs) from your book.
Specifically, out of all the S-O-L-I-D design rules, the “O” rule (Open-Closed Principle) seems critically important for TDD, as well as the iterative and incremental development in general. If the system we design is “open for extension but closed for modification”, we can keep extending it without much re-work and re-testing of the previously developed and tested code. On the other hand, if the design requires constant re-visiting of what’s already been done and tested, we have to re-do both the code and the tests and essentially the whole iterative, TDD-based approach collapses. Please note that I don’t even mean here extensibility for the future versions of the system. I mean small, incremental extensions that we keep piling up every day to build the system in the first place.
So, here is my problem: RTOS-based designs are generally lousy when it comes to the Open-Closed Principle. The fundamental reason is that RTOS-based designs use blocking for everything, from waiting on a semaphore to timed delays. Blocked tasks are unresponsive for the duration of the blocking, and the whole intervening code is designed to handle the one event on which the task was waiting. For example, if a task blocks and waits for a button press, the code that follows the blocking call handles the button. So now it is hard to add a new event to this task, such as reception of a byte from a UART, both because of the timing (waiting on user input is too long and unpredictable) and because of the whole intervening code structure. In practice, people keep adding new tasks that can wait and block on new events, but this often violates the “S” rule (Single Responsibility Principle). Often the added tasks have the same responsibility as the old tasks and a high degree of coupling with them. This coupling requires sharing resources (a nightmare in TDD) and even more blocking with mutexes, etc.
Compare this with the event-driven approach, in which the system processes events quickly without ever blocking. Extending such systems with new events is trivial and typically does not require re-doing existing event handlers. Therefore such designs realize the Open-Closed Principle very naturally. You can also much more easily achieve the Single Responsibility Principle, because you can easily group related events in one cohesive design unit. This design unit (an active object) becomes also natural unit for TDD.
So, it seems to me that TDD should naturally favor event-driven approaches, such as active objects (actors), over traditional blocking RTOS.
I’m really curious about your thoughts about this, as it seems to me quite fundamental to the success of TDD. I’m looking forward to an interesting discussion.
I am not sure what you mean by saying I am describing a behavioral pattern. Maybe you mean an interaction test.
I am illustrating a real development problem I have recently seen in a client’s production code. This is different from a mock. In these three articles I am showing how to unit test code that is waiting on some asynchronous thing to complete. There are many ways to unit test; I like this one as a way to trick the code under test into thinking that its async event has completed.
Hi Miro
The constant revisiting might be a sign of a design problem. It’s hard to tell without example code.
The Open-Closed Principle is really the outcome of duplication reduction. When we find duplicated conditional logic, maybe in the form of switch/case statements or if/else chains, OCP is the medicine. Blocking everywhere is a design smell, and dangerous, especially if locking and blocking are intermixed, but it is not necessarily a call for OCP.
I like the event driven approach. Parts of the system are concerned with the detection of events, other parts are concerned with the reaction to the events. It’s a matter of separation of concerns, single responsibility, and duplication removal.
As you suggest, TDD is very friendly to the event-driven approach. Using something like an active object can greatly simplify designs. You are suggesting careful attention to separation of concerns, where event detection is separated from event reactions. A door sensor should not know to sound an alarm and call the police; it should just report the door-open event so the system can respond with one or more reactions.
I am assuming you are referring to the Doug Schmidt form of active object. The concurrency mechanisms can be hidden inside a generic active object, giving a tested library module. Application-specific actions are dropped into the AO when their event is triggered. So now we can easily test the event detector, the active object, and the event reaction.
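The separation described above can be sketched in a few lines. This is a minimal, single-threaded illustration of the active-object idea: events are posted to a queue and dispatched one at a time to an application-specific handler. All names here are illustrative, not from the Schmidt pattern paper or any particular AO framework.

```c
/* A minimal active-object sketch: detectors post events, the AO's own
   thread dispatches them to the application-specific reaction. */

typedef struct { int signal; int payload; } Event;
typedef void (*EventHandler)(const Event *);

#define AO_QUEUE_SIZE 8

typedef struct
{
    Event queue[AO_QUEUE_SIZE];
    int head, tail;          /* ever-growing indices, used modulo the size */
    EventHandler handler;    /* the application-specific reaction */
} ActiveObject;

void AO_Init(ActiveObject *ao, EventHandler handler)
{
    ao->head = ao->tail = 0;
    ao->handler = handler;
}

int AO_Post(ActiveObject *ao, Event e)  /* called by event detectors */
{
    if (ao->tail - ao->head >= AO_QUEUE_SIZE)
        return 0;                       /* queue full */
    ao->queue[ao->tail++ % AO_QUEUE_SIZE] = e;
    return 1;
}

void AO_DispatchOne(ActiveObject *ao)   /* run from the AO's own thread */
{
    if (ao->head < ao->tail)
        ao->handler(&ao->queue[ao->head++ % AO_QUEUE_SIZE]);
}

/* Example: a door sensor posts DOOR_OPEN; the handler owns the reaction. */
enum { DOOR_OPEN = 1 };
int alarmsSounded;

void securityHandler(const Event *e)
{
    if (e->signal == DOOR_OPEN)
        alarmsSounded++;  /* stand-in for: sound alarm, call the police */
}
```

With this split, the detector (whoever calls AO_Post), the generic queueing, and the reaction (securityHandler) can each be unit tested in isolation.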
My point in this article is to show how you can test, off-target, code that interacts with concurrency mechanisms. You will need some. I share your concern that too much custom concurrency logic is usually sprinkled through embedded applications.
James