Inside the real time kernel

  1. The operating system

Program flow

The program flow of the system follows this scheme: the "main" function of the program calls, at least once, the kernel function that starts a task. It then calls the scheduler (the function "schedule"). The scheduler in turn calls the function "dispatch" (provided by the programmer) with the number of the task to start and its "virtual" position (see below), which at this point is 0. "dispatch" then calls the appropriate task, which is implemented as a function (provided by the programmer, of course). This function executes some code and, eventually, has to make a system call that is intercepted by the scheduler (remember that the tasks have to call the scheduler explicitly: full portability implies a cooperative design). The scheduler calls a kernel-internal function to implement the system call and then determines which active task has the highest priority, and so has to be (re)started.
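This startup sequence can be sketched in compilable form. The stub bodies below only record that they were called; in the real kernel, startTask() makes a task runnable and schedule() loops forever dispatching tasks (the names come from the listings that follow, but the exact prototypes live in kernel.h):

```c
#include <assert.h>

static unsigned char started = 0;
static unsigned char entered = 0;

/* Stubs standing in for the real kernel entry points: */
static void startTask(unsigned char task) { started = task; }
static void schedule(void)                { entered = 1; }

/* What the application's "main" does before handing over control: */
void kernelMain(void)
{
    startTask(1);   /* make at least one task runnable   */
    schedule();     /* enter the scheduler; in the real
                       kernel this call never returns    */
}
```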


At this point the reader might wonder how the scheduler is able to make a context switch, if it has to, since it has no access to the system stack.


The solution is extremely simple and elegant: the tasks make the system "calls" by means of a C "return" instruction. The "return" instruction cleans the stack, making a context switch (by the scheduler) trivial. Strictly speaking, the task ends itself; from a more abstract point of view, it puts itself in an idle mode. Indeed, while returning, the task passes the scheduler the address of an exchange structure. In this structure, the task puts its "virtual" position and a code identifying the system call it wants to perform. The scheduler retrieves this precious information, allowing it to perform the system call and, later, to restart the task (via a "call" instruction), passing the task its current "virtual" position. The task is then able to resume its execution at the point it left when it made its last "system call".
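A plausible layout for the exchange structure is sketched below. The real field set is defined by the kernel (see kernel.h); this sketch only shows the two pieces of information described above, plus a task number used by calls such as START_TASK and EXIT_TASK, and the system-call code 0 merely stands in for RESCHEDULE:

```c
#include <assert.h>

typedef struct {
    unsigned char pos;          /* "virtual" position to resume at      */
    unsigned char sysCallCode;  /* which system call is requested       */
    unsigned char task;         /* target task, for START/EXIT requests */
} taskParmStruc;

taskParmStruc demoParm;

/* How a task "returns" this information to the scheduler: */
taskParmStruc *demoTask(unsigned char pos)
{
    (void)pos;                 /* a real task would switch on pos */
    demoParm.pos = 1;          /* resume at virtual position 1    */
    demoParm.sysCallCode = 0;  /* 0 standing in for RESCHEDULE    */
    return &demoParm;          /* the "system call" itself        */
}
```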

The tasks

Here is an example of a very simple task. When it is first started, it receives (from the scheduler) a virtual position (in “pos”). It executes some code and then “calls” the system via “SCHEDULE(taskParm, 1)”, which is a macro hiding a “return” instruction. “taskParm” is the name of the exchange structure, which will contain the code identifying the system call and the current virtual position of the task, “1” in this case. When the task is restarted, it resumes at the virtual position “1”.


taskParmStruc taskParm;

taskParmStruc *task(unsigned char pos)
{
    switch(pos)
    {
    case 0:
        ...
        SCHEDULE(taskParm, 1);

    case 1:
        ...
        SCHEDULE(taskParm, 2);

    ...

    case n:
        EXIT_TASK(taskParm, 0, 6);
        /* 0 is the number of the task */
        /* 6 is a virtual dummy restart position */
    }
}


Here you can see what the SCHEDULE(p, n) macro actually is:


#define SCHEDULE(p, n)          \
    p.pos = n;                  \
    p.sysCallCode = RESCHEDULE; \
    return &p
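The other calls can be sketched along the same lines. The actual macro bodies and system-call codes are defined in kernel.h, so treat the names and code values below as assumptions:

```c
#include <assert.h>

enum { RESCHEDULE, TASK_START, TASK_EXIT };  /* assumed codes */

typedef struct {
    unsigned char pos;
    unsigned char sysCallCode;
    unsigned char task;
} taskParmStruc;

/* Same pattern as SCHEDULE, with the target task added: */
#define START_TASK(p, t, n)     \
    p.pos = n;                  \
    p.task = t;                 \
    p.sysCallCode = TASK_START; \
    return &p

static taskParmStruc taskParm;

/* A task asking the kernel to start task 2, resuming itself at 1: */
taskParmStruc *starter(void)
{
    START_TASK(taskParm, 2, 1);
}
```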


The dispatch function


The dispatch function is supplied by the programmer. It is a very simple function whose job is to link a task number to the corresponding C function. Here is an example of this function, assuming two tasks represented by two C functions, task1() and task2().


taskParmStruc *dispatch(unsigned char task, unsigned char pos)
{
    switch(task)
    {
    case 1:
        return task1(pos);
    case 2:
        return task2(pos);
    default:
        /* reachable only if kernel bug... */
        return NULL;
    }
}


The scheduler


The scheduler is implemented as a C function built around an infinite loop. It determines which active task has the highest priority, then calls the dispatch function, which links the task number to the corresponding C function. As explained above, the scheduler passes the task number and the "virtual" position of the task to dispatch, and dispatch calls the right function, passing it its "virtual" position. When the task returns (performing a system call), the scheduler checks the system call code and calls the requested function. It then restarts the loop.


Here is the algorithm for the scheduler:


void schedule()
{
    /* number of the current task: */
    unsigned char task;
    /* system call code the current task wants to perform: */
    unsigned char sysCallCode;
    /* pointer to the exchange structure: */
    taskParmStruc *taskParmPtr;

    for(;;) {

        /* finds which active task has the highest priority */
        if ((task = higherTask(runningTasks)) == MAX_TASK) continue;

        /* (re)starts the task: */
        taskParmPtr = dispatch(task, taskBuf[task].pos);

        taskBuf[task].pos = taskParmPtr->pos;
        sysCallCode = taskParmPtr->sysCallCode;

        switch(sysCallCode) {
        case RESCHEDULE:
            break;

        case TASK_START:
            di();
            startTask(taskParmPtr->task);
            ei();
            break;

        ... /* code to handle the other possible system calls */
        }
    }
}
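The selection function higherTask() could look like the sketch below. It is a hypothetical implementation: runningTasks is assumed to be a bitmap with bit t set when task t is runnable, and lower task numbers are assumed to have higher priority. The real implementation may differ; MAX_TASK is returned when no task is runnable, matching the test in the scheduler loop above:

```c
#include <assert.h>

#define MAX_TASK 8  /* the 8-task configuration */

unsigned char higherTask(unsigned char runningTasks)
{
    unsigned char t;
    /* scan from the highest priority (lowest number) downward: */
    for (t = 0; t < MAX_TASK; t++)
        if (runningTasks & (unsigned char)(1u << t))
            return t;
    return MAX_TASK;  /* nothing runnable */
}
```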


The following functions are provided by the scheduler:


SCHEDULE(p, n): tells the scheduler to reschedule. “n” is the virtual position at which the task wants to resume its execution. “p” is the name of the exchange structure.


EXIT_TASK(p, t, n): tells the kernel to kill task “t”. “n” and “p” have the same meaning as above. If “t” is the number of the task performing the call, “n” is ignored.


START_TASK(p, t, n): tells the kernel to start task “t”. “n” and “p” have the same meaning as above.


Semaphores


Four functions are provided:


WAIT(p, s, n) blocks the task if the value of the tested semaphore “s” is zero; otherwise, this value is decremented. “n” and “p” have the same meaning as in SCHEDULE(p, n).


SIGNAL(p, s, n) restarts the task with the highest priority which is waiting on the tested semaphore “s”. If there is no such task, the semaphore value is incremented. “n” and “p” have the same meaning as in SCHEDULE(p, n).


GET_SEM(p, s, n) returns the value of the specified semaphore “s”. “n” and “p” have the same meaning as in SCHEDULE(p, n).


SET_SEM(p, s, v, n) sets the value of the specified semaphore “s” to “v”. “n” and “p” have the same meaning as in SCHEDULE(p, n).
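Internally, WAIT and SIGNAL reduce to the classic counting-semaphore logic. The sketch below is an assumption about that logic, not the kernel's actual code: the field names, the waiter bitmap, and the rule that the lowest-numbered waiter has the highest priority are all guesses consistent with the descriptions above:

```c
#include <assert.h>

typedef struct {
    unsigned char value;    /* semaphore counter       */
    unsigned char waiters;  /* bitmap of blocked tasks */
} sem;

/* Returns 1 if the caller may proceed, 0 if it must block: */
unsigned char semWait(sem *s, unsigned char task)
{
    if (s->value > 0) { s->value--; return 1; }
    s->waiters |= (unsigned char)(1u << task);
    return 0;
}

/* Wakes the highest-priority waiter, or increments the counter: */
void semSignal(sem *s, unsigned char *runningTasks)
{
    unsigned char t;
    for (t = 0; t < 8; t++)
        if (s->waiters & (1u << t)) {
            s->waiters &= (unsigned char)~(1u << t);
            *runningTasks |= (unsigned char)(1u << t);
            return;
        }
    s->value++;
}
```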


Interrupts


Two important points have to be remembered. First, the system calls made from the tasks are protected against interrupts in the kernel itself; the tasks therefore do not have to disable/enable interrupts around system calls. Second, it is possible to perform some system calls from interrupt routines. Of course, the method differs completely from the one used in tasks: system calls are actually performed by a “return” instruction in the tasks, while in interrupt routines a simple C call to the appropriate function is used instead. The system calls therefore come in two versions: the macros have to be used in tasks, and the C functions have to be used in interrupt routines and in normal C functions (like main(), for example). See the file “kernel.h”, which is provided with the package, for details.
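For instance, a timer interrupt routine might wake a waiting task through the function form of SIGNAL. The function name signalSem() below is an assumption (the real prototypes are declared in kernel.h), and the stub body only records the call so the sketch is self-contained:

```c
#include <assert.h>

static unsigned char lastSignalled = 0xFF;

/* Stub for the function form of SIGNAL (name assumed): */
void signalSem(unsigned char semNumber) { lastSignalled = semNumber; }

/* An interrupt routine uses a plain C call, never the return-based
   macro used inside tasks: */
void timerIsr(void)
{
    /* ... acknowledge the timer interrupt here ... */
    signalSem(0);   /* wake the task blocked on semaphore 0 */
}
```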


Priority inversion


An important problem in real time systems is priority inversion. It occurs when a task H with a high priority has to wait for a task L with a low priority: L has to perform a signal operation on a semaphore to free H, which is blocked in a wait operation on that semaphore. If a task M with a priority p(M) such that p(L) < p(M) < p(H) is then (re)started, we say that there is a priority inversion between M and H.


This problem can be avoided by a modification of the dispatch function. Indeed, this function only makes the link between a task number and the task code. It is therefore possible to rewrite this function such that the priorities will be dynamically modified. Of course, extreme care should be taken while doing this.
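One hypothetical way to do this is a remap table inside dispatch: the priority slot chosen by the scheduler is translated into an actual task number, and swapping two entries effectively lends one task the priority of another. This is an illustration of the idea only, not code from the kernel:

```c
#include <assert.h>

static unsigned char prioMap[3] = {0, 1, 2};  /* slot -> task number */

/* dispatch would call mappedTask(task) before its switch: */
unsigned char mappedTask(unsigned char slot)
{
    return prioMap[slot];
}

/* e.g. lend L the priority of H while H waits on L's semaphore: */
void swapPriority(unsigned char a, unsigned char b)
{
    unsigned char tmp = prioMap[a];
    prioMap[a] = prioMap[b];
    prioMap[b] = tmp;
}
```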

  1. CONCLUSION

The kernel we just presented is extremely modular: the scheduler and the semaphore handling functions are put in two different modules. It is also very small: for a PIC, using the HITEC-C compiler, its size is only roughly 500 bytes. If the semaphores are not needed, it can be reduced even further. To tune the amount of RAM needed, you can choose between a maximum of eight or sixty-four tasks. In the first case, each semaphore needs one byte; the kernel itself needs eight bytes to store the current positions of the tasks plus one byte to store the states of the tasks (executable or not). In the second case, each semaphore needs nine bytes, and the kernel needs sixty-four bytes to store the positions of the tasks and nine bytes for their states.
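The eight-task bookkeeping described above can be written out as follows (the variable names are assumptions): eight position bytes plus one state byte, nine bytes of kernel RAM in total, plus one byte per semaphore:

```c
#include <assert.h>

static unsigned char taskPos[8];  /* current virtual position per task */
static unsigned char taskStates;  /* one "executable" bit per task     */
static unsigned char semValue;    /* one counting semaphore            */
```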


Complex features like FIFO handling functions, timing functions, etc. have not been implemented yet. It is not easy to keep the system modular while adding such functions. For now, the programmer is on her own to implement those features.


In the future, more functions will be added, provided they do not alter the modularity and portability of the kernel. Another avenue to explore is how to help the programmer verify that her application fulfills the specifications.