Jonathan S. Shapiro
Wed, 4 Feb 1998 12:19:15 -0500
In looking at a TCP/IP implementation a year or so ago, one thing that
became painfully apparent was the need for timers. Both TCP and IP
have functions that are called by periodic kernel timers. These
routines entirely preempt the normal behavior of the TCP/IP code.
If this were all, life would be simple, but there are brief windows
during which TCP/IP raises its priority to prevent the timer routines
from being called at bad moments.
The best mechanism we were able to come up with for doing such a thing
under EROS used three domains:
1. A timer domain that bounced between calling sleep() and setting
a fault code in the domain that was to be interrupted -- EROS
designates a fault code meaning "requested halt" (a sketch of this
loop follows the list).
2. A TCP/IP domain that embodied the protocol implementation.
3. A TCP/IP keeper, running in the same address space as the
TCP/IP domain, whose job was to run the timer code.
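To make the shape of this concrete, here is a minimal sketch of the
timer domain's loop in C. The names sleep_for(), domain_set_fault(),
and FC_REQUESTED_HALT are hypothetical stand-ins for the actual EROS
invocations, which I am not reproducing here.

    /* Hypothetical timer domain: wake once per tick, then post the
     * "requested halt" fault code into the TCP/IP domain. */
    #include <stdint.h>

    #define FC_REQUESTED_HALT 1    /* illustrative fault code  */
    #define TICK_MS           200  /* illustrative tick period */

    extern void sleep_for(uint32_t ms);                /* assumed */
    extern void domain_set_fault(void *dom, int code); /* assumed */

    void timer_domain(void *tcpip_dom)
    {
        for (;;) {
            sleep_for(TICK_MS);
            domain_set_fault(tcpip_dom, FC_REQUESTED_HALT);
        }
    }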
The priority raise/lower was accomplished by setting a mutex bit. If
the timer goes off and the keeper finds the mutex bit set, the keeper
in turn sets a bit saying "run the timer code when you are done".
Whenever the priority is lowered, the TCP/IP domain checks this bit
and runs the timer code voluntarily.
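The handshake itself is small enough to sketch in C, with C11 atomics
standing in for the shared mutex bit; all of the names are
illustrative. The sketch assumes, as in the design above, that the
keeper runs only while the TCP/IP domain is halted, so the check and
the defer cannot race with each other.

    #include <stdatomic.h>

    static atomic_int mutex_bit   = 0; /* set during a critical window */
    static atomic_int timer_defer = 0; /* "run timer code when done"   */

    extern void run_timer_code(void);  /* periodic TCP/IP work, assumed */

    /* Keeper side: invoked when the timer fault halts the domain. */
    void keeper_on_tick(void)
    {
        if (atomic_load(&mutex_bit))
            atomic_store(&timer_defer, 1); /* bad moment: defer */
        else
            run_timer_code();              /* safe: run it now  */
    }

    /* TCP/IP side: the priority raise/lower brackets a window. */
    void raise_priority(void) { atomic_store(&mutex_bit, 1); }

    void lower_priority(void)
    {
        atomic_store(&mutex_bit, 0);
        if (atomic_exchange(&timer_defer, 0))
            run_timer_code();  /* deferred work, run voluntarily */
    }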
I am beginning to wonder if this sort of out-of-band event
interruption facility is not a generally desirable thing to have.
Granted that the UNIX signal mechanism was ill-conceived, a stackable
event model with process-boundable depth seems quite a reasonable
thing to have.
There are a variety of possible designs. The two most obvious are
fixed-length event messages and preempting message receipt.
Why did KeyKOS eschew this so vehemently?
By preempting message receipt, I mean a mechanism by which a process
can say: if a message comes in while I am in the running state, here
is where to put it. Save the old state on the stack and place me in a
nested running state which is NOT preemptable until I say so.
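A rough sketch of what the per-process state for such a facility
might look like, in C; every name here is illustrative, since nothing
like this exists in EROS today.

    #include <stddef.h>

    #define MSG_MAX   64  /* fixed-length event message, illustrative */
    #define MAX_DEPTH 8   /* process-boundable nesting depth          */

    struct cpu_state { long regs[16]; }; /* saved registers, machine-specific */

    struct event_frame {
        struct cpu_state saved;        /* state pushed when the message arrived */
        char             msg[MSG_MAX]; /* "here is where to put it"             */
    };

    struct event_stack {
        struct event_frame frame[MAX_DEPTH];
        int depth;  /* depth > 0: in a nested, non-preemptable state */
    };

    /* Delivery (conceptually the kernel's job): push the old state,
     * deposit the message, and resume the process in its handler. */
    int deliver(struct event_stack *es, struct cpu_state *cur,
                const char *msg, size_t n)
    {
        if (es->depth >= MAX_DEPTH || n > MSG_MAX)
            return -1;  /* depth bound exceeded: refuse or queue */
        struct event_frame *f = &es->frame[es->depth++];
        f->saved = *cur;
        for (size_t i = 0; i < n; i++)
            f->msg[i] = msg[i];
        return 0;
    }

    /* The process leaves the nested state explicitly ("until I say so"). */
    void handler_done(struct event_stack *es, struct cpu_state *cur)
    {
        if (es->depth > 0)
            *cur = es->frame[--es->depth].saved; /* restore preempted state */
    }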