[cap-talk] kernel object knowledge
capability at webstart.com
Sun Jun 10 18:57:08 EDT 2007
At 07:52 PM 6/9/2007, Peter Amstutz wrote:
>It certainly seems that capabilities and microkernel/message passing
>operating systems should go hand in hand. However, I believe the
>traditional criticism of microkernels is not that they don't work, but
>it is hard to make them work efficiently.
We had a fairly lengthy discussion about the main performance
issue with microkernels (domain exchanges) in January. It's a
little difficult for me to find where that discussion really
began, but I believe it got rolling as the thread
"Object-capability vs. monolithic performance",
which could as well have been titled "microkernel vs. monolithic
performance".
There is no doubt that if one compares the performance of a
monolithic kernel system call for some function (e.g. a
"read" on a file) with a microkernel implementation of the
same function (e.g. going through several domain changes
of non-kernel code) then there will be additional overhead
in the microkernel implementation. In an apples-to-apples
comparison I don't see how the microkernel implementation
can compete on performance. There is a fundamental trade-off
between performance and integrity/modularity/reliability
if one achieves the additional integrity/modularity/reliability
by using additional domain changes.
However, as I noted in that thread, one can cheat:
Jonathan S. Shapiro shap at eros-os.com
Thu Jan 4 08:29:19 CST 2007
>Ah. So you cheated. :-)
To me such "cheating" seems perfectly reasonable.
By doing so one can keep the interfaces the same,
allow for the additional integrity/modularity/reliability
if so selected, but still achieve comparable performance
in a situation where the additional reliability is
likely not needed (if you skip the example, please
see the note about language-enforced object interfaces
in the "kernel" in the last paragraph).
Here I'll describe in detail how we achieved that
additional performance in our NLTSS implementation,
using the example of a read operation
on a file object. From the viewpoint of the program
executing the "read" operation on a file object,
three messaging operations were required:
A. The send of the read request on the file capability
(nominally to the file server)
B. The receive of the reply from the read request, and
C. A receive for the data read.
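The triad above can be sketched as follows. This is a minimal simulation using in-process queues and a thread, not the actual NLTSS interface; all names here are illustrative.

```python
# Sketch of the three client-side messaging operations for one "read"
# on a file capability; the "file server" runs as a separate thread.
import threading
from queue import Queue

def file_server(inbox, outbox):
    """Toy file server: answers one read request with a reply and data."""
    request = inbox.get()
    if request == "read":
        outbox.put("OK")              # the control reply
        outbox.put(b"file contents")  # the data read

def client_read(server_inbox, client_inbox):
    server_inbox.put("read")          # A. send the read request
    reply = client_inbox.get()        # B. receive the reply
    data = client_inbox.get()         # C. receive the data read
    return reply, data

server_inbox, client_inbox = Queue(), Queue()
server = threading.Thread(target=file_server,
                          args=(server_inbox, client_inbox))
server.start()
reply, data = client_read(server_inbox, client_inbox)
server.join()
```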
This triad isn't so bad in itself, in that the one NLTSS
system call allowed any number of sends and receives
to be linked together and submitted as one system
call. In some cases this actually made such a system
call architecture more efficient (because multiple
system calls - e.g. from multiple threads - could be
bundled together by a threading library).
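The bundling idea might be sketched like this, where `submit` stands in for the single system call; the name and descriptor format are my illustration, not the real NLTSS API.

```python
# Hypothetical single "system call" carrying a bundle of linked
# send/receive descriptors and returning one result per descriptor.
def submit(operations):
    return [("sent" if op == "send" else "received", payload)
            for op, payload in operations]

# The read triad (A, B, C) plus another thread's pending send,
# batched into one bundled call by a threading library:
bundle = [
    ("send", "read request"),   # A (thread 1)
    ("receive", "reply"),       # B (thread 1)
    ("receive", "data"),        # C (thread 1)
    ("send", "write request"),  # an operation from another thread
]
results = submit(bundle)
```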
The potential difficulty of the domain transitions
needed to process a request still remains. In the case of the
NLTSS system the file server was initially a user
process. To process such a request the system would then:
1. Transit to the kernel to process the initial
request that was transmitted to the file server,
2. Transit to the file server to process the request,
3. Transit to the kernel (disk driver) to do the disk I/O,
and then reverse:
4. Transit back to the file server to send the
control result and the data back to the requesting process,
5. Transit to the kernel to process the above
messages, and finally,
6. Transit back to the requesting process to
receive the results.
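A back-of-the-envelope tally of the path above, against the two crossings (user to kernel and back) of a monolithic system call, shows where the 3x figure comes from:

```python
# The six domain transitions of the microkernel read path versus the
# two crossings of a monolithic system call.
microkernel_path = [
    "client -> kernel",       # 1. deliver the request
    "kernel -> file server",  # 2. process the request
    "file server -> kernel",  # 3. disk driver does the I/O
    "kernel -> file server",  # 4. send results back (reversing)
    "file server -> kernel",  # 5. deliver the reply messages
    "kernel -> client",       # 6. receive the results
]
monolithic_path = [
    "client -> kernel",       # system call entry
    "kernel -> client",       # system call return
]
ratio = len(microkernel_path) / len(monolithic_path)  # 3.0
```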
The above is 3x the number of domain transitions
needed in a more traditional monolithic
kernel architecture. In our case (on a Cray
architecture) the cost was almost a 3x loss in performance.
What we did was to move the file server into
the kernel of the system. The code stayed the
same (still processing the same message calls
but as threads in the kernel), but the performance
improved nearly 3x. Of course, the protection in
the system was weakened: if the file server
referenced the wrong part of memory it could
crash the whole system, versus 'just' crashing the
file server in the pure microkernel version.
In practice, though, if the file server segment faulted the
system would not run long even if the file server
was in a separate process.
The move of some of the microkernel processes into
the system kernel as threads worked well for us.
We still had our microkernel architecture in place.
Our system was still a network system (e.g. file
servers on other processors were still available
through the same message passing), and the reliability
and performance were as good as those of the competing
monolithic kernel system. We could easily
put back domain boundaries any place that the
exchange overhead was justified.
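The "move it into the kernel or keep it behind a boundary" choice can be sketched as follows. The server code and its message interface stay the same; only the placement changes. The flag and names are my illustration, with a real thread standing in for a separate domain.

```python
# Same server code, two placements: a direct call (in-kernel thread,
# no extra domain exchanges) or a separate "domain" (a real thread).
import threading
from queue import Queue

def file_server_loop(inbox, outbox):
    """The file-server code itself never changes."""
    request = inbox.get()
    outbox.put(f"data for {request}")

def read_file(request, in_kernel):
    inbox, outbox = Queue(), Queue()
    inbox.put(request)
    if in_kernel:
        file_server_loop(inbox, outbox)  # direct call, no exchanges
    else:
        t = threading.Thread(target=file_server_loop,
                             args=(inbox, outbox))
        t.start()                        # cross a domain boundary
        t.join()
    return outbox.get()

fast = read_file("/etc/motd", in_kernel=True)   # tuned for performance
safe = read_file("/etc/motd", in_kernel=False)  # tuned for isolation
```

Either way the caller sees the identical interface, which is what makes the switch cheap to throw.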
As I recall Alan Karp shared a similar experience.
To me this seems like the perfect situation for
"cheating". Why beat your head against the cost
of the domain changes (which may be largely
determined by the system hardware - as with our
Cray hardware) when you can build a system that
can be tuned either for performance or reliability
by essentially just throwing a switch? Those
who want the performance of a monolithic kernel
can have it (perhaps these days with more ocap
language 'domain' changes inside the "kernel"),
with a design flexible enough to adapt
for increased reliability/integrity by deploying
more object/capability domain changes.