Sun, 02 Jan 2000 21:16:55 -0800
I'm just now responding to the issue of capages that was raised last June.
First let me restate the problem that capages were intended to solve, as I
understand it. Some applications want quick access to a large number of
capabilities. The supernode (a tree of nodes) is not very quick because the fanout
of the tree is small (32, or 16 in KeyKOS) and each level traversed requires a
kernel call.
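To make the cost concrete, here is a small sketch of how tree depth (and thus kernel calls per lookup) grows with a small fanout. The fanouts 32 and 16 come from the text above; the capability count of 2**20 is just an arbitrary example:

```python
def supernode_depth(num_caps, fanout):
    """Levels needed in a supernode tree to hold num_caps capabilities.

    Each level traversed costs one kernel call, so this is also the
    number of calls per capability lookup.
    """
    depth, capacity = 1, fanout
    while capacity < num_caps:
        capacity *= fanout
        depth += 1
    return depth

print(supernode_depth(2**20, 32))  # 4 kernel calls per lookup
print(supernode_depth(2**20, 16))  # 5 with KeyKOS's fanout of 16
```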
I believe we can solve this problem by supporting "big nodes". It is most
convenient for the kernel if the representation of a big node is the same size as
the data page. But, all the user knows is that the big node holds N capabilities (N
= page size/capability size). The user addresses capabilities in a big node as in a
small node, by consecutive integers.
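A minimal sketch of the big-node interface as described: N slots addressed by consecutive integers, with N determined by the page and capability sizes. The concrete sizes below (4 KiB pages, 16-byte capabilities) are assumptions for illustration; the text only specifies N = page size / capability size:

```python
# Assumed sizes, not taken from the text:
PAGE_SIZE = 4096   # big node representation is the same size as a data page
CAP_SIZE = 16      # hypothetical size of one capability slot
N = PAGE_SIZE // CAP_SIZE  # capabilities per big node

class BigNode:
    """A big node: N capability slots, indexed by consecutive integers 0..N-1."""
    def __init__(self):
        self.slots = [None] * N

    def fetch(self, i):
        return self.slots[i]

    def store(self, i, cap):
        self.slots[i] = cap

node = BigNode()
node.store(255, "some-capability")
print(N)                 # 256
print(node.fetch(255))   # some-capability
```

Note that a client only needs N to address slots; it never handles the capability representation itself.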
A big node looks a lot like a capage, except it cannot be used in the memory space.
If we avoid mixing capabilities and data in the same space, we can avoid the
necessity of the user knowing how big the capability representation is.
The size of the capability representation isn't entirely hidden, because it affects
the number of capabilities in a big node. But this is a less serious issue, because
when data and capabilities aren't mixed, it is possible to build an object (the
"big supernode", or tree of big nodes) that hides from its clients both the size of
a capability and the number of capabilities in a big node. The situation is similar
to that with data pages, where the user of a segment isn't normally concerned with
the page size.
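One way to see how the big supernode hides the fanout from clients: it translates a flat capability index into one slot index per tree level, entirely inside the object. The fanout of 256 below is an assumed internal detail (it would follow from the page and capability sizes), which clients never observe:

```python
FANOUT = 256  # assumed slots per big node; internal to the big supernode

def path_for(index, depth):
    """Split a flat capability index into one slot index per tree level,
    most significant level first. Clients only ever supply `index`."""
    digits = []
    for _ in range(depth):
        digits.append(index % FANOUT)
        index //= FANOUT
    return list(reversed(digits))

print(path_for(0x12345, 3))  # [1, 35, 69]
```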
Let's see if big supernodes can satisfy the performance requirement. Without having
looked at the code, I'd guess that a load or store from a capage in a memory space
involves a page fault, in which the kernel traverses the memory tree to a leaf.
Finding a capage, it performs the load or store.
I will assume that the entry and exit to the kernel for a page fault is not
significantly faster or slower than that for a key invocation.
Traversing the entire tree in the kernel is faster than using a kernel call for
each level. We could specify an operation that traverses a tree and does a
capability load or store in one step. There are some architectural issues here, but
I think they are solvable.
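A hypothetical sketch of the proposed one-step operation: the kernel walks the tree of big nodes itself and performs the load at the leaf, rather than the client making one invocation per level. The function name and fanout are my own illustration, not a proposed interface:

```python
FANOUT = 256  # assumed slots per big node

def traverse_and_fetch(root, index, depth):
    """Walk `depth` levels of a tree of big nodes in a single step.

    Interior slots hold child nodes; the leaf slot holds the capability.
    This models one kernel entry doing the whole traversal, instead of
    one kernel call per level.
    """
    node = root
    for level in reversed(range(1, depth)):
        node = node[(index // FANOUT**level) % FANOUT]
    return node[index % FANOUT]

# A two-level example tree: root slots hold leaf big nodes.
root = [[f"cap{i * FANOUT + j}" for j in range(FANOUT)] for i in range(4)]
print(traverse_and_fetch(root, 300, 2))  # cap300
```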
There are extra operations to enter and exit the supernode process. These could be
eliminated by implementing the supernode as a library instead of a separate object.
This will work in all the same cases in which the capage works.
The traversal of the memory tree may have optimizations; the big supernode can have
all the same optimizations. The depth of the big supernode would actually be less,
in some cases, than that of the memory tree, because it would be using big nodes,
and the memory tree uses small nodes.
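The depth advantage can be checked with a little arithmetic. The small-node fanout of 32 is from the text; the big-node fanout of 256 is an assumption (it would follow from 4 KiB pages and 16-byte capabilities):

```python
def tree_depth(num_caps, fanout):
    """Levels needed in a tree of the given fanout to hold num_caps slots."""
    depth, capacity = 1, fanout
    while capacity < num_caps:
        capacity *= fanout
        depth += 1
    return depth

# To address 2**24 capabilities:
print(tree_depth(2**24, 32))   # 5 levels with small nodes
print(tree_depth(2**24, 256))  # 3 levels with (assumed) big nodes
```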
So, I think big nodes can meet the performance requirement, without requiring all
clients to take into account the size of the representation of a capability, as the
capage approach does.