[cap-talk] A Taxonomy of Current Object-Cap Systems

Mark Seaborn mrs at mythic-beasts.com
Sat Mar 7 13:37:42 EST 2009

Toby Murray <toby.murray at comlab.ox.ac.uk> wrote:

> Suppose we have a single process running in a plash sandbox. That
> process has its own object-capability server that it communicates with
> to do anything in the system (open files, fork other processes etc.)
> 1. If it forks a new process, does the new one share the same server?


> 2. Do Plash servers ever communicate with each other?

Yes, they can.  If you use pola-run inside a sandbox, it will
create a new (sandboxed) server process to serve a process running in
a new sandbox (i.e. one with a newly-allocated UID).

So you might have three processes:

 A) server process 1 (instance of pola-run) - unsandboxed, normal user's UID
 B) server process 2 (instance of pola-run) - sandboxed, UID X
 C) client process, sandboxed, UID Y

Plash's protocol is a point-to-point protocol and connections must be
set up explicitly.  Suppose the processes have connections set up
between them as follows:

    A
   / \
  B   C

If A invokes (an object defined by) B, passing a reference to (an
object defined by) C, it does not automatically create a connection
between B and C.  C gets A's proxy object, which will forward messages
to B.

An alternative topology is a chain:

   A
   |
   B
   |
   C
This is not so good because if C creates further nested sandboxes, it
can result in a longer chain of objects forwarded across connections.
The first topology, where A acts as a hub, avoids that problem.

In the example, B is the creator of C, so it is up to B how C's
connection is set up.  B can either use A's make_connection object (if
A has provided one), or it can use its own make_connection object.

> In this case, the server is akin to an OS kernel and the chroot'd
> processes that communicate with it akin to userspace processes, where
> the system-call interface is a socket rather than an IRQ.
> I'd then call Plash a virtual object-capability operating system, where
> an OS instance comprises a server and the sandboxed processes it
> manages.
> Is that fair?

Yes, that's fair.  But one might want to consider the protocol
separately from the OS instances that use it.  You could imagine two
machines communicating across a private wire, sharing objects using
the protocol.  But maybe this doesn't meet your criterion of being an
actually existing system.

> > > Recursive Reentrancy - whether the system automatically allows objects
> > > to be recursively invoked, e.g. E does. EROS does not.
> > 
> > Plash: yes.
> Does this occur when I fork+exec a program binary P and it then
> recursively execs itself so that both the original instance and the
> recursively exec'd instance share the same namespace? If so, I agree
> this is totally recursive reentrancy.
> Plash would presumably also exhibit concurrent reentrancy. Suppose two
> processes each have a program binary P in their namespace and each fork
> +exec P, using the same namespace for P in both cases. Then both
> instances of P share the same namespace and we have concurrent
> reentrancy.
> Does that sound fair?

I wasn't thinking about Unix executables.  exec'ing an executable is
not usually an object invocation from the point of view of the Plash
protocol.  (The exception is that there *is* a special hook for
turning exec into an object invocation, but this facility has not been
developed very far.)  The executable is just loaded into memory.  In
this case executables are not protected objects because they don't
have private state.

> > What do you mean by async send without async receive or vice-versa?
> An async send without async receive might look like:
> asyncSend(cap,msg,replyCap);  // done async (e.g. in another thread)
> waitForRecv(replyCap,&msg);   // block waiting for the reply

I would call that selective receive rather than synchronous receive.

EROS/KeyKOS/CapROS have this via resume keys.  Coyotos generalises
this notion and has "closed waits" vs. "open waits" [1].

This concept is closely related to recursive re-entrancy: if a
process can wait for a reply to a specific invocation that it has
made, it can block recursive calls.

Unix has selective receive via passing specific FDs to select() or
poll().  But Plash cannot do selective receive on individual objects
exported across a connection.

This means that in Plash, when we do waitForRecv(replyCap, &msg),
we go into a recursive invocation of the poll() event loop which
listens on all active connections, and loops until we receive an
invocation on replyCap.  But if we receive an invocation of another
object, we handle that too.  This could result in further recursive
invocations of the event loop.

So far this has not been a problem because the objects implemented in
Plash are relatively simple.  If it became a problem, we could queue
up invocations to non-trivial objects, and process these invocations
only from the top-level event loop.  replyCap, however, is a trivial
object, because it just stores the reply message, to be picked up
later by waitForRecv().

How would your taxonomy classify Plash if different processes adopted
different approaches to recursive re-entrancy?

I believe Erlang has selective receive via pattern matching on the
message queue, but I don't think it has any special semantic
significance because the state of the message queue is not visible to
other processes.

To me, the term "asynchronous receive" suggests mechanisms like Unix
signals, where the execution of the process is interrupted, or
(assuming I understand it correctly) Coyotos's "notifications" system
[1], which doesn't interrupt the execution of a process but is more
akin to poll()ing file descriptors with a timeout of zero.

> An async receive without async send seems a bit strange and I'm not
> sure that it exists anywhere, although it can be easily modelled.

How would you classify pi calculus with its rendezvous operation?
It's not an object-capability language because the send and receive
facets are not separated, but I imagine that pi calculus could be
changed to separate them.


[1] http://www.coyotos.org/docs/ukernel/spec.html
