[cap-talk] rights communication - hope? - tome
jed at nersc.gov
Wed Sep 22 22:46:53 EDT 2004
At 06:15 PM 9/22/2004, David Wagner wrote:
>Jed Donnelley writes:
> >Heh. Well, I would have to say that I qualify as one of those "old-timers".
>My apologies! Well, I will say this: As someone who was not even born
>when much of this work was done, I'm very keen to learn from those who
>have had a first-hand view of this work. I'm grateful that you're willing
>to take the time to share your experiences and thoughts in this area.
I'm happy to share my thoughts of course as time allows.
> >Of course from the viewpoint of a programmer developing a service one
> >had to think a bit about the service being offered and what rights
> >were needed when a request was made. [...]
> > From the perspective of the application programmer nearly
> >all the capability management is hidden in libraries so the
> >API looks much like that of a traditional OS. [...]
> >Except for the basic design this is all quite trivial stuff. [...]
>Here are a few of the things I'm worried about.
>(Maybe "worried" is too strong; issues that make me think the
>costs of capability-style programming in the large may not be
>fully known yet.)
>1) Object capabilities are a way of making it easy to build trust
>perimeters. But you've still got to know where the perimeters should
>go, and more generally, how to decompose your application into separate
>pieces to maximize security. The problem of "decomposition for security"
>is one that is known to be highly non-trivial, no matter what enforcement
>mechanism you use for enforcing protection (the requirement that one
>piece should not be able to subvert other pieces). I'm not sure whether
>capabilities really make the question of how to decompose any easier;
>they just make it possible (or easier) to build your app out of smaller
>pieces, once you know how to divide it into pieces.
From my perspective the most important boundaries are natural and
easy to deal with. Specifically when any process makes a "system
call" you cross a boundary and send over any needed capabilities.
Whenever an application is started it is started in a process that is
given an initial set of capabilities (not run as a "user" with all the
inherited ambient authority of a person).
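The flow described here can be sketched in miniature. This is a hypothetical Python illustration, not code from any real capability OS; the `Capability`, `Process`, and `LogFile` names are all invented for the example:

```python
# Hypothetical sketch: launching a process with an explicit initial
# c-list instead of the ambient authority of a "user".

class Capability:
    """An unforgeable token granting specific rights to one object."""
    def __init__(self, obj, rights):
        self._obj = obj
        self._rights = frozenset(rights)

    def invoke(self, op, *args):
        if op not in self._rights:
            raise PermissionError(f"capability does not grant {op!r}")
        return getattr(self._obj, op)(*args)

class Process:
    """A protection domain: it can use only the capabilities it was given."""
    def __init__(self, initial_clist):
        self.clist = dict(initial_clist)   # name -> Capability

class LogFile:
    def __init__(self):
        self.lines = []
    def append(self, line):
        self.lines.append(line)
    def read(self):
        return list(self.lines)

# The launcher decides exactly which rights the new process starts with;
# here the process gets an append-only view of one log, nothing more.
log = LogFile()
p = Process({"log": Capability(log, {"append"})})

p.clist["log"].invoke("append", "hello")
# p.clist["log"].invoke("read") would raise PermissionError: no read right.
```

The point of the sketch is only that the initial rights are an explicit, enumerable set handed over at startup, not an inheritance from whoever ran the program.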
>2) Capability discipline places some restrictions on how you program.
>For instance, the Java libraries violate capability discipline in several
>places. If we re-wrote those interfaces to follow capability discipline,
>would those libraries become more inconvenient to use?
>3) Capability discipline requires passing every "privilege" as a separate
>argument. They should not be bundled, they should not be in global
>variables, and so on. Does this result in a proliferation of methods
>that take umpteen arguments? Does this result in a proliferation of
>MakerMakerMakerMakers, if you see what I mean?
I think perhaps we are talking about somewhat different things. I
know there has been discussion (even on this list) about what amounts
to object (capability) protection barriers within a
shared memory process. I am not an advocate of such mechanisms
to protect parts of a process from itself. Any tradeoffs here are much
like the traditional debates about language protected memory references
or "goto" sorts of structures (really showing my age there I guess ;-).
From my perspective any such internal protections fall into another
category. I will let others argue for or against such internal protections
if they wish.
What I am referring to when I argue that:
The capability model has been around since the late 1960s and has been
shown over the years to quite clearly lead to "better security for the
systems designed this way". I don't think there is even really much debate amongst
those who have studied the matter.
is the capability mechanism for communicating rights between processes.
I can define what I mean by a process more carefully if it becomes important,
but it is essentially what Unix or Windows refer to as a process. Namely
a shared memory space with registers, etc. and some rights state (perhaps
its userid or a c-list) that executes code and is the active entity in the system.
>4) I'd still like to see empirical experience with some large systems
>that I can relate to. For instance, what's it like to write a web
>browser in capability style?
Ha. Funny you speak of "large systems" and then refer to a Web browser.
I tend to think of "large systems" as the supercomputers that I have worked
on for most of my career.
To address your question - I don't think programming a Web browser
in a capability system would be much different than in an inherited
authority system. The main difference (I hope my colleagues agree
with me on this much) is that when the Web browser was initiated
it would have fewer rights. It would have some sort of right to
communicate on the network, a right to write to a graphic display
(through whatever sort of windowing interface - this is an area that
wasn't developed as it is today when I was doing such programming;
it would be fun to set up such a system), and the right to accept
keyboard/mouse input in some form. Beyond that it would likely
have to ask for any rights to things like files. Today a browser makes
system calls to access files directly. In my view it would do something
similar but a window would pop up that would ask the user to
authorize any file access.
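The "ask the user" flow above can be sketched as follows. This is a hypothetical Python illustration (capability circles later called this trusted chooser a "powerbox"); every name in it is invented for the example:

```python
# Sketch: the browser holds no file rights; a trusted chooser talks to
# the user and mints a capability only for the file the user picks.

class FileCapability:
    """Read access to exactly one file's contents."""
    def __init__(self, path, contents):
        self.path = path
        self._contents = contents
    def read(self):
        return self._contents

class Powerbox:
    """Trusted component that asks the user and mints file capabilities."""
    def __init__(self, filesystem, ask_user):
        self._fs = filesystem          # path -> contents
        self._ask_user = ask_user      # stands in for the pop-up dialog
    def request_file(self, reason):
        path = self._ask_user(reason)  # user picks a file, or refuses
        if path is None:
            return None
        return FileCapability(path, self._fs[path])

class Browser:
    """Starts with network/display/input rights only; files come via the powerbox."""
    def __init__(self, powerbox):
        self._powerbox = powerbox
    def upload(self):
        cap = self._powerbox.request_file("Page asks for a file to upload")
        return None if cap is None else cap.read()

fs = {"/home/alice/resume.txt": "resume text"}
pb = Powerbox(fs, ask_user=lambda reason: "/home/alice/resume.txt")
browser = Browser(pb)
result = browser.upload()   # the browser never saw any other path
```

Note the design choice: the act of choosing a file *is* the act of granting access, so no separate policy file or ACL needs to be consulted.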
I admit I'm being a bit rough here because I haven't programmed such
a system, but I think the basic outline is pretty clear. There are others
on this list who have worked extensively on the Web browser issue.
Perhaps they could share their thoughts. I think the most substantive
issue to deal with is how "helper" code gets invoked (e.g. something to
display PDF or whatever).
>How does this experience compare to the
>way we write a web browser today? How much more security do you get?
>How security-conscious does one need to be to program in the capability
>style? Many applications today are inherently complex, in the sense
>that even their requirements and specification are complex.
If by "capability style" you mean what I am referring to - namely protection
and explicit communication of rights across process boundaries - then
I don't think programming in the "capability style" is substantively different
than what you have always done. If you mean applying internal protections
within a process then I can't really answer your question. I would say
the cost/benefit tradeoffs in that area should be the subject of another
discussion. I certainly think it would be unwise to confuse the two
applications (inter process vs. intra process) of object (capability)
architecture. Perhaps we need some help with naming here?
>Here's my criteria: If someone claims to me that one of the above issues
>is going to be a killer problem for capabilities, I'd like to be able to
>point them to empirical evidence to the contrary. I'm pretty enthusiastic
>about capabilities, but I don't feel like I have a list that would satisfy
>myself in this respect yet, let alone a diehard capabilities skeptic.
From the direction of this discussion I might find myself on the side
of being a capability skeptic - at least if such a skeptic is concerned
about the cost/benefit tradeoff of using object/capabilities for module
protection within a shared memory process, e.g. as enforced with
language-level mechanisms. In any case please have that debate separately. What I would like
to hear is any discussion about costs of or alternatives to the capability
model for communicating rights between processes.
> >Even with some other bases for protecting the
> >"sandbox" (e.g. a separate user, chroot, or the like) a sandbox is only
> >a mechanism to create a new domain, not to effectively communicate
> >rights into and out of that domain. This is a property that is common to:
>Yes! The first step is to be able to provide isolation between domains.
>The next step is to build a mechanism for controlled sharing between those
>domains. The third step is some way (e.g., policies) to tell which forms
>of sharing are safe and should be allowed.
We agree there - though my eyes might start to glaze over when you get to your third step.
Regarding step 1, I maintain that the clearly most direct and simple place
to make such a domain separation is at what all OSs that I know of call
a process. Within a process, memory is shared, making it difficult to
put any meaningful separations in place. I argue don't bother, though I
know others may differ. Still, let us agree that processes should be
protected from one another and have their own rights. I believe all
systems pretty much follow this paradigm.
Now I argue (going on to step 2) that the right (best, shown most effective,
etc.) way to communicate rights between such domains (whether active domains
or on initialization) is to allow them to communicate rights to objects.
This also forms a natural way to make "system calls" take the form of
communication between mutually suspicious processes (as I claim they
should be as they may happen across a network).
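A toy sketch of such a "system call" as a message between mutually suspicious processes, with the needed capability carried inside the request, might look like this (hypothetical Python; all names invented):

```python
# Sketch: a request message carries exactly the capabilities the server
# needs, and nothing else. The server never sees the client's other files.

class Capability:
    """An unforgeable token granting specific rights to one object."""
    def __init__(self, obj, rights):
        self._obj, self._rights = obj, frozenset(rights)
    def invoke(self, op, *args):
        if op not in self._rights:
            raise PermissionError(op)
        return getattr(self._obj, op)(*args)

class Document:
    def __init__(self, text):
        self._text = text
    def read(self):
        return self._text
    def write(self, text):
        self._text = text

class PrintServer:
    """Mutually suspicious service: acts only on capabilities it receives."""
    def handle(self, message):
        doc_cap = message["doc"]            # right to read one document
        return f"printed: {doc_cap.invoke('read')}"

doc = Document("quarterly report")
server = PrintServer()
# The client grants read-only access to exactly one document with the call;
# the server could not modify the document even if it tried.
reply = server.handle({"doc": Capability(doc, {"read"})})
```

Because the rights travel with the message, the same pattern works whether the server is a local process or one across a network.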
>As you've correctly noticed, a lot of research has focused primarily on
>the first step, or first two steps. You have to learn to walk before you
>can run. Likewise, isolation is a minimal first requirement; if you can
>enforce isolation, that gives you something you can build on, but until
>you've got that, you've got nothing.
Yes. Even with just that you pretty much have nothing until you
can communicate rights between such domains. A program that
can run in its memory space but not do any I/O is pretty much useless.
>It's also worth noting that you can go a surprisingly long way by
>providing restricted execution environments (sandboxes, VMs, whatever)
>and focusing almost entirely on isolation, with very little sharing
However, this is where the true hackery (a pejorative term to be sure)
of such approaches becomes clear. The base OS on which such
"restricted execution environment"s run already has what should be
a perfectly good restricted execution environment - namely that of
a user process. Why does one have to put in all sorts of new
mechanisms? A virtual machine is TRULY extreme for this need.
A "sandbox" seems sort of a reasonable effort to make up for
a failed OS. Typically what sandboxes lack is step 2, the effective
communication of rights between domains.
>There are an awful lot of apps out there that require very
>little access to shared resources, and thus can be usefully confined
>even if you don't have sophisticated mechanisms for sharing. Of course,
>I'll also freely admit there are also many apps where this is not the case.
Sure. You can chroot them all and let them ask for what they need.
However, then you need to be able to give them what they ask for.
How do you do that? I believe you may as well use capabilities (i.e.
an object model) even if you are building it from scratch on top of
a sandbox mechanism built over a failed OS [by that I mean one that
doesn't solve this most fundamental problem that OSs were designed
for to begin with, e.g. see:
and give it a bit of leeway for age (1979)].
> >You also refer to work on "privilege separation of applications". I'm
> >afraid I don't know what you are referring to there.
>This is a term some have started to use for the process of decomposing
>a large application down into smaller domains, to better follow the
>principle of least privilege and to reduce the size of the TCB.
>Old idea, new name. One standard example is Niels Provos' work on
>privilege separation (http://www.citi.umich.edu/u/provos/ssh/privsep.html).
>Dan Bernstein's qmail is another good example of a thoughtful attempt
>at architecting a mail service with security in mind. There are others.
This is an area where my opinion isn't nearly as strong as my belief
in the capability/object model for communicating rights between
processes. However, I'll just state my opinion that I see this as
a tradeoff. As long as you remember that you are working at the
margin to protect parts of a service from other parts (generally)
then you can pretty easily trade off the costs (more initialization,
process overhead, etc.) vs. the benefits (bugs easier to find,
etc.). I don't consider it worthwhile putting a lot of effort into this
area until the more fundamental issues are dealt with. E.g. in
NLTSS at one time we decided to merge the protection boundaries
between a number of separate parts of our "OS" - the various internal
servers that it was made up of - into what amounts to a single shared
memory domain. They were still coded separately with separate
threads, used no memory sharing, etc., but they were put into a
single shared memory domain to cut down on the costs of the
exchange mechanism. A tradeoff.
> >What I was referring to was systems where each process has its
> >own ACL so that rights can be explicitly managed on a per process basis.
>Yes, Systrace, SELinux, and Janus all have that feature. In each case,
>there is some policy, specific to the app being executed, that says what
>that app can do. Part of that policy is effectively an ACL that says
>things like "this app can access file F".
>Now maybe you're wondering where that ACL is stored -- but that is an
>unimportant implementation detail. SELinux stores that ACL entry with
>the metadata for file F (I think), while Systrace and Janus store it
>in a separate policy file (typically, one policy file per application).
>But where it is stored makes very little difference.
>Whether they are syscall based or not is also irrelevant. That's just
>a question of how the policy is enforced. I don't see why this should matter.
>As you say, in many of these examples, the policies are fairly static,
>they don't give much (if anything) in the way to communicate rights
>across boundaries, and they're much more focused on isolation than on
>controlled sharing. Yes, absolutely. I'm not going to argue about that.
>As I mentioned, these tools are far from perfect, and they're not going
>to fully satisfy this crowd. They add real value, but they also have real limitations.
Perhaps this could be a separate discussion. If the above systems can
implement a full access matrix where the subject (at least) in the matrix
is a process then they are functionally equivalent to capability systems,
but with the sense of the matrix inverted. However, by "functionally" here
I just mean in terms of the access rights that can be expressed. I believe
that access list systems for rights communication are so horrific in terms
of practical programming as to be almost unusable for most purposes.
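The row-versus-column inversion mentioned here can be shown concretely. A minimal Python sketch of one access matrix sliced both ways (illustrative data only):

```python
# One access matrix: (subject, object) -> rights. Capability systems store
# it by row (each process holds its c-list); access-list systems store it
# by column (each object carries its ACL). Same expressible rights.

matrix = {
    ("editor", "draft.txt"): {"read", "write"},
    ("viewer", "draft.txt"): {"read"},
    ("editor", "log.txt"):   {"append"},
}

def clist(process):
    """Row slice: the c-list a capability system hands to one process."""
    return {obj: rights for (p, obj), rights in matrix.items() if p == process}

def acl(obj):
    """Column slice: the ACL an access-list system attaches to one object."""
    return {p: rights for (p, o), rights in matrix.items() if o == obj}

editor_rights = clist("editor")   # everything the editor process may do
draft_acl = acl("draft.txt")      # everyone who may touch draft.txt
```

The sketch makes the practical difference visible: to run a process under capabilities you consult one row, while under access lists granting that process a new right means editing some object's column.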
In this case I don't have a lot of experience to speak from (though I do
have some), so you can take that with a grain of salt. Still, it is something
that I do feel strongly about. We can take that topic up separately if you like.
Incidentally, I see you're at UCB David. I work at LBL's NERSC center
in Oakland and live in Berkeley,
so it wouldn't take all that much for us to get together and chat if that
might interest you. There is some value to such an email exchange in
having a record, involving others, etc., but I think there might also be
some benefit in getting together with a white board and coming to
an understanding of common ground and differences more quickly.
Of course scheduling such face-face meetings can be a bit tricky and
time consuming, but we could take that off-line and give it a try if you like.