[cap-talk] Re: "capabilities" as data vs. as descriptors - OS
security discussion, restricted access processes, etc.
Jonathan S. Shapiro
shap at eros-os.org
Thu Apr 29 22:21:52 EDT 2004
On Tue, 2004-04-27 at 13:56, Jed Donnelley wrote:
> > > "Harrison, Ruzzo and Ullman proved in 1976 that access list systems cannot
> > > prevent a program from disclosing information to anyone in the system...
> > > I wonder what is going on. Why is an access list based system any
> > > different from a capability based system (or a system like Unix or
> > > Windows)
> > > in that regard?
> >Briefly: because the rules that govern updates to the access graphs are
> >not the same, and the rules in ACL systems don't work right.
> Are we again talking about programs that are given a right (via an ACL)
> being restricted from making that right available to other processes that
> they are able to communicate with?
The reason that the safety problem is decidable in the capability case
is that the transmission of authority is governed by authority. That is,
in order for A to send some capability X to B, A must already be in
possession of a send capability to B. This is the result from the
Jones, Lipton, Snyder work on capability safety.
The reason that the safety problem is decidable and unsafe in ACL
systems is that no such restriction on transmission exists. In an ACL
system, A can transmit an authority to B **whether or not A holds an
authorized communication channel to B**.
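That transfer rule can be modeled in a few lines of Python (a toy sketch with invented names, not the Jones-Lipton-Snyder formalism): a process can only deliver a capability into an inbox that it already holds a capability to, and can only send capabilities it actually holds.

```python
# Toy model: capability transfer is itself governed by authority.
# All names here (Process, Inbox, grant, send) are invented for illustration.

class Inbox:
    def __init__(self):
        self.received = []

class Process:
    def __init__(self, name):
        self.name = name
        self.inbox = Inbox()
        self.caps = set()      # everything this process can name at all

    def grant(self, cap):
        self.caps.add(cap)

    def send(self, target_inbox, cap):
        # The transfer requires prior authority: A may only deliver to an
        # inbox it holds a capability to, and may only send caps it holds.
        if target_inbox not in self.caps:
            raise PermissionError(f"{self.name} holds no send capability to target")
        if cap not in self.caps:
            raise PermissionError(f"{self.name} does not hold that capability")
        target_inbox.received.append(cap)

a, b = Process("A"), Process("B")
secret = object()              # stands in for some resource capability
a.grant(secret)

try:
    a.send(b.inbox, secret)    # fails: A holds no capability to B's inbox
except PermissionError as e:
    print(e)

a.grant(b.inbox)               # A is now authorized to communicate with B
a.send(b.inbox, secret)        # only now can the transfer succeed
```

Because every edge in the access graph can only be created by following existing edges, the reachable set of authority is computable, which is what makes the safety question decidable here.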
> However, perhaps we can agree that it is
> important to at least have the ability to limit the rights that a program has
> more tightly than to just give it all the rights that a human user has.
We can certainly agree on this.
> I believe that the most significant difficulty is not in the underlying
> mechanisms to restrict access (though I believe the capability
> mechanism has huge advantages in that regard), but more in the
Jed: you seem to be arguing against yourself here. I agree that the
interfaces are key, but I think you have dismissed my comments about
supporting programmers without appearing to consider them.
One key reason that getting the interfaces right is hard is that the
designation of authority does not align with the incentives of the
stakeholders. To say that in plain English:
In capability systems, we observe that the *natural* thing for the
programmer to do usually turns out to be the *right* thing to do.
In other protection systems that I am familiar with, doing the
*right* thing is almost always the *unnatural* thing. It takes
conscious, sustained effort.
So when I talk about the value of descriptors as a means of enforcing
programmer mindset, I'm talking about something that is exceedingly
important in practical terms -- and something that empirically doesn't
happen reliably in "capabilities as data" systems.
I think that you and I agree on our two main points:
1. We want to make it possible and natural for developers to get
security "right" more often. Least privilege is a good starting
point.
2. We probably would agree that for most general-purpose applications,
memory safe programming languages (Java, C#) are preferable to
traditional systems languages like C.
Or to say this another way:
1. Take steps not to have more authority than you need.
2. Take steps that reduce the likelihood that your program will
get compromised, regardless of how much authority it holds.
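The first of these two steps can be sketched in Python (hypothetical helper names; a Unix file descriptor standing in for a capability): the code that does the risky work receives one open read-only descriptor rather than a path plus ambient authority over the whole filesystem.

```python
# Sketch of least privilege via descriptors. The function below can only
# operate on the single open file it was handed; it never sees a path and
# holds no ambient filesystem authority. Names are invented for illustration.

import os
import tempfile

def count_lines(fd):
    # Receives a descriptor -- a capability to one open file -- not a path.
    with os.fdopen(fd, "r") as f:
        return sum(1 for _ in f)

# Set up a sample file to operate on.
fd0, path = tempfile.mkstemp()
os.write(fd0, b"line1\nline2\n")
os.close(fd0)

# Acquire only the authority needed: read-only access to this one file.
fd = os.open(path, os.O_RDONLY)
print(count_lines(fd))    # -> 2
```

The same discipline is what descriptor-style interfaces encourage by default: the natural way to write the function is also the least-privileged way.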
The key point on which we disagree is the practical feasibility of doing
this in any current popular system API.
Can we provide some approximation to least privilege in a UNIX-like API?
Yes, I believe that we can. I cannot do it within the constraints of
POSIX, but I think I can do it within a sufficiently "POSIX like"
environment that the result would be acceptable to current UNIX
programmers and would have acceptable real porting costs (note that both
of these metrics -- which are the only important metrics for developer
acceptance in the real world -- are inherently subjective).
But if I have to live within the current POSIX or Windows API, forget
it.
> I believe, therefore, that anybody working on improving the security
> of a system like Unix by adding mechanisms like access lists or
> capabilities is deluding themselves if they think they are going to
> accomplish anything substantive with that work without addressing
> what I see as the more important interface issues.
Perhaps, but anybody who isn't looking at introducing fundamental
protection mechanisms that can be aligned with the interface designs
they need to create is equally delusional.
> >However, proofs that a model *cannot* be secure are another matter. A
> >broken implementation of a broken model can clearly make things worse,
> >but it is very useful to know that a perfect implementation still won't
> >work. This is the variety of proof considered in the HRU paper.
> I find it difficult to imagine how it could be proven that all Access List
> systems (in fact almost any access list system) are so insecure as to
> not be able to allow a process with an access right (limited by ACL) to
> choose to either share the right or not share the right with other
> processes on the system. If that is in fact what you believe the:
> author="Michael A. Harrison and Walter L. Ruzzo and Jeffrey
> D. Ullman",
> title="Protection in Operating Systems",
> journal="Communications of the ACM",
> paper to show, then perhaps it's worth my time to look it up in the library
> and go through it. I strongly suspect there is some misunderstanding about
> assumptions here, but if that's the only way to get past them it might be
> an amusing exercise.
Actually, there is no misunderstanding of assumptions. The proof
basically shows that an ACL system, as typically constructed, can
violate safety in two steps.
Could you design a hybrid to solve this problem? Yes. The point is that
once you see the proof you can stop working on the simple ACL system and
start working on the hybrid. Further, there is a tool (the method of
proof) that will very quickly tell you whether you are wasting your
time on any given design. That is a useful thing to know in practical
terms.
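The two-step violation is easy to model as a toy ACL system in Python (invented names; a caricature of the HRU machinery, not the paper's formal construction). The point the model makes is that the grant operation needs no communication authority at all:

```python
# Toy ACL system: object -> subject -> set of rights.
acl = {"O": {"A": {"own", "read"}}}

def grant(actor, obj, subject, right):
    # The only requirement is that the actor owns the object. Nothing in
    # the model asks whether actor holds any channel to subject.
    if "own" not in acl[obj].get(actor, set()):
        raise PermissionError(f"{actor} does not own {obj}")
    acl[obj].setdefault(subject, set()).add(right)

# Safety question: "can B ever read O?" It flips from no to yes in two
# primitive steps, and no rule in the model could have prevented step 1.
grant("A", "O", "B", "read")   # step 1: A grants B the right
assert "read" in acl["O"]["B"] # step 2: B exercises it
```

Contrast this with the capability transfer rule above: here the access graph grows by fiat of the owner, with no requirement that an authorized path to the recipient already exist.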
> Hmmm. Perhaps we should clarify the above point a bit. The notion of
> "capabilities as data" still includes quite a variety of potential
> e.g. as discussed in:
> All include both information in the server of the resource as well
> as information from the process requesting access. One extreme
> is essentially an access list mechanism where the server remembers
> which processes (this assumes that it can know where requests
> come from) have the rights to which resources.
The fatal problem with this is that the responsibility for filtering
must not live in the server. Placing it in the server invites denial of
service.
In the current internet architecture, there is no choice but to
place the defense in the server. From this we can conclude that it is
impossible to guard resources in an internet-connected application given
the current design of the internet.
I'm not saying you can ignore the issue. I'm saying that there are
places in the local-case system design where you require greater
robustness than is feasible under the constraint space of the internet.
> I believe the network environment is where the security problems relevant
> to the present and future lie and that any effort to restrict the domain of
> discourse to what you refer to as the "simpler (local)" case are no more
> helpful than securing a system by disconnecting it from the network.
On this point we will continue to disagree. Eventually we need to
consider the internet case, but one does not successfully solve big
problems by solving them all at once in their entirety. People have been
taking your approach for 40 years, and they have made negative progress.
Our approach is a bit slower, but it's letting us figure out which
problems in the internet have to be solved to give us a long-term
solution.
Both approaches are illegitimate.