I can't tell if this went out to the list or just bounced -- sometimes Majordomo is a pain in the butt. Apologies if you get it twice.
This discussion seems more appropriate for cap-talk than for the EROS lists, as its implications aren't limited to EROS. I've taken the liberty of adding each of the original contributors to the cap-talk list.
[ Andre Stemmet wrote: ]
>I think a good way to describe the difference between ACL and capability
>systems is this: in an ACL system the user has the lock and the object
>contains a set of keys, one or many of which would unlock the "door to
>the user". In a capability system the object has one or more locks and
>only one key exists for each lock. This key may be given to any other object.
Personally, I don't care for the locks and keys metaphor. A lock is generally assumed to have a single key, so the metaphor doesn't really describe what ACL systems do. Also, several ACL systems have "exclude this principal in spite of other rules that might allow them" facilities.
However, if we are going to use this metaphor, let's get the associations as close to right as may be.
In a capability system, each distinct "interface" exported by the object (distinct in the sense that two distinct interfaces have distinct capabilities that name them) has a lock, and all capabilities to that interface are keys to that lock. As in real life, the keys can be copied.
In an ACL system, there isn't really a good analogy. The best I can think of is that every user has a key that is particular to that user, and that you add the user's lock to the object if the user should have access. In contrast to real life, the keys cannot be copied.
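The contrast between the two metaphors can be made concrete in code. The following is a minimal, hypothetical sketch (all class and method names are illustrative, not any real system's API): a capability is an unforgeable reference whose possession is the authority and which can be copied, while an ACL-protected object checks the caller's identity, which cannot be handed to anyone else.

```python
class Capability:
    """An unforgeable reference: possession IS the authority.
    Like a physical key, it can be copied (delegated) freely."""
    def __init__(self, obj, interface):
        self._obj = obj
        self._interface = interface

    def invoke(self, *args):
        # No identity check: whoever holds the capability may use it.
        return getattr(self._obj, self._interface)(*args)


class ACLObject:
    """Access mediated by a per-object list of principals; the 'key'
    is the caller's identity, which cannot be transferred."""
    def __init__(self):
        self._acl = set()

    def grant(self, principal):
        # Adding the user's 'lock' to the object.
        self._acl.add(principal)

    def invoke(self, principal, method, *args):
        if principal not in self._acl:
            raise PermissionError(principal)
        return getattr(self, method)(*args)


class Document(ACLObject):
    """A toy protected object used both ways below."""
    def read(self):
        return "contents"
```

With `doc = Document()`, a holder of `Capability(doc, "read")` can invoke `read` with no identity check, whereas `doc.invoke("bob", "read")` raises `PermissionError` until `doc.grant("bob")` has added Bob's "lock" to the object.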
[Al Gilman wrote:]
> Capability systems create key objects which have both "open lock" and
> "delegate" methods.
> It is not safe to give a key which can be arbitrarily transferred. You
> want to consider systems where one can authenticate the delegation that
> goes with the key before actually opening the lock.
This is part of a longstanding and involved debate. I think it can fairly be said that Mark Miller sits on one side and (at this point) I sit on the other. The debate is over what policies are feasible. The problem is that such authentication may not, in principle, be possible.
We agree, I think, that IF it is feasible to enforce per-principal policies there are circumstances in which it might be desirable to do so. At the risk of putting words in his mouth, I think Mark would add that the proper circumstances are usually misunderstood, that in the end principal-based access control causes more harm than good, and that it should therefore be omitted in practice. I'm not sure I agree that the "usually misapplied" needs to be true for any essential reason, but I believe it is true in today's commodity systems, and wherever it is true his conclusion that it should be omitted seems reasonable [ignoring the unfortunate reality that a principal-based mechanism appears to be a requirement to sell real systems].
Mark *has* said on many occasions that capabilities capture those protections that are enforceable in the real world, whereas ACL's assert protections that cannot be enforced in the real world. I do not entirely agree.
The heart of our difference, I think, lies in a difference in understanding about what the goals of protections are, and what security policies are enforceable.
All of us (Mark, Norm, and I) agree that if you give authority to a principal the principal can expose the authority. For example, the principal can proxy for someone else. This leads to our first principle of access control:
Whether the program "wants" to disclose because it is honoring the principal's intent or because it is a trojan horse is something that the computer cannot discern.
We must now explore what is meant by "unrestricted", or rather, what sorts of restrictions we might impose that would limit the propagation of authority. One restrictive mechanism is "confinement", as provided by the EROS constructor or the KeyKOS factory. If a program runs within a confinement boundary it can only disclose things through authorized channels, which the user (or some agent) controls. In practice, in real secure systems, it has been found that users are not successful at managing such restrictions. Some form of assistance is required.
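The shape of a confinement boundary can be sketched as follows. This is purely illustrative (EROS and KeyKOS enforce confinement in the kernel via the constructor/factory, not in application code, and all names here are hypothetical): a confined program is handed only the channels the user authorized, and since it holds no other capabilities, anything it discloses must flow through one of them.

```python
class Channel:
    """An authorized outbound communication path."""
    def __init__(self):
        self.messages = []

    def send(self, msg):
        self.messages.append(msg)


def run_confined(program, authorized_channels):
    """Run `program`, giving it ONLY the listed channels.
    In a real system the boundary is kernel-enforced; here it is
    merely a convention the sketch follows."""
    return program(tuple(authorized_channels))


def untrusted(channels):
    # The program may compute on sensitive data...
    secret = "payroll data"
    # ...but it can only disclose through what it was handed.
    for ch in channels:
        ch.send("status: done")
    return len(channels)
```

The point of the sketch is that the user's management burden is exactly the `authorized_channels` list, which is why unassisted users tend to get it wrong.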
Confinement, while a necessary "primitive", is a much stronger construct than we actually need. The lattice policy, for example, may be good enough in practice to prevent undesired authority leaks, and can be built (as in KeySafe) by labeling of confined compartments [though this is not the usual description]. It is certainly enforceable.
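A lattice policy over labeled compartments can be stated in a few lines. The sketch below is a generic "no downward flows" rule in the Bell-LaPadula style, not KeySafe's actual implementation; the level names and compartment sets are illustrative.

```python
# Ordered sensitivity levels (illustrative names).
LEVELS = {"public": 0, "secret": 1, "top-secret": 2}


def dominates(dst, src):
    """A label is a (level, compartment-set) pair.  dst dominates src
    when dst's level is at least src's and dst's compartments include
    all of src's."""
    dst_level, dst_cats = dst
    src_level, src_cats = src
    return LEVELS[dst_level] >= LEVELS[src_level] and src_cats <= dst_cats


def may_flow(src, dst):
    # Information may flow only upward in the lattice.
    return dominates(dst, src)
```

Under this rule, data labeled `("public", set())` may flow into `("secret", {"crypto"})`, but not the reverse, and two incomparable compartments cannot exchange data at all: exactly the sort of leak restriction the text describes, and clearly mechanically enforceable.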
If we are concerned about the behavior of programs (as distinct from the behavior of people), principal-based policies can also be enforced. Provided that appropriate compartments exist from the beginning, a reference monitor can ensure that access rights do not cross these boundaries in the wrong ways, and that transfers across boundaries are selectively rescindable. The reference monitor may allow exceptions where "trusted objects" are concerned. There has been a good deal of work on path-based policies. John Rushby, for example, has a clean set of formulations in "Noninterference, Transitivity, and Channel-Control Security Policies" (SRI International tech report CSL-92-02, http://www.csl.sri.com/reports/postscript/csl-92-2.ps.gz).
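The reference-monitor idea above can be sketched as a small mediator (structure and names hypothetical; a real monitor sits below the applications, in the TCB): it checks every cross-compartment transfer against a static policy and records each grant so that it can later be selectively rescinded.

```python
class ReferenceMonitor:
    def __init__(self, allowed_pairs):
        # Static policy: which (source, destination) crossings are legal.
        self._allowed = set(allowed_pairs)
        # Live cross-boundary grants, each individually rescindable.
        self._grants = set()

    def transfer(self, src, dst, cap_id):
        if (src, dst) not in self._allowed:
            raise PermissionError(f"{src} -> {dst} crossing denied")
        self._grants.add((src, dst, cap_id))

    def rescind(self, src, dst, cap_id):
        # Selective revocation of a single cross-boundary transfer.
        self._grants.discard((src, dst, cap_id))

    def is_live(self, src, dst, cap_id):
        return (src, dst, cap_id) in self._grants
```

A monitor built this way could also carve out exceptions for designated "trusted objects" by consulting a second allow-set before the policy check; that refinement is omitted here for brevity.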
This doesn't really contradict MarkM's comment that capabilities provide all of the protection that is enforceable, since KeySafe is built on top of a capability-based mechanism.
2. Trusted vs. Untrusted Programs
One can imagine a hybrid design, in which one must both hold a capability and authenticate the holder to actually use the capability. SCAP (Karger, 1988) is an example of such a design. Monads (UNSW, early) is another. In such systems, an untrusted program can perhaps be given more latitude to store capabilities. There are two problems here, both associated with authentication:
The long and short of it is that if some programs are trusted (such as authentication programs), and these programs can be shown to be in use at the appropriate times, the arguments about what protections are feasible change in interesting ways.
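The hybrid scheme can be sketched as follows. This is a toy model, not SCAP's or Monads' mechanism (both did the check below the application layer), and all names are hypothetical: invoking the capability requires both possession and a successful check that the invoker is the bound holder, so a capability stored by (or leaked to) an untrusted program is useless to anyone else.

```python
class HybridCapability:
    """A capability bound to a principal: possession alone is not
    enough to exercise it."""
    def __init__(self, obj, method, holder):
        self._obj = obj
        self._method = method
        self._holder = holder  # the principal bound to this capability

    def invoke(self, principal, *args):
        # Stand-in for a real authentication step: the invoker must
        # be the authenticated holder, not merely a possessor.
        if principal != self._holder:
            raise PermissionError("holder authentication failed")
        return getattr(self._obj, self._method)(*args)


class Printer:
    """A toy protected object."""
    def print_page(self, text):
        return f"printed: {text}"
```

Everything then turns on how trustworthy the authentication step itself is, which is exactly where the two problems mentioned above bite.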
Jonathan S. Shapiro, Ph. D.
IBM T.J. Watson Research Center
Phone: +1 914 784 7085 (Tieline: 863)
Fax: +1 914 784 7595