Re: Communicating Conspirators
Ka-Ping Yee (ping@lfw.org)
Sat, 20 Nov 1999 22:18:45 -0800 (PST)

On Thu, 18 Nov 1999 shapj@us.ibm.com wrote:
>
> I think that the philosophical problem with ACLs is not that they describe
> unenforceable policies (they do not), but rather that tagging programs with
> something called a "user id" conveys a deeply misleading intuition about
> what policy and protections are actually being enforced by the mechanism;
> the reality has nothing to do with users.

Indeed. It seems to me that an ACL system can be modelled as a capability system where the only manipulable capabilities are capabilities to usernames and passwords, i.e. authority-carrying user identities. Is this an accurate statement?

If i may be allowed to proceed from here, then, it is something like an extremely coarse-grained capability system: either you can transfer every authority you have (by giving someone else your password) or none at all. This leads me to a point that occurred to me some time ago but that i never got around to saying, because i wasn't sure whether it was an intelligent or a stupid thing to say: this lack of fine granularity is actually a deterrent to transferring authority. Since people are inclined to think twice before giving away all of their authorities, they might not do it at all. It's not that people *can't* transfer their identities; it's that they don't *want* to.
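To make the granularity contrast concrete, here is a minimal sketch of my own (not taken from any real system; the names FileCap, AclSystem and UserIdentity are invented for illustration): in the capability model the smallest transferable unit of authority is a single object, while in the ACL-as-capability model the smallest transferable unit is an entire identity.

    # Hypothetical sketch in Python; FileCap, AclSystem and UserIdentity
    # are invented names, not a real API.

    class FileCap:
        """Capability model: holding this object *is* the authority to read one file."""
        def __init__(self, contents):
            self._contents = contents
        def read(self):
            return self._contents

    class AclSystem:
        """ACL model: authority is looked up by user identity on every access."""
        def __init__(self):
            self._passwords = {"alice": "sesame"}
            self._acl = {"report.txt": {"alice"}}     # file -> users allowed to read
            self._files = {"report.txt": "quarterly numbers"}
        def read(self, username, password, filename):
            if self._passwords.get(username) != password:
                raise PermissionError("bad credentials")
            if username not in self._acl.get(filename, set()):
                raise PermissionError("not on the ACL")
            return self._files[filename]

    class UserIdentity:
        """The only thing the ACL world lets you hand around: a whole identity."""
        def __init__(self, system, username, password):
            self._system, self._username, self._password = system, username, password
        def read(self, filename):
            return self._system.read(self._username, self._password, filename)

    # Fine-grained transfer: Alice lends Bob the authority to read exactly one file.
    bob_gets = FileCap("quarterly numbers")

    # Coarse-grained transfer: the smallest thing Alice can lend Bob is everything
    # she can do, because the password carries all of her authority at once.
    bob_also_gets = UserIdentity(AclSystem(), "alice", "sesame")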

This creates the perception of a solid boundary, which may be seen as a "feature" of ACLs that capabilities don't have. Now, you may call this feature illusory in a system where users can invoke specific authorities on behalf of friends, but (a) not all systems provide this programmability and (b) it's just hard to do for most people. Strictly speaking, the boundary is not solid, but in practice it can perhaps be made almost arbitrarily solid; and once you consider that boundary to be solid, ACLs start appearing to enforce restrictions that capabilities cannot.
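To illustrate what such programmability might look like (a purely hypothetical sketch, not something any particular system provides; read_as_alice and the HTTP framing are invented), Alice could run a tiny proxy that exercises one specific authority on her behalf. Her password never leaves her machine, so the boundary is not strictly solid, but she has to write and operate a service to get that effect, which most users will never do.

    # Hypothetical sketch in Python of lending out exactly one authority.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SHARED_FILE = "report.txt"          # the single authority Alice is lending out

    def read_as_alice(filename):
        # Stand-in for "log in as Alice and read the file" against the real system;
        # Alice's credentials stay inside this function and never go over the wire.
        if filename != SHARED_FILE:
            raise PermissionError("Alice only lends out this one file")
        return b"quarterly numbers"

    class LendOneAuthority(BaseHTTPRequestHandler):
        def do_GET(self):
            try:
                body = read_as_alice(self.path.lstrip("/"))
            except PermissionError:
                self.send_error(403)
                return
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Bob fetches http://alice.example:8000/report.txt and gets exactly one
        # of Alice's authorities; every other request as Alice is refused.
        HTTPServer(("", 8000), LendOneAuthority).serve_forever()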

Put another way:

People sometimes talk about security in terms of comparative cost: the system can be considered successful if the cost of defeating it is sufficiently high (and *known* to be sufficiently high) that it is no longer profitable to try. The deterrent can be increased if compromising the system is made more *expensive* or more *difficult*. I am suggesting that we also need to consider a third possibility: what if people choose not to compromise the system because it is too *dangerous* to themselves?

The above is not an argument for or against capabilities or ACLs; it's merely an attempt to analyze what it is that may seem so seductive about ACLs, and maybe to help explain the source of what Ralph Hartley sees as a deficiency in the capabilities model.

Am i making any sense?

"In the sciences, we are now uniquely privileged to sit side by side with the giants on whose shoulders we stand."