[cap-talk] More Heresy: ACLs not inherently bad
Jonathan S. Shapiro
shap at eros-os.com
Thu Sep 18 11:28:23 CDT 2008
On Thu, 2008-09-18 at 08:52 -0700, Charles Landau wrote:
> Jonathan S. Shapiro wrote:
> I think it does matter. Suppose that "God" decides that Alice should
> have access to object L, while Bob should not. Further suppose Alice and
> Bob can communicate, and Alice wishes to proxy L to Bob. This is a
> problem that neither capabilities nor ACLs can solve.
Yes, and it is also outside the scope of the requirements as stated.
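The proxying problem above can be made concrete with a minimal sketch (all names here are illustrative, not from the post): if Alice holds a reference to L and can communicate with Bob, she can hand Bob a forwarder that exercises her authority on his behalf. Nothing in either a capability system or an ACL system mechanically prevents this.

```python
# Hypothetical sketch of the Alice/Bob proxying problem.
# Alice holds a capability to L; Bob does not. Because Alice and Bob
# can communicate, Alice can give Bob a proxy that forwards his
# requests through her own capability.

class Leaf:
    """The protected object L."""
    def read(self):
        return "contents of L"

class Proxy:
    """Alice's forwarder: exercises her capability on Bob's behalf."""
    def __init__(self, cap):
        self._cap = cap          # Alice's own capability to L
    def read(self):
        return self._cap.read()  # Bob's request, Alice's authority

L = Leaf()
alice_cap = L                    # "God" granted Alice access to L
bob_cap = Proxy(alice_cap)       # Bob was denied, yet reads L anyway
print(bob_cap.read())
```

The sketch shows only why the problem is outside scope: any system in which references are first-class and parties can communicate admits this construction.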
> > The challenge problem does
> > posit that the permissions may change. It does not specify mechanism for
> > that change.
> I take you to mean, the challenge problem does not specify a *policy*
> for changing permissions.
Indeed. The challenge does not state:

1. Any particular definition of what might constitute a consistent set
   of access rules, though it is presumed that such a notion will exist
   in any particular use case.

2. Any policy concerning who is authorized to revise which parts of the
   permissions, though it is assumed that such a policy will exist in
   any particular use case. It is further assumed in this challenge
   question that the task at hand requires that:

   - All users (clients) appear to traverse identically the same
     capability graph at all times, sharing a consistent view of
     the graph.

   - The task includes operations that alter the graph structure,
     as distinct from merely adding/removing/modifying the leaf
     objects. This is important because it reveals why replication and
     membrane strategies are probably unsatisfactory here in OS-based
     systems.

   Since the task imposes these requirements, it should be understood
   that we are focused on policies that both permit and selectively
   authorize such *general* graph updates, as opposed to just leaf
   updates.

3. Any particular mechanism by which authorities are specified,
   recorded, or enforced. However:

   - It is assumed that suitable mechanisms must exist in any real
     system.

   - Any particular choice of mechanism whose performance or storage
     requirements are prohibitive would of course be rejected.

   - Any particular mechanism in which revoking access by one party
     causes access by a second party to be lost fails the
     challenge.
And I would add, based on a round of discussions a year or so ago, that
if the implementation is unable to determine when reclaiming an internal
data structure might break something, the solution must be rejected,
because inability to reclaim constitutes a priori evidence that the
storage requirements are not boundable and therefore prohibitive.
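The revocation-independence requirement above is the well-known territory of Redell-style revocable capabilities: if each grant is routed through its own caretaker, revoking one party's access leaves every other party's access intact, and the storage cost is bounded at one small forwarder per grant. A minimal sketch (names illustrative, not from the post):

```python
# Hedged sketch of per-grant revocation via the caretaker pattern.
# Each grant gets its own caretaker object, so revoking Bob's access
# cannot disturb Carol's, and reclaiming a caretaker is safe once its
# grant is revoked -- the storage per grant is a single small object.

class Leaf:
    def read(self):
        return "L"

class Caretaker:
    """One revocable forwarder per grant (after Redell)."""
    def __init__(self, target):
        self._target = target
        self._revoked = False
    def revoke(self):
        # Sever only this grant; the target and other grants are untouched.
        self._revoked = True
        self._target = None
    def read(self):
        if self._revoked:
            raise PermissionError("capability revoked")
        return self._target.read()

L = Leaf()
bob = Caretaker(L)      # separate grant for Bob
carol = Caretaker(L)    # separate grant for Carol
bob.revoke()
print(carol.read())     # Carol's access survives Bob's revocation
```

The point of the sketch is the failure mode it avoids: a design in which both parties shared one forwarder would lose Carol's access when Bob's is revoked, and would thereby fail the challenge as stated.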
> Since capabilities are all about controlling permissions, and you are
> not specifying the requirement for that, perhaps this list is not the
> appropriate venue for this discussion.
What capabilities are all about is *expressing* permissions. They
provide mechanism, not policy. What we attempt to control under the
heading of policy is *authority*, and that is done by constructing
mechanisms *on top of* capabilities in most cases. The test at hand is
whether there may be a pragmatically important policy that capabilities
cannot express in any sensible form.
One of the tenets of the capability view -- particularly in the form it
takes on this list -- is that delegation and permission should never be
separated. The challenge problem that I am posing is a test of that
tenet.
If this isn't the right list in which to pose fundamental challenges to
the tenets of capability-based design, then I'm not sure which list *is*
the right list, but I'm sure that the primary purpose of this list has
come to an end. Not to worry, because as the person who operates the
list I'm perfectly willing to declare by fiat that respectful challenges
to the tenets of capability design are definitely fair game here.
> >> For example, if Oscar has read-only access to leaf object L, and stores
> >> a reference to L in node/directory D, to which Henry also has access, is
> >> it possible that Henry could thereby acquire write access to L? In other
> >> words, can Oscar grant more authority than he himself has? If so, what
> >> security properties can be assured?
> > Yes to all of the above, including the last.
> The last question doesn't admit to a "yes" answer.
Excuse me. You are right. I misread it as "*can* security policies be
assured". The answer is: yes, there exist some security policies that
can be assured, and many in this community will not judge them to be
sensible security policies because they admit social holes such as
proxying and/or insider disclosure.
But the real world can and does operate usefully on such policies,
really does need to be able to express them, and really does benefit
from partial mechanical enforcement, even when that enforcement is
imperfect.
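The Oscar/Henry example admits a "yes" under an ACL-style policy, and a small sketch shows why (all names and the tiny ACL table are illustrative assumptions, not from the post): if the shared directory D stores only a *name* for L, then each accessor's rights are looked up in L's ACL at use time, so Henry can end up with write access to L even though Oscar, who stored the reference, had only read.

```python
# Hedged sketch of the Oscar/Henry amplification under an ACL policy.
# The directory stores a name, not an authority; rights are resolved
# per-principal against L's ACL when the name is exercised.

acl = {
    "L": {"oscar": {"read"}, "henry": {"read", "write"}},
}

directory_D = {}                 # shared node/directory
directory_D["entry"] = "L"       # Oscar (read-only) stores a reference to L

def access(principal, name, op):
    """Check the ACL for this principal, not for whoever stored the name."""
    if op not in acl[name].get(principal, set()):
        raise PermissionError(f"{principal} may not {op} {name}")
    return f"{principal} did {op} on {name}"

obj = directory_D["entry"]
print(access("oscar", obj, "read"))    # within Oscar's rights
print(access("henry", obj, "write"))   # more authority than Oscar had
```

Under a pure capability policy the stored entry would itself be the (read-only) authority, and Henry could obtain at most what Oscar stored; the divergence between the two outcomes is exactly what the challenge question probes.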
> >> do you want to have a
> >> kernel that handles both (twice as big), or do you want to build one on
> >> top of the other (inefficiency)?
> > I don't buy "twice as big".
> Does the proof of correctness scale linearly with the size of the system?
In my experience, any proof of correctness tends to scale
*exponentially* with the size of the system, but that has nothing to do
with the discussion at hand. The answer to the question I think you are
trying to ask is that the kinds of mechanisms I am thinking about add a
very small amount of marginal code to the system, and that code is
almost entirely reusable by all relevant parties once written.
But also, the challenge question at hand isn't about *my*
implementation. It's about my concern that *no* viable implementation
satisfying these requirements appears to be possible on a pure
capability system.