Why Capabilities and Persistence are Essential

shapj@us.ibm.com
Thu, 11 Nov 1999 10:04:33 -0500


Various followups to MarkM's comments, mostly in agreement, some in
clarification.

> There are no users in the box, only
> programs, objects, and processes.

This is true, but it lends itself to a mistaken understanding of intent.
There is an initial set of authority associated with every account by the
login program.  The login agent and the initial shell must both be trusted
in any meaningfully secure system.  Because these are both trusted
programs, we may think of the authority they wield as held by the user and
wielded consistent with the user's intent.  The question is then to
understand what happens as untrusted programs get executed.

> [Mandatory vs. discretionary]

From the perspective of a given running program, all of the access controls
that constrain its behavior are mandatory.  If they weren't, they wouldn't
be access controls.

In the literature, the terms "mandatory" and "discretionary" capture the
distinction between access controls volitionally applied by the user (or if
you prefer, by some program operating on behalf of and at the direction of
the user) and access controls that are applied independent of the volition
of any program executed (transitively) by the user.  That is, mandatory
controls are prescriptive: "no matter what the user's programs say, the
following actions are not permissible."  Discretionary access controls can
further narrow the set of permitted actions, but cannot expand it.  The
"moves in a board game" analogy is interesting, and I am going to think
about it.  It's a way of looking at things I hadn't considered, and I want
to consider it a bit before I agree with it.
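
Coming back to the narrowing relationship for a moment, a minimal sketch
may make it concrete (C, rights as bit masks; the names are mine and
purely illustrative):

     /* Illustrative only: the mandatory policy fixes an outer bound on
      * what is permitted; discretionary choices select within that
      * bound but can never enlarge it. */
     #define R_READ  01
     #define R_WRITE 02
     #define R_EXEC  04

     unsigned effective(unsigned mandatory, unsigned discretionary)
     {
         /* Intersection: discretionary controls narrow, never expand. */
         return mandatory & discretionary;
     }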

I personally think that the mandatory/discretionary distinction is not well
thought out.  In some sense, mandatory and discretionary access can be
thought of as nested scopes, and the security community has for some reason
declared "2" to be the right nesting depth.  I see no reason why this
should be, given that there are multiple interested parties.

> When user MarkM instantiates a program written
> by programmer Mitnick, should we ascribe MarkM's
> or Mitnick's intent to the resulting object, Bob?
> All user-centric analysis of which I am aware,
> and the only one I can imagine in which the above
> text can be meaningfully understood, would say
> "MarkM's, perhaps with some qualifications".

I believe you. :-)

First, I caution that your statement betrays a fairly complete lack of
familiarity with the serious security literature.

In any case, I think you are battling a straw man, because you are engaged
in the assumption that what is going on is user-centric analysis.  Within
the security community, only trusted programs are assumed to obey their
alleged behavior (which may or may not be to obey the intent of the user).
Other programs are assumed compromised at all times, and it follows
immediately that the user-centric analysis you propose is ludicrous.  The
user identity serves to narrow the scope of potential damage along an
additional axis.  It also serves to tie the execution to someone in the
sense that there is someone you can go to and say "Can you help us figure
out what you were running so we can exterminate the compromised program?"
Traceability is not culpability.

> ... (as Bill Frantz first pointed out to me), the
> only accurate answer is "Mitnick's, perhaps with
> some qualifications".

Bill's statement is not right.  To see why, some notation is helpful:

     a/x    => the intent of a as mediated by program x
     a/x/y  => the intent of a as mediated by program x, in turn
               mediated by program y

When a program is trusted, we may elide it from the chain.  This notation
is nonstandard, but the more standard notations become cumbersome when
trying to look at combinations of programs.

The fundamental leverage of a capability system is that for any user A and
program X, we know that A/X <= A.  That is, authority is only ever
reduced.  [I ignore for the moment the issue of programs that hold
authority the user doesn't hold.]  While this has been known for some
time, the proof that it
is true is decidedly NOT obvious; it's a fairly hard-won lemma out of the
work Sam and I did.
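
For concreteness, here is a toy model of the attenuation property (my own
simplification, not the actual proof): a program's capability list is
populated only by copies from its invoker's, so its authority cannot
exceed the invoker's by construction.

     /* Toy model.  A process's authority is the set of capabilities in
      * its c-list.  The only way a capability enters X's c-list is by
      * copy from A's, so auth(A/X) <= auth(A) by construction.  (This
      * elides, as above, programs holding authority of their own.) */
     #define MAXCAPS 32

     struct cap   { void *object; unsigned rights; };
     struct clist { struct cap caps[MAXCAPS]; int ncaps; };

     void grant(struct clist *invoker, int i, struct clist *program)
     {
         program->caps[program->ncaps++] = invoker->caps[i];
     }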

The problem with Bill's statement is that the program X has no authority of
its own; it has only such authority as A may convey to it.  Therefore,
while the intent of the actions should be attributed to X, the *scope* of
those actions is defined by A.  One therefore cannot drop either user or
program in attempting to tease out issues of intent.

Going back to my previous comment, though, I think intent puts the issue in
the wrong framing.  The question is not culpability but traceability.  The
goal is not to decide who was a bad individual, but rather to decide which
program needs repair or possibly which user needs education.


Onwards to other uses of the notation.

The basic problem with a pure ACL system is that A/X == A.  The basic
advantage is that the authority wielded anywhere in a chain such as
A/(B/X/Y) is either A's or B's, but never an admixture of the two.  The
absence of admixture is an advantage in practice.  The flaw is that user
identity is not a good "one size fits all" basis for restricting
admixture.

The basic problem with a capability system is that when A runs a program
authored by B, the authority of that program is A/(B/X), which may include
the total authorities of A and B.  Granting that there are times when we
want this, it's important to remember that users are not computers and
empirically cannot be relied on for diligence.  Mandatory access controls
are therefore required.  MarkM argues that admonition systems are the best
you can do.  The problem is that users rapidly learn to ignore them.  How
many times do you read the stupid message box before clicking "okay" or
hitting return in M$ Windows?
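
In the same toy bit-mask style as before (again, names mine), the two
regimes contrast as follows:

     /* Pure ACL regime: the process simply *is* the user, so A/X == A. */
     unsigned acl_auth(unsigned user_auth)
     {
         return user_auth;
     }

     /* Pure capability regime: a program authored by B may carry
      * authority of B's alongside whatever A passes in, so the running
      * instance A/(B/X) can wield an admixture of the two. */
     unsigned cap_auth(unsigned passed_by_A, unsigned held_by_B)
     {
         return passed_by_A | held_by_B;
     }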

It may well be (indeed, I think it is) that we need something considerably
finer than compartments the size of users here, but we certainly need
compartments that are "user sized" in some situations.

More fundamentally, we need subject-oriented ways to slice authorities in
addition to object-oriented ways (subject == process, I'm using the terms
per the access matrix model).
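
To fix the terminology, this is just the standard access matrix picture
(a sketch; sizes arbitrary):

     /* Rows are subjects (processes), columns are objects.  rights[s][o]
      * holds the operations subject s may perform on object o.  A
      * capability list is a row slice; an ACL is a column slice.
      * "Subject-oriented slicing" means carving authority row-wise as
      * well as column-wise. */
     #define NSUBJ 16
     #define NOBJ  64

     unsigned rights[NSUBJ][NOBJ];

     unsigned *cap_list(int s) { return rights[s]; }  /* row: subject view */
     /* The ACL of object o is the column rights[0][o]..rights[NSUBJ-1][o]. */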

>>In general, real users are imperfectly consistent in applying
>>voluntary policies, it is therefore sometimes necessary to engineer
>>systems that apply them without allowing the user volition where
>>extremely sensitive content is concerned.
>
>You're hinting at more than you're saying, and I'm not succeeding at
>reading between the lines.

I'm saying that users make mistakes, and that as (e.g.) the owner of a
company I may wish to deprive my employees of certain opportunities to make
mistakes.  Even if I don't eliminate all possible mistakes, eliminating the
likely ones remains beneficial.


>There is an important subjective dimension that the above phrasing skips:
>"a program that *we* know obeys the intent of its user."  We who?
>Different parties will make different choices about which assurance
>processes they trust, and this is as it should be.  There is no one party,
>process, or assurance that everyone will ever trust, I hope, because there
>is none that trustworthy.

Well, I don't want to get into whether there is one originating party, but
for practical purposes we may trust both natural law and mathematics.
Programs whose behavior has been formally shown to conform to specification
may be trusted by all parties to do what they are supposed to.  The proof
itself may need to be vetted by many parties, but in the end it is either
correct or incorrect.

This doesn't contradict your "many TCBs" model.  It provides a basis on
which it can stand.

For a long time I shared the impression that TCB stood for the
"universally trusted set" of function.  I've since come to learn that TCB has
been used for some time in the security community in a program-centric way.
The universal TCB is sometimes referred to as the "system TCB" when clarity
of distinction is needed.

>>            [implicit use design:]
>>            allow(p, o, op) :=
>>                 (exists SIG s.t. op in SIG) and
>>                 ({o, SIG} in p)
>
>I didn't get either of these.  What's "p"?

/p/ is the process performing the invocation.
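
Spelled out in code, the check looks like this (a sketch; the concrete
structure layout is mine, purely illustrative):

     #include <stdbool.h>

     #define NOPS  16
     #define NCAPS 32

     struct object;                                  /* opaque */
     struct sig  { int ops[NOPS]; int nops; };       /* an interface: a set of operations */
     struct cap  { struct object *o; struct sig *s; };  /* a {O, SIG} pair */
     struct proc { struct cap caps[NCAPS]; int ncaps; };

     /* allow(p, o, op): does p hold some {o, SIG} with op in SIG? */
     bool allow(struct proc *p, struct object *o, int op)
     {
         for (int i = 0; i < p->ncaps; i++) {
             if (p->caps[i].o != o)
                 continue;
             struct sig *s = p->caps[i].s;
             for (int j = 0; j < s->nops; j++)
                 if (s->ops[j] == op)
                     return true;    /* exists SIG s.t. op in SIG */
         }
         return false;
     }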

> In both cases, the same selector/order-code sent to
> different receivers will provoke different behaviors.  I'm worried that
your
> notion of "operation" does not allow this.

An operation /op/ is an operation on some particular object instance /o/.
If the same selector on the same object yields different behavior at
different times, we are no longer doing the capability model.

> To my mind, it isn't a capability if it isn't
> passable by capability rules.

I understand. However, I urge you not to corrupt 30 years of terminological
clarity.  They are still capabilities if they cannot be passed or stored.
We agree that systems which cannot pass or store capabilities are crippled.
I think it is accurate w.r.t. the literature to say that such systems are
not called "capability systems".  That is, not all uses of capabilities
occur in capability systems.

> Unix's file descriptors are not passable by capability rules and are not
> normally considered capabilities, for which we should be grateful.

Actually, they are passable.  Consider the I_SENDFD operation in System V
STREAMS.
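
From memory, the mechanism looks roughly like this (a sketch of the SVR4
interface, error handling omitted; the BSD analogue is SCM_RIGHTS over a
Unix-domain socket):

     #include <stropts.h>    /* System V STREAMS: I_SENDFD, I_RECVFD */

     /* Sender: pass open descriptor fd across a STREAMS pipe. */
     void send_fd(int pipefd, int fd)
     {
         ioctl(pipefd, I_SENDFD, fd);
     }

     /* Receiver: gets a fresh descriptor for the same open file,
      * together with the sender's credentials. */
     int recv_fd(int pipefd)
     {
         struct strrecvfd r;
         ioctl(pipefd, I_RECVFD, &r);
         return r.fd;        /* r.uid and r.gid identify the sender */
     }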

>I go on about this because, even though I did not understand your earlier
>formalism, it did not seem to imply the capability passing rules.  I could
>of course be confused.

Whether capabilities are passed as a consequence of performing the
operation is orthogonal to whether the operation is permitted.  I was
trying to capture only the authorization check, not the operation
semantics.

>>Another way to say that is this: if all we want is setuid programs, we
>>can associate attributes with the file image from which the program
>>runs, either in the form of a setuid bit or in the form of a side
>>table of exceptional authorities (as in VMS, or using cryptography as
>>in Java).
>
>I didn't understand the connection between Java crypto and the rest of
>this paragraph.

Hmm.  I said it pretty badly.  Here's another try:

Setuid is a mechanism for binding a particular bundle of authorities --
those associated with a user -- to a binary image.  Java-style
capabilities (I don't endorse their use of the term) also bind authorities
to binaries, though they perform the association using digital signatures.
The point is that these methods only get you so far; neither method is good
enough to provide different instances of programs with distinct
authorities.
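
A caricature of the limitation (names mine): both schemes amount to a
per-image table, so every instance of an image necessarily gets the same
bundle.

     /* Authority is a function of the binary alone.  There is no
      * instance parameter, hence no way to give two instances of the
      * same image distinct authority. */
     #define NIMAGES 8

     struct image { int id; };
     unsigned image_auth[NIMAGES];   /* per-image authority table */

     unsigned authority_of(const struct image *binary)
     {
         return image_auth[binary->id];
     }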

> Isn't the argument about Schwab vs Quicken wallets a security argument?

Yes and no.  It's security in the sense that we are protecting the assets.
It's functionality in the sense that even if we don't do the protection
part we still need instance-specific mechanisms to capture the user's
intent.  Security is something whose success is invisible to the user.
Expressing what the user means to do is something the user can see
immediate value in having.


Jonathan S. Shapiro, Ph. D.
Research Staff Member
IBM T.J. Watson Research Center
Email: shapj@us.ibm.com
Phone: +1 914 784 7085  (Tieline: 863)
Fax: +1 914 784 7595