Why Capabilities and Persistence are Essential

Mark S. Miller markm@caplet.com
Thu, 11 Nov 1999 00:44:38 -0800


At 11:18 AM 11/9/99, Jonathan Shapiro wrote:
>The security community distinguishes two types of security policy:
>DISCRETIONARY and MANDATORY. Discretionary policies are those applied
>volitionally by some user. Mandatory policies are those applied
>whether the user wants them or not.  Note that discretionary policies
>can usually be overcome by viruses, as there is no way to distinguish
>between intent expressed by the user via a well-behaved program and
>intent expressed by a virus suborning some program that holds
>equivalent authority.
>
>Confinement is an interesting middle ground policy.  It is
>discretionary in the sense that the user may elect to run a program in
>an unconfined fashion.  It is mandatory in the sense that once
>started, the program cannot violate the confinement boundary.  The
>existence of such a mugwump policy suggests that the whole terminology
>framing of security discussions needs a re-examination.

This way of speaking may be conventional, but I find it strange.  There are 
no users in the box, only programs, objects, and processes.  Of course, 
there are users outside the box, and the security architecture inside the 
box must serve the security needs of the users outside the box, but that 
doesn't necessarily mean that analysis best starts with the users.  Let's 
call the implicit assumption in the above text the "user-centric" view.  The 
alternative might be named the "process-centric" view, if we want to speak 
specifically in an operating system context, or the "object-centric" view if 
we want to be neutral about the implementation technology.  From here on, I 
will contrast only "user-centric" and "object-centric", where by 
"object-centric" I include "process-centric".

Jonathan's virus example motivates well why I prefer the object-centric 
view.  When user MarkM instantiates a program written by programmer Mitnick, 
should we ascribe MarkM's or Mitnick's intent to the resulting object, Bob?  
All user-centric analysis of which I am aware, and the only one I can 
imagine in which the above text can be meaningfully understood, would say 
"MarkM's, perhaps with some qualifications".  

However (as Bill Frantz first pointed out to me), the only accurate answer 
is "Mitnick's, perhaps with some qualifications".  The Bob's behavior is 
encoded by this program, and Mitnick wrote the program.  Bob only serves 
MarkM's intent to the degree that it was Mitnick's intent for it to serve 
MarkM's intent.

What are some of the qualifications?

MarkM, let's say, *chose* to instantiate this program of Mitnick's.  MarkM 
made this choice presumably because of a belief that the program would serve 
some intent of his, so MarkM's intent & belief do enter the picture.  In 
this sense, Mitnick is a source of variation and MarkM, by making this 
choice, a source of selection.  The resulting object is therefore a mixing 
of MarkM's intent and Mitnick's intent.  As Bob (as opposed to MarkM or 
Mitnick) chooses to instantiate programs written by yet other programmers, 
we have a further mixing of original intent.  To try to untangle from all 
this which human user's intent a given object is supposed to express is 
hopeless.  We need another model.

In the user-centric view, there are multiple separately interested parties 
-- humans -- but they are very slow and large compared to computation.  In 
the object-centric view, every object is potentially a separately interested 
party.  Bob has a relationship to MarkM, Mitnick, and to other objects.  In 
this view, MarkM and Mitnick are simply objects!  Humans are important 
objects because they command much authority: they own the hardware and pay 
the bills.  Overwhelmed by the prospect of analyzing each object as a 
separately interested party?  See Subjective Aggregation: 
http://www.erights.org/elib/capability/ode/ode-protocol.html#subj-aggregate

Now finally, I think I can give clear definitions of mandatory vs 
discretionary security, for objects.  This is directly adapted from the 
"Game Rules" section of the Ode paper.

Objects in a secure system are players of a board game.  For any given state 
of the board, there are moves that a given player may or may not do.  "You 
cannot move Pawn to King 4 because it would put you in check."  This is 
mandatory security.  Of all the moves a player is allowed to make, the 
player, following the logic of its internal behavior, chooses to make a 
particular move, changing the state of the board, and therefore the moves 
subsequently available to all the players (including itself).  These 
choices, and the changes in allowable choices that result, are discretionary 
security.  "Pawn to Rook 4."

The principle of least authority & confinement are ways for MarkM to impose 
mandatory security on Bob -- constraining what Bob can do.  What Bob chooses 
to do -- discretionary security -- is still best understood as originating 
with Mitnick rather than with MarkM.
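
A similarly toy sketch of that last point: MarkM imposes a mandatory limit 
by handing Bob a read-only facet of the diary rather than the diary itself; 
what Bob does within that limit is decided by the program Mitnick wrote.  
(Python again; all the names are invented for illustration.)

    class File:
        def __init__(self, contents):
            self.contents = contents
        def read(self):
            return self.contents
        def write(self, text):
            self.contents = text

    class ReadOnlyFacet:
        """Mandatory security: only read() ever crosses this boundary."""
        def __init__(self, file):
            self._file = file
        def read(self):
            return self._file.read()

    def bob(facet):
        # Mitnick's code: its choices are discretionary, and they are
        # Mitnick's choices, but only moves the facet allows exist for it.
        return facet.read().upper()    # it cannot write the diary at all

    diary = File("dear diary")
    print(bob(ReadOnlyFacet(diary)))   # Bob computes, but cannot modify diary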


>In general, real users are imperfectly consistent in applying
>voluntary policies, it is therefore sometimes necessary to engineer
>systems that apply them without allowing the user volition where
>extremely sensitive content is concerned.

You're hinting at more than you're saying, and I'm not succeeding at reading 
between the lines.


>Trusted Program: 
>
>   A trusted program is one whose operation has been assured (usually
>   by hand) and is known not to misbehave.  That is, it is a program
>   that we know obeys the intent of its user.  Such assurance requires
>   both inspection of the program and constraints on the integrity of
>   the environment in which it executes.
>
>   In general, assuring programs is exceedingly expensive.  In secure
>   systems, one therefore seeks to minimize the number of trusted
>   programs, and assumes that all other programs are untrusted.
>   Security policies must be enforced based on the assumption that the
>   majority of programs are untrusted.
>
>   A trusted program is a program that can be run safely without
>   external controls.

This may be a distraction from the points you're making, but it's important.

There is an important subjective dimension that the above phrasing skips: "a 
program that *we* know obeys the intent of its user."  We who?  Different 
parties will make different choices about which assurance processes they 
trust, and this is as it should be.  There is no one party, process, or 
assurance that everyone will ever trust, I hope, because none is that 
trustworthy.  Hence the importance of the separate-TCB model of distributed 
security.

Also, I believe that "TCB" is conventionally used for those parts of a 
system that your security is necessarily wholly dependent on (vulnerable 
to), whether or not they have gone through any such assurance you can trust.  
Intel's x86 chip is in your TCB, but you have no reason commensurate with 
your description above for trusting it.


>** CAPABILITIES AND ACLS IN THESE TERMS
>
>Capabilities are a particular family of decision rules.  given a
>capability {o, sig}, where /o/ is some object and sig is a set of
>operations on things of type /o/, the decision procedure is defined
>in one of the following two ways (depending on the specific system)
>
>            [implicit use design:]
>            allow(p, o, op) :=
>                 (exists SIG s.t. OP in SIG) and
>                 ({O, SIG} in p)
>
>            [explicit use design:]
>            allow(p, c, o, op) :=
>                 (exists C in caps(P)) and
>                 (OP in sig(C)) and
>                 (O == object(C))

I didn't get either of these.  What's "p"?  Does this provide for 
polymorphism?  In object terms, the difference between message selector and 
method.  In KeyKOS terms, the difference between order code and the behavior 
dispatched to.  In both cases, the same selector/order-code sent to 
different receivers will provoke different behaviors.  I'm worried that your 
notion of "operation" does not allow this.
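
Here is the kind of thing I have in mind, as a toy sketch (Python; the class 
and selector names are inventions for illustration): the same selector, sent 
through capabilities to two different receivers, provokes two different 
behaviors, and the allow-style check is per capability rather than per 
globally defined operation.

    import time

    class FileObject:
        def __init__(self, contents):
            self._contents = contents
        def read(self):
            return self._contents       # one receiver's meaning of "read"

    class ClockObject:
        def read(self):
            return time.time()          # another receiver, same selector

    class Capability:
        """Pairs a receiver with the selectors this holder may invoke."""
        def __init__(self, receiver, selectors):
            self._receiver = receiver
            self._selectors = frozenset(selectors)
        def invoke(self, selector, *args):
            if selector not in self._selectors:
                raise PermissionError(selector)   # the allow()-style check
            # Dispatch is the receiver's business: what "read" *does* is
            # decided by the object behind the capability, not by the rule.
            return getattr(self._receiver, selector)(*args)

    file_cap = Capability(FileObject("hello"), {"read"})
    clock_cap = Capability(ClockObject(), {"read"})
    print(file_cap.invoke("read"))    # "hello"
    print(clock_cap.invoke("read"))   # the time -- same selector, new behavior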


>WHEN CAPABILITY PERSISTENCE IS NECESSARY
>
>Capabilities are used as an efficiency in POSIX systems.  File
>descriptors are (approximately) capabilities.  The question is
>therefore not "do we need capabilities", but rather "when do we need
>to store capabilities?"

Before I had ever learned about capability operating systems, I sort of got 
there independently (the ORSON project, which a few old timers on the list may 
remember) by (among other things) thinking about Actors and Smalltalk on the 
one hand, and thinking about Unix file descriptors on the other hand.  File 
descriptors sort of had most of the important properties of object 
references: encapsulation (anything could be on the other side of one, with 
state you couldn't access), polymorphism (what it actually did in response 
to a read() or write() request was its business), and a first-class 
anonymous nature (they just exist with a local binding, but no global name).  

But two things were missing.  The obvious one: their protocol was fixed.  
One couldn't define file-descriptor-like things that responded to more than 
read() and write().  Not fatal, because you could write() data describing a 
request.  The fatal one: When you wrote stuff (using write()) over a file 
descriptor, that stuff could only be data -- you couldn't pass file 
descriptors.  Because of this lack, user-processes could not act like 
directories, because they could not respond to an open() request by passing 
back a file descriptor.
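
To illustrate the missing piece, here is a toy sketch (Python; invented 
names, not any Unix or KeyKOS API) of a user-level directory that answers 
open() by handing back the reference itself -- exactly the move a 
bytes-only write() channel cannot express:

    class ReadableFile:
        def __init__(self, contents):
            self._contents = contents
        def read(self):
            return self._contents

    class UserLevelDirectory:
        """A user process acting as a directory."""
        def __init__(self, entries):
            self._entries = entries
        def open(self, name):
            # Capability passing: the result *is* a new reference, not
            # bytes describing one.  A write()-only channel can carry
            # only the latter.
            return self._entries[name]

    etc = UserLevelDirectory({"motd": ReadableFile("welcome\n")})
    fd_like = etc.open("motd")        # a reference crosses the boundary
    print(fd_like.read())             # "welcome"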

So you might first ask "do we need to pass capabilities?".  However, to my 
mind, it isn't a capability if it isn't passable by capability rules (as 
described in the Ode paper) -- whether or not the capabilities are 
persistent.  I believe this is consistent with historical usage.  Mach's 
ports and Spring's whatever are passable by capability rules and are 
considered capabilities, even though they do not persist.  
Unix's file descriptors are not passable by capability rules and are not 
normally considered capabilities, for which we should be grateful.

I go on about this because, even though I did not understand your earlier 
formalism, it did not seem to imply the capability passing rules.  I could 
of course be confused.


>Another way to say that is this: if all we want is setuid programs, we
>can associate attributes with the file image from which the program
>runs, either in the form of a setuid bit or in the form of a side
>table of exceptional authorities (as in VMS, or using cryptography as
>in Java).

I didn't understand the connection between Java crypto and the rest of this 
paragraph.


>It is fairly easy to convince people based on arguments about
>functionality that per-process-instance authorities are desirable.
>For a simple example, look to wallet programs.  You clearly want your
>Schwab trading program to use a different trading wallet than your
>Quicken program.  Thus, the requirement for persistent capabilities or
>some closely comparable representation can be based on arguments of
>function rather than security.

Isn't the argument about Schwab vs Quicken wallets a security argument?
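
Either way, the shape of per-process-instance authority in capability terms 
looks something like this toy sketch (Python; Wallet and TradingProgram are 
invented names): each instance is endowed at instantiation with a capability 
to its own wallet, and can name no other.

    class Wallet:
        def __init__(self, balance):
            self._balance = balance
        def withdraw(self, amount):
            if amount > self._balance:
                raise ValueError("insufficient funds")
            self._balance -= amount
            return amount

    class TradingProgram:
        """Holds only the wallet it was handed; it cannot name any other."""
        def __init__(self, wallet):
            self._wallet = wallet
        def buy(self, price):
            return self._wallet.withdraw(price)

    schwab = TradingProgram(Wallet(1000))   # one instance, one wallet
    quicken = TradingProgram(Wallet(50))    # another instance, another wallet
    print(schwab.buy(200))                  # draws only on the Schwab wallet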


>Okay, a few requirements and I'll get on to the challenge problems
>where we can enjoy a controversial discussion:

I think "requirement" is a bad term here.  "Desire" is perfectly adequate, 
and allows us to make tradeoffs as we notice that our desires conflict.


         Cheers,
         --MarkM