Why Capabilities and Persistence are Essential

Jonathan Shapiro shapj@us.ibm.com
Tue, 9 Nov 1999 14:18:34 -0500

In this note, I want to define some terms, explore the conditions
under which capabilities and persistence become essential, and
identify some general requirements.  By persistence, I mean *process*
persistence: the saving and restoring of running process instances,
not merely of files.

For this thread, I am not interested in a debate about relative
merits with regard to security; only in a debate about when the
*mechanism* of capabilities (or some close equivalent) is required for
functional reasons, and when it must be extended to the store.  If we
can come to agreement that capabilities are required independent of
the merit of the security arguments, then our discussion has been
reduced to "... now that we have capabilities, how far can we push
them and what else (if anything) do we need?", which is the subject of
the next two threads.


First, I want to define what protection mechanisms and security
policies are (albeit somewhat informally), and what requirements must
be satisfied by a protection environment.

Protection Mechanism:

  A protection mechanism is a decision procedure.  Given a process, an
  object, and an operation on that object, it answers "yes" or "no" as
  to whether that process is permitted to perform that operation on
  that object.

Security Policy:

  A security policy is a statement about what decision procedure
  outcomes should be possible in some dynamic execution of the system
  (or rather, about which decision outcomes must be false in ALL
  executions of the system, given an initial condition).

  For our purposes, we can ignore security policies that are logically
  nonsensical in considering the merits of a protection mechanism.  We
  cannot ignore security policies that are economically
  differentiated.  It is a reasonable policy objective to alter the
  cost or required specificity of an attack.  In particular, many
  attacks are facilitated by prior knowledge, so understanding the
  potential spread of knowledge (even if we cannot prevent it) is
  itself a useful policy objective.

The security community distinguishes two types of security policy:
DISCRETIONARY and MANDATORY. Discretionary policies are those applied
volitionally by some user. Mandatory policies are those applied
whether the user wants them or not.  Note that discretionary policies
can usually be overcome by viruses, as there is no way to distinguish
between intent expressed by the user via a well-behaved program and
intent expressed by a virus suborning some program that holds
equivalent authority.

Confinement is an interesting middle ground policy.  It is
discretionary in the sense that the user may elect to run a program in
an unconfined fashion.  It is mandatory in the sense that once
started, the program cannot violate the confinement boundary.  The
existence of such a mugwump policy suggests that the whole terminology
framing of security discussions needs a re-examination.

In general, real users are imperfectly consistent in applying
voluntary policies; it is therefore sometimes necessary to engineer
systems that apply them without allowing the user volition where
extremely sensitive content is concerned.

Trusted Program: 

  A trusted program is one whose operation has been assured (usually
  by hand) and is known not to misbehave.  That is, it is a program
  that we know obeys the intent of its user.  Such assurance requires
  both inspection of the program and constraints on the integrity of
  the environment in which it executes.

  In general, assuring programs is exceedingly expensive.  In secure
  systems, one therefore seeks to minimize the number of trusted
  programs, and assumes that all other programs are untrusted.
  Security policies must be enforced based on the assumption that the
  majority of programs are untrusted.

  Put another way: a trusted program is one that can be run safely
  without external controls.


Capabilities are a particular family of decision rules.  Given a
capability {o, sig}, where /o/ is some object and /sig/ is a set of
operations on things of type /o/, the decision procedure is defined
in one of the following two ways (depending on the specific system):

           [implicit use design:]
           allow(p, o, op) :=
                exists {o, sig} in caps(p) s.t. (op in sig)

           [explicit use design:]
           allow(p, c, o, op) :=
                (c in caps(p)) and
                (op in sig(c)) and
                (o == object(c))
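As an illustrative sketch only (in Python, with invented names; real
capability systems implement this check in the kernel), the two
decision procedures above might look like:

```python
from dataclasses import dataclass

# A capability pairs an object with the set of operations it authorizes.
@dataclass(frozen=True)
class Capability:
    obj: str
    sig: frozenset  # permitted operations on obj

def allow_implicit(caps, o, op):
    """Implicit use design: search the process's capability list for
    any capability naming o whose signature includes op."""
    return any(c.obj == o and op in c.sig for c in caps)

def allow_explicit(caps, c, o, op):
    """Explicit use design: the invoker designates capability c, which
    must be held by the process, authorize op, and name o."""
    return c in caps and op in c.sig and c.obj == o

# Example (hypothetical object and operation names):
w = Capability("file1", frozenset({"read", "write"}))
held = {w}
assert allow_implicit(held, "file1", "read")
assert not allow_explicit(held, w, "file1", "unlink")
```

Note that the explicit design makes the invoker name which capability
is being exercised, so no search over the capability list is needed.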

In contrast, ACLs use auxiliary tagging information.  The decision
procedure for a general ACL system is

           allow(p, o, op) :=
                exists u in users(p), exists a in acls(o) s.t.
                   (op in sig(a)) and
                   (u in users(a))

Because of the additional levels of indirection in the ACL decision
procedure (users(a) and users(p)), the two models are not formally
equivalent as they are customarily implemented.
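Continuing the sketch (again with invented names), the ACL decision
procedure exhibits the extra indirection: authority hangs off the
object as (users, sig) entries, and the process is first mapped to the
users it acts for:

```python
def allow_acl(proc_users, obj_acls, op):
    """ACL check: proc_users is the set of users the process acts for
    (the users(p) indirection); obj_acls is a list of (users, sig)
    entries attached to the object (the acls(o) indirection)."""
    return any(op in sig and not proc_users.isdisjoint(users)
               for users, sig in obj_acls)

# Example with hypothetical users and operations:
acls = [({"alice", "bob"}, {"read"}),   # alice and bob may read
        ({"alice"}, {"read", "write"})] # only alice may write
assert allow_acl({"bob"}, acls, "read")
assert not allow_acl({"bob"}, acls, "write")
```

The two set memberships per entry are exactly the indirections that
distinguish this from the capability check, where the held capability
itself carries the authority.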


Capabilities are already used as an efficiency mechanism in POSIX
systems.  File descriptors are (approximately) capabilities.  The
question is therefore not "do we need capabilities?", but rather "when
do we need to store capabilities?"

Stored capabilities are not needed unless process instances are
persistent.  Providing them in the absence of process persistence may
add value, but it is not required to have a useful computing system.
Once processes are saved, however, we require some representation
capturing the decision procedure state that can be tied to particular
process instances rather than to process binaries.  The most efficient
such representation known is capabilities.
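A toy sketch of the point (Python, with invented names; a real system
would checkpoint this state in the kernel): if the process image
carries its own capability list, the decision-procedure state survives
with the *instance*, not with the binary it was launched from.

```python
import pickle

class ProcessImage:
    """Hypothetical persistent process: its capability list is part of
    the saved instance state, not an attribute of its binary."""
    def __init__(self, binary, caps):
        self.binary = binary
        self.caps = caps  # per-instance authority

img = ProcessImage("quicken", [("wallet-A", frozenset({"debit"}))])
saved = pickle.dumps(img)        # checkpoint the instance
restored = pickle.loads(saved)   # the capabilities travel with it
assert restored.caps == img.caps
```

The essential property is only that the saved representation ties
authority to the instance; the serialization mechanism here is
incidental.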

That is: there is a major difference in power between a
system that can say "the password *program* runs with extra
authorities" and a system that can say "this program instance runs
with different authorities than that program instance, even though
both run on behalf of the same user."  Once we wish to be able to say
that, we need capabilities to be stored, and we open the Pandora's box
of issues about whether we will permit their transfer.  [It may or may
not still be desirable to have other protection models -- I haven't
gotten to that.]

Another way to say that is this: if all we want is setuid programs, we
can associate attributes with the file image from which the program
runs, either in the form of a setuid bit or in the form of a side
table of exceptional authorities (as in VMS, or using cryptography as
in Java).

It is fairly easy to convince people based on arguments about
functionality that per-process-instance authorities are desirable.
For a simple example, look to wallet programs.  You clearly want your
Schwab trading program to use a different trading wallet than your
Quicken program.  Thus, the requirement for persistent capabilities or
some closely comparable representation can be based on arguments of
function rather than security.
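The wallet example can be made concrete with a trivial sketch (Python,
invented names): two instances launched from the same binary hold
different wallet capabilities, which a per-binary (setuid-style)
attribute cannot express.

```python
def make_instance(binary, caps):
    """Hypothetical process instance: authority is attached to the
    instance, not to the binary it runs."""
    return {"binary": binary, "caps": set(caps)}

def holds(instance, cap):
    return cap in instance["caps"]

# Same binary, different per-instance wallet authority:
schwab = make_instance("trader", {"wallet:schwab"})
quicken = make_instance("trader", {"wallet:quicken"})
assert schwab["binary"] == quicken["binary"]
assert holds(schwab, "wallet:schwab")
assert not holds(quicken, "wallet:schwab")
```

A setuid bit or side table keyed on the binary "trader" would
necessarily give both instances the same authority.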


Okay, a few requirements and I'll get on to the challenge problems
where we can enjoy a controversial discussion:

Requirement: When things go wrong, we need to know where to look. In
particular, we desire a system where inadvertent transmission of
authority (either through accident or trojan malice) is rapidly
detected and either the offending user is educated about how to avoid
the mistake in the future or the compromised program is expunged.

Requirement: The above requirement demands that some means of
traceability exist.

Requirement: The fundamental requirement of a protection mechanism is
that it be able to support those security policies that are
enforceable and required by the human being(s) holding administrative
authority over the system.

Jonathan S. Shapiro, Ph. D.
Research Staff Member
IBM T.J. Watson Research Center
Email: shapj@us.ibm.com
Phone: +1 914 784 7085  (Tieline: 863)
Fax: +1 914 784 7595