[cap-talk] Why protected capabilities matter
Jonathan S. Shapiro
shap at eros-os.com
Mon Jul 16 22:33:28 EDT 2007
On Mon, 2007-07-16 at 14:22 -0700, Jed Donnelley wrote:
> Jonathan S. Shapiro wrote:
> > 1. If capabilities are data, it is impossible to ever know what
> > capabilities a program holds. They can be hidden in encrypted or
> > compressed state. If you cannot know what capabilities a program holds,
> > then you cannot know what communications that program may undertake, and
> > you cannot trust the program.
> While I appreciate this argument, if a program has even a single
> capability to some uncontrolled entity then of course you have no
> way to know what real authority the program holds in any case. Also,
> as mentioned elsewhere, I'm not sure how much real value this "knowing"
> provides.
I missed the statement you made elsewhere, but I'm skeptical. The
ability to have subsystems that are "trustworthy by environment
construction" seems critical, since programs of any other sort need
inspection and/or verification. My sense is that confinement is pretty
central to this approach.
To borrow from James' argument: knowing that my editor is confined, and
that only my shell has access to the power box, is a very powerful
defense against hostile incursions of many sorts.
In addition to the technical security argument, the discipline of
constructing systems in this fashion is beneficial in its own right.
> > 2. If capabilities are data, they may be transmitted by covert data
> > channels.
> Sorry, but I don't understand why you referred to "covert" data channels in
> the above. If capabilities are data, can't they even be transmitted
> over overt
> data channels?
Certainly. The reason I raise the distinction is that overt channels can
be restricted, while covert channels cannot. For this reason, the
capability proxy case that concerns me is not "proxy over overt data
channel" but rather "proxy over covert channel". The latter is much
harder to address, and these days can be surprisingly high bandwidth.
> > Protected capabilities resolve both problems *up to* covert proxies
> > (that is: we still need to deal with the covert proxy problem). The
> > mechanism of protection may be partitioning (including type
> > enforcement), user/supervisor separation, or other mechanisms. The key
> > requirement is that user-mutable data is never interpreted as authority.
> Just above is the sentence I'd like to better understand. What is it
> that is "key" about not treating user-mutable data as authority? Isn't
> the "key" factor to the values that you are seeking that capabilities
> (permissions) are kept under tight control by the TCB and only named
> by the running program?
Subject to my later qualifier in another message about accessor
capabilities to immutable state, I believe that your statement captures
the requirement for confinable capability systems.
Note that object reference assignment does not constitute user mutation
in the sense that I mean.
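The distinction being drawn can be sketched in code. The following is a
hypothetical illustration (the `Kernel` and `File` names are invented, not
from any real system) of a protected c-list: a process names its
capabilities only by small integer indices, the table itself lives in the
TCB, and no user-mutable bytes are ever interpreted as authority.

```python
# Hypothetical sketch of a protected c-list. Processes hold only small
# integer indices; the kernel-side table maps index -> object, so
# user-mutable data can never be forged into authority.

class Kernel:
    def __init__(self):
        self._clists = {}          # process id -> list of kernel object refs

    def new_process(self, pid):
        self._clists[pid] = []

    def grant(self, pid, obj):
        """TCB-only operation: install a capability, return its index."""
        clist = self._clists[pid]
        clist.append(obj)
        return len(clist) - 1      # the index is all the process ever sees

    def invoke(self, pid, index, method, *args):
        """The only way a process exercises authority: by index."""
        obj = self._clists[pid][index]   # fails if never granted
        return getattr(obj, method)(*args)

class File:
    def __init__(self, data):
        self.data = data
    def read(self):
        return self.data

kernel = Kernel()
kernel.new_process("editor")
idx = kernel.grant("editor", File(b"hello"))

# The process can exercise a capability it was granted...
assert kernel.invoke("editor", idx, "read") == b"hello"

# ...but fabricated data is not authority: an index that was never
# granted simply fails inside the TCB.
try:
    kernel.invoke("editor", idx + 1, "read")
except IndexError:
    pass
```

Assigning the object reference into another c-list slot (via `grant`) is
exactly the "object reference assignment" that does not count as user
mutation: the mutation happens inside the TCB, not in user-visible data.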
> Do you consider this equivalent to the statement that user-mutable
> data is never interpreted as authority?
For confinement to be supported, yes, that seems to be a requirement.
Various middle grounds can be constructed using approaches similar to
password capabilities, but these are not confinable.
> > 1. You have a confined application.
> By "confined" I assume you mean that explicit communication (I'll also
> avoid covert channels, so no worries there) must be enabled by known
> capabilities (e.g. in a c-list)? Even at that, of course, even one such
> capability, e.g. to a remote object, can provide means for further
> expansion of authority without explicit capabilities (permissions)
> showing up in its c-list.
Yes, that is what I mean. Your comment about one remote capability is
not relevant, since such a subsystem is not, by definition, confined.
Hmm. There is a valid point that you are making. Within the strict
definition, a subsystem can be confined without being observably
confined. I suppose my view is that a program can only be "trustworthy
by virtue of constrained environment" if it is observably confined, and
I therefore tend to disregard the non-observable case. From the strict
standpoint of definitions I suppose that I am misusing the term.
Question: are there any circumstances in which a non-observably confined
subsystem is actually useful? I do not buy your argument about network
vs. local, because there are too many ways to use remote servers to
support local intercommunication between subsystems that are intended to
be mutually isolated. Such things are frequently seen in the wild --
it's not a hypothetical concern.
> > Because it is confined, you know
> > that its misbehavior is limited to resource attacks, and these
> > can be controlled if the resource management scheme is sensible.
> Can you explain what you mean by a "resource attack", or perhaps point
> to something that explains what you mean? Are you referring to something
> like denial of service attacks? I think not, but I'm not really sure
> what you mean with that "resource attack" phrase.
Example: in order to execute, a confined subsystem must be granted some
right to allocate storage and/or make use of CPU cycles. These rights
cannot be inherent in the confined subsystem -- they must be provided at
or after subsystem startup.
In this scenario, the confined subsystem remains restricted by whatever
resource limits the resource abstraction mechanism imposes, but it is
necessarily true that we over-grant such resources in most cases. While
the subsystem remains confined, it is able to fully allocate any
resources whose allocation is permitted. Coupled with over-provisioning,
this provides a basis for denial of resource attacks.
I raised this mostly for the sake of completeness. Note that it is a
sufficient defense to ensure that your subsystem kill agent runs from
committed resource reservations.
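The defense mentioned above can be sketched as follows. This is a toy
model with invented names (`Reservation`, the limits), not any real
system's resource abstraction: allocations draw from explicit
reservations, and the kill agent's reservation is committed separately up
front, so a confined subsystem that fully consumes its own over-granted
reservation cannot starve the agent that revokes it.

```python
# Sketch of committed resource reservations. A confined subsystem can
# exhaust at most its own reservation; the kill agent draws from a
# separate, committed pool and so always remains runnable.

class Reservation:
    def __init__(self, limit):
        self.limit = limit
        self.used = 0

    def allocate(self, n):
        if self.used + n > self.limit:
            raise MemoryError("reservation exhausted")
        self.used += n

# Over-provisioned grant to the confined subsystem...
subsystem_rsrv = Reservation(limit=1000)
# ...and a separate reservation committed to the kill agent.
kill_agent_rsrv = Reservation(limit=10)

# Denial-of-resource: the subsystem fully allocates everything it was
# permitted to allocate.
subsystem_rsrv.allocate(1000)

# The kill agent is unaffected, because its reservation was never shared.
kill_agent_rsrv.allocate(5)
```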
> > 2. In order to allow this "safe by virtue of confinement" subsystem to
> > save/load files (and other potentially dangerous actions), you give
> > it a capability to the power box.
> Of course it may have capabilities to some files (e.g. to save/load
> state) as part of its initialization (I assume).
Depends on what you mean by initialization. If you mean that these are
provided by the instantiating program at startup time, then sure. If you
mean that there is some mutable resource that is automatically provided
to the subsystem, not authorized by the instantiator, then no, such a
subsystem is certainly NOT confined.
This raises some interesting issues about configuration bootstrap.
Fortunately the bootstrap problem can be solved without violating the
kind of confinement that I am after.
> > 3. The entire approach relies on knowing that the only way to
> > exercise these potentially dangerous operations is by asking
> > the power box.
> > Note that in the absence of confinement there is no point to a power box,
> > because we must assume conservatively that the program already possesses
> > dangerous outward communication authority.
> I disagree with the above statement. In the absence of confinement I
> agree that you have to assume that the program already possesses
> dangerous outward authority, but you may still wish to limit its
> access to dangerous local authority.
Since dangerous outward communication authority is trivially
bootstrapped into dangerous *inward* communication authority, I'm not
sure this distinction is buying you anything.
The problem is that once you poke a hole in the subsystem perimeter you
lose transitive control, and you no longer really know what is possible.
> > It is possible to introduce cryptographic monikers that serve as durable
> > object references. Several systems have this. These are not
> > capabilities, because *use* of these monikers entails an independent
> > session authentication protocol; the capabilities are only meaningful
> > within the context of the authenticated session. In reality, these
> > monikers are not capabilities at all. Conceptually, they are names of
> > entries in a per-session capability table that exists on the target
> > system.
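The distinction in the paragraph above can be made concrete. The sketch
below uses invented names (`Server`, `register`, `bind`) and a stand-in
authentication step: a cryptographic moniker is durable, transmissible
data, but it confers nothing by itself; it only names an entry that must
first be installed in a per-session capability table on the target system.

```python
# Sketch: cryptographic monikers as *names*, not capabilities. Authority
# exists only inside an authenticated session's capability table.
import secrets

class Server:
    def __init__(self):
        self._objects = {}    # moniker -> object (durable names)
        self._sessions = {}   # session token -> per-session cap table

    def register(self, obj):
        moniker = secrets.token_hex(16)
        self._objects[moniker] = obj
        return moniker        # durable; may travel as ordinary data

    def authenticate(self, credentials):
        # Stand-in for an independent session authentication protocol.
        token = secrets.token_hex(16)
        self._sessions[token] = {}
        return token

    def bind(self, token, moniker):
        # The moniker becomes usable only within this session.
        self._sessions[token][moniker] = self._objects[moniker]

    def invoke(self, token, moniker):
        return self._sessions[token][moniker]

server = Server()
m = server.register("the object")
t = server.authenticate("alice:secret")
server.bind(t, m)
assert server.invoke(t, m) == "the object"

# Holding the moniker alone, without an authenticated session, yields
# nothing -- which is why the moniker is not itself a capability.
try:
    server.invoke("forged-token", m)
except KeyError:
    pass
```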
> I'm glad you mentioned the above. Perhaps you put the public key
> mechanism described in:
> into the above category of a system that 'conceptually <keeps> names
> of entries in a per-session capability table'?
I don't believe so, for the reasons I identified above. If I understand
it correctly, I have to classify #s13 as an unprotected capability
system.
> I believe there is an opportunity in such schemes to significantly
> simplify the TCB of capability systems (though I've not seen it done
> except by giving up confinement, e.g. as NLTSS did)...
I agree. But I think that if you are prepared to give up confinement you
have given up the primary structuring value of capability systems. At
that point I think we might as well go back to running Windows.
> ...while still supporting what I consider the most relevant protections
> of capabilities as descriptors, including confinement, but not
> including the ability to easily identify all external permissions
> possessed by a domain...
I cannot reconcile "including confinement" with "inability to identify
external permissions". More precisely, I cannot reconcile "observable
confinement" with "inability to identify initial permissions", and I
cannot reconcile "non-observable confinement" with any practical use.
> Regardless of that issue, I'd certainly like to see these concepts
> clarified and examples given in a public place. What's wrong with
> Wikipedia for that purpose?
Wikipedia may be a good place to do this, but I don't have a lot of
energy for that.
Jonathan S. Shapiro, Ph.D.
The EROS Group, LLC