[cap-talk] What are caps good for? "Encapsulation"? POLA
vs. confinement - long, but with some meat
jed at nersc.gov
Mon May 10 18:48:56 EDT 2004
At 08:20 PM 5/7/2004, David Chizmadia (JHU) wrote:
> A few thoughts...
> > At 01:38 PM 5/7/2004, David Chizmadia (JHU) wrote:
> > >Perhaps the better term would be "Security Encapsulation".
> > Would that be referring to confinement or POLA?
> Actually neither. The comment was intended to suggest a
>different term (along the lines of the way that MarkM uses
>"object capability" to distinguish the particular form of
>strong capabilities that he advocates) for the notion of
>"confinement" that I believe most people on this list are
> > I see them as quite distinct.
> As do I. But I believe that they are symbiotically
Perhaps I can clarify my views on that in the discussion
below.
> > 1. POLA refers (for me) to limiting any agent to just
> > the rights that it needs to carry out the task that it
> > has been requested to perform.
I can see the above was one of those situations where
my statement could reasonably be read to include
the right to communicate.
> Indeed. Whether the "A" means Access or Authority, I
>agree with that statement. But certain invariants must
>hold true in the POLA enforcement system for those limits
>to mean anything in practice. One of those invariants is
>that those authorities can only be *given* by an agent,
Hmmm. I certainly agree that it's up to the process that
has the resource access to decide what resource access
it wants to communicate to others. In that sense "given". I
suspect you aren't trying to distinguish between the case of
a process being sent a resource (e.g. in initialization or as
part of an invocation message) vs. a process essentially "asking"
for additional resources (e.g. the open file box dialog that has
been discussed in this extended thread). I expect we agree
that either or any other sort of communication is fine as long
as agents can be confined to POLA by those who make use
of them (either by something like running an application or
by a more distinct mutual-suspicion communication boundary).
> > 2. Confinement refers to the effort to keep any agent
> > from communicating any data or rights beyond itself.
> Hmmm. Therein is our disagreement!
Do you disagree with that definition of "confinement"?
>As Shap explained
>confinement to me in the context of EROS, it is actually
>about ensuring that an agent can only interact with the
>outside world through its specified interfaces - *AND THAT
>THE OUTSIDE WORLD CAN ONLY INTERACT WITH THE AGENT THROUGH
>THOSE SAME INTERFACES*.
Hmmm. I hope we aren't doing what I would consider hair
splitting over things like the owner of a process (e.g. for
debugging purposes) having the right to access a process's
memory space and capabilities. However, beyond that,
for normal communication, I think we agree that the
boundary between processes (domains, agents, whatever
you want to call the communicating subjects) is one of
controlled mutual suspicion. Communication can only
happen when the sender requests communication to the
receiver and the receiver requests communication to the
sender. The only thing communicated is what the sender
sent - including data and rights (which may be the same
thing - in any system).
>This guarantees that the agent has
>sole control over the authorities that it chooses to share
>with other agents.
Except for the "unusual" owner/debugger/etc. of the agent.
If you consider that a significant issue, let me know.
>As an unrelated additional benefit, it
>also provides the creator of the agent's controlling software
>with reasonable assurance that intellectual property embedded
>in that software is protected from disclosure. Notice that
>this definition accounts for wallbanging - which is really
>just another interface - since the wallbanging only works
>with the cooperation of the agent (that is inside). It also
>accounts for proxying, which would also have to happen
>through the interfaces.
From the sound of the above I don't see any areas of disagreement
regarding goals. Regarding terminology (e.g. below) I certainly think
we should not try to usurp a very (!) long-standing term like "confinement".
> I believe that this is a good definition of confinement,
>but this thread has led me to believe that Lampson's
>definition may be unnecessarily more restrictive.
In Lampson's notion and a great (!) deal of related discussion
the term confinement, I believe, means what I stated above:
Confinement refers to the effort to keep any agent from communicating
any data (or rights) beyond itself.
Here I've parenthesized the "or rights" because I don't think that
distinction is typically included in the discussions, but I expect most
would feel that limiting rights communication via confinement is
at least as important as limiting data communication.
The central concept regarding confinement is that:
1. A "requesting agent" chooses to share data and/or
rights with another "serving agent".
2. The requesting agent desires to "confine" the serving agent
in the sense that it wishes to ensure that none of the data
and/or rights passed to the serving agent CAN ESCAPE
THE SERVING AGENT. Even if the serving agent wishes
to further share the data and/or rights (e.g. to communicate
with yet another agent that may be able to help with part of
the requested service), some mechanism in the environment
makes it impossible for the data/rights to be shared further.
"Wall banging" breaks confinement by effecting a so-called
"covert" channel between two wall bangers that wish to
communicate despite an effort being made to stop/restrict
their communication - a channel outside the realm of the
explicit "overt" channels that the system environment
tries to provide.
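The requesting/serving-agent relationship above can be sketched in code. This is a minimal, hypothetical illustration (all class and variable names are mine, not from any real system): the serving agent is constructed so that its only references are the capabilities the requester explicitly passes in, which is the overt-channel half of what confinement tries to guarantee.

```python
class DocCap:
    """A capability to one document: holding it IS the right to use it."""
    def __init__(self, text):
        self._text = text

    def read(self):
        return self._text


class ConfinedWorker:
    """Serving agent: its only references are those passed in at creation."""
    def __init__(self, doc_cap):
        self._cap = doc_cap  # the single right the requesting agent shared

    def word_count(self):
        # The worker can exercise the right it was given...
        return len(self._cap.read().split())
        # ...but it holds no reference to any other agent, so (absent
        # covert channels like wall banging) it has no overt way to pass
        # self._cap onward.


result = ConfinedWorker(DocCap("confine these four words")).word_count()
```

Note that this only shows the overt-channel discipline; as the surrounding discussion says, covert channels are exactly what such a structure cannot rule out.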
>So I am
>proposing that the objected-oriented notion of encapsulation
>is very close to the sense of my definition above, so why
>don't we attach that definition to the term "Security
>Encapsulation"?
> > I argue that POLA is the bedrock upon which all security/
> > integrity mechanisms need to be based.
> I would phrase the sentiment as "POLA is the standard by
>which the usefulness and effectiveness of every security/
>integrity mechanism must be judged". When I'm able to act as
>a security engineer, this is in fact how I apply POLA to my
Hmmm. I wonder if there is some terminology that's still
missing that might help with this communication. Let me try
some and perhaps somebody else can chime in with historical
perspectives. As I recall the most common phrase used in this
context is the notion of "mutual suspicion". That is, Alice has
some rights and Bob has some rights. Alice wishes to make
a request of Bob that for Bob to carry out requires some rights
that Alice has but Bob may well not have. The service may also
require that Bob has rights that Alice doesn't have (including,
for example, a right to communicate in a classical capability
sense - parenthetical phrase added for discussion below).
When Alice makes her request of Bob she sends data/rights
across the communication link. Bob can send data/rights back.
So far, mutual suspicion. Alice is protected from Bob - except
for what she specifically and volitionally chooses to share. Bob is
protected from Alice except for what he specifically and volitionally
chooses to send back.
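A rough sketch of the mutual-suspicion exchange just described (names and structure are illustrative assumptions, not any particular system's API): Alice holds several rights, but per POLA she hands Bob only the one his service needs, and Bob's own capabilities never cross the boundary to Alice.

```python
class Cap:
    """A right, represented as an object reference."""
    def __init__(self, name):
        self.name = name


class Alice:
    def __init__(self):
        # Alice's full authority; Bob never sees this dict.
        self.rights = {"mail": Cap("mail"),
                       "files": Cap("files"),
                       "camera": Cap("camera")}

    def request_service(self, bob):
        # POLA: pass only the right Bob's task requires, nothing more.
        return bob.serve(self.rights["files"])


class Bob:
    def __init__(self):
        self._private = Cap("bob-secret")  # Alice never sees this

    def serve(self, files_cap):
        # Bob exercises exactly what he was handed.
        return "served using " + files_cap.name


outcome = Alice().request_service(Bob())
```

Each party is protected from the other except for what it volitionally sends across the link, which is the point of the paragraph above.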
Where "confinement" comes in is when the question comes up as
to what Bob is allowed to do with the contents of the request that
he receives from Alice. For example, there is this idea of a "don't
share" bit or control on any communicated right. That is, it's OK
for Bob to exercise the right (e.g. apparently Bob is allowed to
communicate to Carol, making a request that exercises the right -
referring to: http://www.waterken.com/dev/YURL/Definition/ ),
but Bob is not allowed to share the right via a message to yet
another agent. That would be rights communication disallowed
by confinement. Similarly with data, except there it is typically
assumed that the Bob agent is not allowed to communicate
outside itself - though how a serving agent like Carol plays in the
mix is a bit difficult to understand (most working on classical
confinement would probably argue that Carol is part of a
TCB - Trusted Computing Base - like a kernel and falls outside
the constraints of confinement).
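The "don't share" idea can be sketched as an exercise-only proxy (again a hypothetical construction, not a real system's mechanism): Bob receives something that lets him invoke the right without ever holding the underlying capability.

```python
class FileCap:
    def __init__(self, text):
        self._text = text

    def read(self):
        return self._text


class ExerciseOnly:
    """Proxy that lets the holder exercise a right without holding it."""
    def __init__(self, cap):
        self._read = cap.read  # keep only a bound method, not the cap

    def read(self):
        return self._read()


cap = FileCap("secret data")
proxy = ExerciseOnly(cap)   # Bob receives only the proxy
reading = proxy.read()      # Bob can exercise the right...
# ...but nothing stops Bob from handing `proxy` itself to Carol, which
# is exactly the proxying objection raised later in this message.
```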
> > POLA must rest on a mechanism for restricting rights
> > communicated to agents.
> I know from your previous comments in this thread that
>this is *NOT* what you meant to say!
No? I think it is. Namely mutual suspicion. Specifically that
Alice can restrict what she chooses to communicate with Bob
when she communicates to Bob. She can pass just those
rights (and data) that meet the requirements of POLA.
Ah, perhaps you interpreted the above to mean that POLA
rests on the ability to restrict the exercise of rights passed
to agents. Then I would say no. Any agent passed a right
certainly needs the ability to exercise it. Otherwise, why
pass the right?
> I think that what you meant to say is that: "POLA must
>rest on a mechanism for restricting the exchange of rights
>among agents to only the existing communications channels
>they share". I suggest that the name for this mechanism is
>"Security Encapsulation": for which there may be multiple
Hmmm. I believe we're worrying terminology, but I'll
continue the effort to try to ensure that no functional
distinction slips through. I do this mostly because I
think there may be an important aspect of the communication
that could be an issue here. Let me clarify.
In many IPC mechanisms, particularly what I
refer to as the classical capability models (including
certainly EROS and KeyKOS, but also including the
original Dennis and Van Horn model as on the PDP-1
and even the "port" models - e.g. Demos and Mach) the
right to communicate with a server is typically inextricably
bound to the right to access a resource serviced by the
server. That is, if I'm a process and my only capability
is that to a single file, I am allowed to communicate to
the file server for that file and no place else.
This model makes a lot of sense. In some ways it
fits the POLA. Namely, if I don't have, say, a directory
capability, why should I have the right to communicate
with the directory server?
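The classical binding of communication rights to resource rights can be sketched roughly like this (a simplified, assumed model - real C-lists in EROS/KeyKOS are kernel data structures, not Python objects): a process names only slots in its own capability list, so holding a single file capability means it can reach that file's server and no place else.

```python
class Kernel:
    """Kernel-managed object table; processes hold only opaque descriptors."""
    def __init__(self):
        self._objects = {}
        self._next = 0

    def export(self, obj):
        self._next += 1
        self._objects[self._next] = obj
        return self._next  # opaque descriptor handed to a process


class FileServerObject:
    def __init__(self, contents):
        self._contents = contents

    def read(self):
        return self._contents


class Process:
    def __init__(self, kernel, clist):
        self._kernel = kernel
        self._clist = clist  # C-list: the process's ONLY channels

    def invoke(self, slot, method, *args):
        # The process can name only slots in its own C-list; with a single
        # file capability it reaches that file's server and nothing else.
        desc = self._clist[slot]
        return getattr(self._kernel._objects[desc], method)(*args)


kernel = Kernel()
fd = kernel.export(FileServerObject("hello"))
proc = Process(kernel, [fd])   # one capability: one reachable server
got = proc.invoke(0, "read")
```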
Contrast this with the typical communication mechanism
on today's computer networks. In TCP networks (at least)
there may be restrictions on communication (e.g. IP address
or port blocking), but generally if a process can communicate
on the network at all then it can communicate "freely" on
the network (analogous to the way processors can communicate
on the network). Certainly there is no binding between
the facility (right?) to communicate with any other higher
level right (e.g. access to a file or directory as above).
I argue (separable from the discussion about interfaces
that I see as distinct from this topic) along the line of
the "the network is the computer" thinking, that what's
suitable (available, possible) at the network level between
processors is also what is suitable at the application
level between processes.
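By contrast, the network-style model treats the right as data carried over an open channel - in the spirit of the YURL definition linked earlier, an unguessable token ("swiss number"). A hedged sketch, with hypothetical names:

```python
import secrets


class TokenServer:
    """Grants rights as unguessable tokens; possession IS the right."""
    def __init__(self):
        self._grants = {}

    def grant(self, resource):
        token = secrets.token_hex(16)  # 128-bit unguessable "swiss number"
        self._grants[token] = resource
        return token                   # plain data; travels over any channel

    def fetch(self, token):
        # No per-channel restriction: anyone presenting the token gets
        # the resource, regardless of how they came to hold it.
        return self._grants.get(token)


server = TokenServer()
t = server.grant("file contents")
fetched = server.fetch(t)
```

Here communication itself is unrestricted; only knowledge of the token confers the resource right, matching the "network is the computer" position argued above.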
I argue even more strongly that any thought about
confinement (i.e. that I can send data/rights to a
process confident in the knowledge that the receiving
process will be unable to further distribute my
data or rights) is ineffective, dangerous, and likely
futile. As we have discussed, a receiving process
can likely wall bang to get out data and proxy to
forward resource access. However, from a practical
viewpoint I believe the more relevant issue arises
from the considerations of how to support confinement
on a network. If the process that I am sending my
message to is running on another computer on
a network, then my message (data and rights)
is received by that other computer. That computer
received all my data and rights. IT has the ability
(even if it tries to restrict communication rights
for the processes that it runs internally) to forward
(redistribute) my data and my rights anywhere that
it is able to communicate (over the "whole" network).
Why should I trust IT (e.g. whatever TCB, kernel,
or network capability mechanism) any more than I
do the server that I'm sending my request to? I
don't believe I should. Both should be subjected to
POLA in the same sense. The only sense that I
believe is relevant is that I can restrict IT (any
agent I make a request of) to use the data and rights
that I sent to it in any way it sees fit. If it chooses
to forward my data and/or rights, that is its choice.
I must trust it with that choice. What I shouldn't
have to trust it with is any data and/or rights that
I don't choose to send to it. That is the sense in
which I believe POLA restrictions are effective in
computer systems - including networks.
One more point on this topic. Consider the situation
that exists if one has multiple divergent but "classical"
(bind communication rights with resource access) capability
systems on a network along with other systems that aren't
classical capability systems (e.g. today's Unix or Windows
systems, or capability as "data" systems like Amoeba
and NLTSS). For a given classical capability system one
can design a communication protocol (e.g. like DCCS:
or like the Mach network server:
) that would allow its capabilities to be shared with another
system of the same type. One could even imagine a protocol
that would allow all classical capability systems to share
capabilities. However, now imagine how such systems would
share capabilities with today's traditional systems or with
systems that implement capabilities as a protocol on
top of data. While the classical capability systems may
be fooling themselves if they think that any notion of
confinement (perhaps what you are calling encapsulation,
David?) is effective even when dealing with other classical
capability systems (the other kernels have a wider ability to
communicate and exercise rights as above), it becomes
even clearer what the issues are when dealing with the other
types of systems.
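The "capability as data" systems mentioned above (Amoeba, NLTSS) protected such capabilities with a check field so they could flow over ordinary data channels. A rough modern sketch of that idea - using an HMAC where Amoeba used its own one-way-function scheme, so treat the details as an assumption:

```python
import hashlib
import hmac

SERVER_SECRET = b"server-private-key"  # known only to the issuing server


def mint(object_id, rights):
    """Issue a capability as plain data: (object, rights, check field)."""
    msg = f"{object_id}:{rights}".encode()
    check = hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()
    return (object_id, rights, check)  # passes over any channel, any system


def validate(cap):
    """Server-side check: was this tuple really minted by us?"""
    object_id, rights, check = cap
    msg = f"{object_id}:{rights}".encode()
    expected = hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(check, expected)


cap = mint(42, "rw")
ok = validate(cap)
forged = validate((42, "rwx", cap[2]))  # amplifying rights fails the check
```

Because the capability is just bits, any kernel or intermediary that sees the message can forward it - which is the interoperability (and anti-confinement) point being made here.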
Such considerations led me to believe that the "right" way
to manage rights communication is a way in which processors
and processes are equivalently trusted. A way in which
a single protocol suffices that can be supported at the
application level and doesn't require any special trust of
a kernel or any separately trusted capability distribution
mechanism. These are the sorts of mechanisms I
restricted myself to in:
While I still think the interface issues are the most substantive
barriers at this time, I also think it unlikely that any
rights communication mechanism will go very far without
being what I would call "faithful" to the nature of networking.
> > I believe that the capability model (though I generalize
> > it to the notion of communicating rights in messages, not
> > tied to any particular implementation like any sort of
> > descriptor model) is the most parsimonious and effective
> > model for POLA restrictions.
> I share this belief.
Let's see if we have enough common ground on what
the "capability model" means after the above discussion.
While I understand and accept that the classical capability
model binds the right to communicate with the right to
resource access, I believe that model cannot be effectively
(as above) extended across a network. I believe the model
that makes sense is one in which it isn't communication
that is restricted by capabilities, but rather rights to
"resources" (OK, I wasn't considering a network a
resource in that sense).
> > I argue that efforts at confinement are ineffective
> > (proxying can violate them whether through overt or
> > covert channels) and counter productive (efforts to
> > restrict an agent from further sharing its resources,
> > again hopefully by POLA, to other subagents works
> > against the very security/integrity model that we
> > hope to foster).
> Using the restricted definition of confinement you
>gave above, I would agree with your statement, but I
>would counterclaim that a failure to implement Security
>Encapsulation in a capability system would prohibit
>effective enforcement of POLA.
Let's see if we understand what such "encapsulation"
means after the above. Namely, does it include the
notion that if I send data and rights to another agent
I can (in any circumstances) assume that, even if it is
untrustworthy, it will not be able to
share my data and rights with others?
> BTW: I think that your notion of the "Inalienable
>Right To Share Authority" is probably misnamed. In the
>context of this discussion, it would be better phrased
>as "The Inevitable Ability To Share Authority" ;-)
In some ways I think the difference is whether you feel
the "right" is desirable or not. Those who believe that it
isn't might prefer the "Inevitable" term. I believe that the
freedom to share rights, specifically to further subdivide
work within separate, mutually suspicious domains, is
an important "right" upon which to base a system of
communicating agents. Without that "right" (which, as you
say is pretty much inevitable anyway), the efforts to restrict
such communication end up making things more complex and
work less well - e.g. at the points where there is a legitimate
(but perhaps unforeseen) need for further sharing. Any
models that try or even consider restricting that right I
believe can never be effectively faithful to a network
implementation. I say, give up the desire, embrace
the network directly and efficiently, and focus on
POLA (without considering communication as a separate right).
Now that I've done all this writing I can see that I may
have been unclear in what I was referring to as POLA
in my other messages. I didn't include communication
rights, but others may well have (quite reasonably).
> There are many cases where it would make life much
>easier to restrict authority sharing, but implementation
>constraints (and those damned user needs ;-) make it
>inevitable that authorities will be shared. The best we
>can hope for is the opportunity to trace the path(s) by
>which they were shared.
Trace the paths? I'm afraid that would get us off into
another whole direction. Remember, I'm considering
the network model. How do you propose "tracing" the
path by which rights are shared? Wall banging can
occur, protocols can change, bits can flow through other
shared resources like storage devices, etc. I believe
such "tracing" is problematic. I prefer to focus on the
problems that can be simply, effectively, and directly
solved. I believe that limiting rights communication to
volitional acts that can adapt to POLA needs (discounting
communication rights) is the best we can hope for in
a network world. Still, I believe that with even that
facility our situation would be so (!) much better than
the situation we are currently in (generally trusting any
agent with all of a "user"'s rights) that I'm doing everything
I can to move things in that direction.