[cap-talk] Re: "capabilities" as data vs. as descriptors - OS security discussion, restricted access processes, etc.
jed at nersc.gov
Tue May 4 01:16:23 EDT 2004
At 08:58 PM 5/3/2004, David Hopwood wrote:
>Jed Donnelley wrote:
>>At 08:51 AM 5/1/2004, David Hopwood wrote:
>>>Jonathan S. Shapiro (by way of Mark S. Miller <markm at caplet.com>) wrote:
>>>>On Mon, 2004-04-26 at 23:18, Jed Donnelley wrote:
>>>>>>>I'm not sure what you mean by the "safety problem".
>>>>>>Harrison, Ruzzo, and Ullman 1976.
>>>>I think it was Communications of the ACM. It's not on-line, to my knowledge.
>>>It is on-line at
>>Ah yes, I remember it now. Thanks. I've reviewed it to at least pick up
>>on that "safety" terminology.
>>Regarding the issue of the "safety problem" as discussed in the above
>>paper I can see that it is again referring to the issue that has come up
>>several times in this thread. Namely the attempt to share a resource access
>>right from one process (domain) to another in such a way that the receiving
>>process is unable to share it further with other processes that it can
>>communicate with.
>I don't think that's an accurate characterisation of either "safety", or
>the ability to support confinement that we've been discussing in the rest
>of this thread.
>"Safety" (which is a misnomer, but I'll stick with the HRU terminology)
>is defined in the above paper as follows:
># We shall now consider one important family of questions that could be asked
># about a protection system, those concerning safety. When we say a specific
># protection system is "safe," we undoubtedly mean that access to files
># without the concurrence of the owner is impossible. However, protection
># mechanisms are often used in such a way that the owner gives away certain
># rights to his objects. Example 4 illustrates this phenomenon.
>[Example 4 is about Unix 'group' and 'world' permissions.]
># In that sense, no protection system is "safe," so we must consider a weaker
># condition that says, in effect, that a particular system enables one to
># keep one's own objects "under control." Since we cannot expect that a given
># system will be safe in the strictest sense, we suggest that the minimum
># tolerable situation is that the user should be able to tell whether what
># he is about to do (give away a right, presumably) can lead to the further
># leakage of that right to truly unauthorized subjects. As we shall see,
># there are protection systems under our model for which even that property
># is too much to expect. [...]
We both read the same text. It seems to me that what they are discussing is
the "further leakage" of rights given out (passed to a program) - confinement
not in the sense of data but in the sense of rights. Or, as they summarize the
notion of safety earlier in their paper, "Basically, safety means that a
subject cannot pass a right to someone who did not already have it".
In my view, if you have an unreliable subject holding a right that you worry
it may inappropriately pass to someone who does not already have it, then you
already have a problem (the unreliable subject with the right). Trying to
restrict it from passing that right to "someone" else (who, in my view, can
be considered part of the computation done by the 'unreliable' subject) is
not, I believe, helpful.
>The relevance of this to confinement, is that confinement gives a way of
>expressing that programs outside a particular subsystem are definitely
>"truly unauthorized" for some set of rights.
>In typical ACL systems, OTOH, there is no way to express this: any
>program run by a user can transfer a right owned by that user to any other subject.
That is so only if you think of ACL systems as restricted to having users
(people) as the subjects. The ACL systems in DCCS and Managing Domains are
clearly not in that category; in those cases the subjects are computers
(CCSs in DCCS) or processes (Managing Domains).
>So the user is not in control of whether the authority
>they grant can be "leaked" further; this control is shared among all
>the programs that they run.
From my perspective the user is not in control of something much more
fundamental - namely the ability to limit the rights of the process running
on their behalf to begin with. If a process run on behalf of a user has its
own identity, separate from that of the user, and the user must grant to that
identity any rights the process needs, then one could both limit the rights
of processes run on behalf of a user and ensure that any such process could
only share rights that it already had (only those for which IT is on the
ACL). (I don't recommend this; I am just trying to get at the fundamental
issues, perhaps terminology, that seem to be separating our focus in this
discussion.)
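As a rough sketch of what I mean (my own toy notation, not the actual DCCS
or Managing Domains mechanisms - the identity strings are made up for the
example):

    # ACLs whose subjects are process identities, not human users.
    acl = {"address_book": {"alice"}}

    def grant(granter, grantee, obj):
        # A subject may extend an ACL only for objects for which
        # IT is already on the ACL - it can share only what it has.
        if granter not in acl.get(obj, set()):
            raise PermissionError(granter + " has no right to " + obj)
        acl[obj].add(grantee)

    grant("alice", "editor-5678", "address_book")        # POLA grant
    grant("editor-5678", "mailer-1234", "address_book")  # re-shares a right it holds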
>The HRU model correctly treats these programs
>as separate subjects that may be "unreliable" (for example because they
>are confused by unexpected input), and also correctly distinguishes between
>these unreliable subjects, and "truly unauthorized" subjects that the user
>does not intend to have the authority. This maps directly onto the situations
>that cause most real-world security problems -- for example, an e-mail client
>should be considered unreliable, but should still be granted ability to read
>a user's address book, whereas a Trojan Horse e-mail attachment would be
>"truly unauthorized" to read the address book.
In my view each program must at least be granted the rights to the resources
that it needs to do its work. It should be granted no more (POLA). With the
rights it needs, it can mess up those resources - either directly or, as I
argue it should be able to do in order to implement its work, indirectly.
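In capability terms the same idea might look like this sketch (again
illustrative code, not any particular system): hand the program a reference
to exactly what it needs and nothing more.

    import io

    # An attenuating facet: a capability carrying only the "read"
    # right to the underlying object.
    class ReadOnlyFacet:
        def __init__(self, underlying):
            self._underlying = underlying

        def read(self):
            return self._underlying.read()

    # The program is handed the facet, not the file itself
    # (the address-book contents here are illustrative):
    facet = ReadOnlyFacet(io.StringIO("alice: alice@example.org"))
    # run_program(facet)  - a hypothetical launcher; the program
    # receives only the "read" right, and can share only that.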
If we can get to a POLA world I believe we will have addressed by far
the majority of the relevant problems. I don't believe that authority
confinement (limiting rights communication by a process that can
already communicate data) adds significantly to the value of
a secure computing model, but I do believe that attempting to
limit such authority communication significantly complicates
any effort at a network capability model and implementation. Therefore,
as I say, it is part of the problem and not part of the solution.
>It is often taken as an unquestioned principle that when reasoning about
>security, it is always "safe" or "conservative" to assume that programs
>outside the TCB are hostile, even though they may not be in practice.
>IMHO this is dangerously wrong, and is responsible for much confused thinking
>about security models. In this case, it leads to an incorrect conclusion. If
>we assume that all programs outside the TCB are maximally hostile, then there
>is no point in limiting direct transfer of authority, because a hostile
>program would bypass this by proxying the authority. If, OTOH, we make the
>more realistic assumption that the programs are partially trusted, but may
>have exploitable security flaws, then it becomes obvious that *how* they can
>transfer the authority they are given is important. If they can only transfer
>it by proxying, and a particular program in fact does not act as a proxy,
>then it does not transfer authority. Although it is possible to
>unintentionally act as a proxy as the result of a security flaw, that
>appears to be much less common than other kinds of flaw which would be
>prevented by confinement.
To me it seems nearly as unlikely that a program would inadvertently pass
a right as that it would inadvertently proxy a right. More likely, if we
worry about inadvertent misdeeds, would be misplaced direct access. In that
case, if you are able to enforce POLA, you are doing about as well as
you can. I believe that if we can achieve POLA, even with the ability of
a process with a right to share that right, we will be in a far more secure
world than we are in today, where processes run with all the rights of their
human initiators (users). Trying to limit the communication of a right that a
process must have by POLA continues to seem counterproductive to me - in a
sense, more trouble than it is worth.
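For concreteness, "proxying" here means something like the following sketch
(illustrative Python, with a capability modeled as an object reference): a
process that holds a right can always stand in for it over any data channel
it already has.

    # A process that holds a capability can always re-export it:
    # requests arriving over an ordinary data channel are forwarded,
    # so the effective authority moves even though the right itself
    # was never passed directly.
    class Proxy:
        def __init__(self, held_capability):
            self._cap = held_capability

        def read(self):
            # exercise the held right on behalf of any caller
            # that can reach this proxy at all
            return self._cap.read()

Confinement can block the first, direct form of transfer but not the second,
which is why I weigh its value as I do above.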