[cap-talk] Re: "capabilities" as data vs. as descriptors - OS security discussion, restricted access processes, etc.
david.nospam.hopwood at blueyonder.co.uk
Mon May 3 23:58:44 EDT 2004
Jed Donnelley wrote:
> At 08:51 AM 5/1/2004, David Hopwood wrote:
>> Jonathan S. Shapiro (by way of Mark S. Miller <markm at caplet.com>) wrote:
>>> On Mon, 2004-04-26 at 23:18, Jed Donnelley wrote:
>>>>>> I'm not sure what you mean by the "safety problem".
>>>>> Harrison, Ruzzo, and Ullman 1976.
>>> I think it was Communications of the ACM. It's not on-line, to my knowledge.
>> It is on-line at
>> Essential reading.
> Ah yes, I remember it now. Thanks. I've reviewed it to at least pick up
> on that "safety" terminology.
> Regarding the issue of the "safety problem" as discussed in the above
> paper, I can see that it is again referring to the issue that has come up
> several times in this thread: namely, the attempt to share a resource access
> right from one process (domain) to another in such a way that the receiving
> process is unable to share it further with other processes that it can
> communicate with.
I don't think that's an accurate characterisation either of "safety" or of
the ability to support confinement that we've been discussing in the rest
of this thread.
"Safety" (which is a misnomer, but I'll stick with the HRU terminology)
is defined in the above paper as follows:
# We shall now consider one important family of questions that could be asked
# about a protection system, those concerning safety. When we say a specific
# protection system is "safe," we undoubtedly mean that access to files
# without the concurrence of the owner is impossible. However, protection
# mechanisms are often used in such a way that the owner gives away certain
# rights to his objects. Example 4 illustrates this phenomenon.
[Example 4 is about Unix 'group' and 'world' permissions.]
# In that sense, no protection system is "safe," so we must consider a weaker
# condition that says, in effect, that a particular system enables one to
# keep one's own objects "under control." Since we cannot expect that a given
# system will be safe in the strictest sense, we suggest that the minimum
# tolerable situation is that the user should be able to tell whether what
# he is about to do (give away a right, presumably) can lead to the further
# leakage of that right to truly unauthorized subjects. As we shall see,
# there are protection systems under our model for which even that property
# is too much to expect. [...]
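To make the quoted definition concrete, here is a small sketch in Python
(my own illustration, not anything from the paper; all names are invented)
of the access-matrix model HRU use. A matrix maps (subject, object) cells
to sets of generic rights, and commands conditionally enter rights into
cells; the safety question for a right r asks whether any sequence of
commands can enter r into a cell that did not already hold it.

from collections import defaultdict

# Access matrix: (subject, object) -> set of generic rights.
matrix = defaultdict(set)
matrix[("alice", "memo")] = {"own", "read", "write"}

def grant_read(m, granter, grantee, obj):
    # An HRU-style conditional command: if the granter owns the object,
    # enter the "read" right into the (grantee, object) cell.
    if "own" in m[(granter, obj)]:
        m[(grantee, obj)].add("read")

# The safety question for "read": can some sequence of commands enter
# "read" into a cell that did not hold it initially?  Here one command
# suffices, so this configuration "leaks" the right in HRU's technical
# sense -- which may be exactly what alice intended.
grant_read(matrix, "alice", "bob", "memo")
assert "read" in matrix[("bob", "memo")]

The paper's result is that for the general model this question is
undecidable, which is what the last quoted sentence alludes to.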
The relevance of this to confinement is that confinement gives a way of
expressing that programs outside a particular subsystem are definitely
"truly unauthorized" for some set of rights.
In typical ACL systems, OTOH, there is no way to express this: any
program run by a user can transfer a right owned by that user to any
other subject. So the user is not in control of whether the authority
they grant can be "leaked" further; this control is shared among all
the programs that they run. The HRU model correctly treats these programs
as separate subjects that may be "unreliable" (for example because they
are confused by unexpected input), and also correctly distinguishes between
these unreliable subjects, and "truly unauthorized" subjects that the user
does not intend to have the authority. This maps directly onto the situations
that cause most real-world security problems -- for example, an e-mail client
should be considered unreliable, but should still be granted the ability to read
a user's address book, whereas a Trojan Horse e-mail attachment would be
"truly unauthorized" to read the address book.
It is often taken as an unquestioned principle that when reasoning about
security, it is always "safe" or "conservative" to assume that programs
outside the TCB are hostile, even though they may not be in practice.
IMHO this is dangerously wrong, and is responsible for much confused thinking
about security models. In this case, it leads to an incorrect conclusion. If
we assume that all programs outside the TCB are maximally hostile, then there
is no point in limiting direct transfer of authority, because a hostile
program would bypass this by proxying the authority. If, OTOH, we make the
more realistic assumption that the programs are partially trusted, but may
have exploitable security flaws, then it becomes obvious that *how* they can
transfer the authority they are given is important. If they can only transfer
it by proxying, and a particular program in fact does not act as a proxy,
then it does not transfer authority. Although it is possible to
unintentionally act as a proxy as the result of a security flaw, that
appears to be much less common than other kinds of flaw that would be
prevented by confinement.
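To make "proxying" concrete, a minimal sketch (illustrative Python,
invented names): a deliberately hostile holder of a capability can always
re-export it as a service, so no restriction on direct transfer stops it;
but a program that merely has bugs and never builds such a forwarder
transfers no authority.

class FileCap:
    def __init__(self, contents):
        self._contents = contents
    def read(self):
        return self._contents

class Proxy:
    # What a *hostile* holder could hand out instead of the capability itself.
    def __init__(self, cap):
        self._cap = cap
    def read(self):
        # Forwards each request to the real capability it wraps.
        return self._cap.read()

secret = FileCap("payroll data")

# A hostile program could hand an outsider Proxy(secret) and so defeat any
# limit on direct transfer; a merely flawed program that never constructs
# such a proxy transfers nothing.
outsider_view = Proxy(secret)
assert outsider_view.read() == "payroll data"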
David Hopwood <david.nospam.hopwood at blueyonder.co.uk>