[cap-talk] Re: "capabilities" as data vs. as descriptors - OS
security discussion, restricted access processes, etc.
jed at nersc.gov
Thu Apr 29 22:20:59 EDT 2004
At 04:12 AM 4/27/2004, Jonathan S. Shapiro wrote:
>On Mon, 2004-04-26 at 23:18, Jed Donnelley wrote:
>> > 3. Stuff that is presumptively hostile. The right answer for this
>> > class of stuff is not to give it any capabilities you care
>> > about. This restriction can be supported by a reference monitor
>> > in a descriptor-based system, but not in a bit-based system.
>> Why not? In what you refer to as a "bit-based" system you can
>> keep from passing capabilities you care about to the program.
>Only if a cooperative program is doing the passing. If a hostile program
>wishes to pass those bits, it can obscure them.
Correct. If a hostile program is given a right (e.g. as a capability)
it can pass that right to any other program that it can communicate
with. This is unavoidable.
>While the proxy issue is real, the limitations on bandwidth that it
>imposes are actually useful.
I don't consider limitations on bandwidth useful. From my perspective
you seem to be focusing on a problem that I consider obscure.
Why not focus on a problem that we can all agree is not obscure and
cries out to be solved - namely the ability and mechanisms (including
user and application interfaces) to give running programs only the
rights they need to do the computing they are asked to do.
Once a program is given a right I consider it to have the ability to
make that right available to any other program. You may wish to
spend time trying to frustrate that ability, but I believe it should instead
be encouraged and made easy, clean, and simple to carry out.
>> > RPC proxy over covert channels cannot be beaten effectively in
>> > either design, but can be managed via the above techniques.
>> We agree about that. How about what you call RPC but over overt channels?
>RPC over overt channels is permitted communication, so there is no
>security issue to consider.
>> Suppose my descriptor based capability system is running
>> on the Internet. Let's say it's my Web browser. It is given the right
>> to make network connections to addresses that it picks up in the
>> data it receives - essentially any address. How do I keep it from sharing
>> access to any local resources it is granted rights to?
>You can't. You gave it the right to do so.
We're on the same page there.
>> > > I'm not sure what you mean by the "safety problem".
>> >Harrison, Ruzzo, and Ullman 1974.
>I think it was Communications of the ACM. It's not on-line, to my knowledge.
>> When I read (from:
>> "Harrison, Ruzzo and Ullman proved in 1976 that access list systems cannot
>> prevent a program from disclosing information to anyone in the system. This
>> means that users cannot trust programs with sensitive information unless
>> they can inspect the source code."
>Oops. I might have that year wrong. Yep. Sure enough. The BibTeX entry:
>  author="Michael A. Harrison and Walter L. Ruzzo and Jeffrey D. Ullman",
>  title="Protection in Operating Systems",
>  journal="Communications of the ACM",
>  year="1976",
>> I wonder what is going on. Why is an access list based system any
>> different from a capability based system (or a system like Unix or Windows)
>> in that regard?
>Briefly: because the rules that govern updates to the access graphs are
>not the same, and the rules in ACL systems don't work right.
Are we again talking about programs that are given a right (via an ACL)
being restricted from making that right available to other processes that
they are able to communicate with?
I know there was a huge amount of work put into that area. I consider
that work to all have been a waste of time.
We may disagree about that. However, perhaps we can agree that it is
important to at least have the ability to limit the rights that a program has
more tightly than to just give it all the rights that a human user has.
I believe that even doing just that much, whether with capabilities,
access lists, or even mechanisms like chroot or other Unix hacks, is:
1. Difficult enough just to get the underlying mechanisms into place, and
2. Very difficult (so difficult that it has never in my experience even been
seriously addressed) to get the user and application interfaces into place
that support such restricted access.
I believe that if we could succeed in doing that much with widely used
production operating systems we would have greatly (!) improved the
security situation. In particular we would have done what in my opinion
is essentially as much as one can do to address the classical Trojan
horse problem (embodied as things like executable email attachments,
macros, etc., etc.).
I believe that the most significant difficulty is not in the underlying
mechanisms to restrict access (though I believe the capability
mechanism has huge advantages in that regard), but more in the
interfaces. I say this (as noted in my other message on the parallel
thread) because we had a system with strong underlying mechanisms
for restricting access and, even in an environment where
security was important and where we had almost complete control
of the interfaces, we were unable to get the interfaces implemented
to support even minimal restriction of access rights for programs
run by users.
I believe, therefore, that anybody working on improving the security
of a system like Unix by adding mechanisms like access lists or
capabilities is deluding themselves if they think they are going to
accomplish anything substantive with that work without addressing
what I see as the more important interface issues.
>> I would
>> be interested to see such a "proof". I became quite cynical about efforts
>> to "prove" systems secure...
>I tend to agree, though I have never seen such a proof. What I have seen
>is proofs that a *model* of a system is secure. I have never seen these
>coupled to any formal correspondence argument between the model and any
Wasted effort in my opinion - though far be it from me to restrict how
academics spend their time.
>However, proofs that a model *cannot* be secure are another matter. A
>broken implementation of a broken model can clearly make things worse,
>but it is very useful to know that a perfect implementation still won't
>work. This is the variety of proof considered in the HRU paper.
I find it difficult to imagine how it could be proven that all access list
systems (in fact almost any access list system) are so insecure that a
process holding an access right (limited by an ACL) cannot choose whether
or not to share that right with other processes on the system. If that is
in fact what you believe the Harrison, Ruzzo, and Ullman paper ("Protection
in Operating Systems", Communications of the ACM) to show, then perhaps
it's worth my time to look it up in the library and go through it. I
strongly suspect there is some misunderstanding about assumptions here,
but if that's the only way to get past them it might be an amusing exercise.
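For concreteness, the HRU "safety" question can be sketched as follows: a
protection system is an access matrix plus commands that first test rights
and then perform primitive operations (enter/delete rights, create/destroy
subjects and objects), and "safety" asks whether some sequence of commands
can ever enter a given right into a given cell. A rough sketch in Python
(the names are illustrative, not HRU's notation):

```python
# Minimal sketch of the Harrison-Ruzzo-Ullman access-matrix model.
# HRU proved that, for general multi-operation commands, deciding
# whether a right can ever leak into a cell is undecidable.

class AccessMatrix:
    def __init__(self):
        self.cells = {}  # (subject, obj) -> set of rights

    def rights(self, s, o):
        return self.cells.setdefault((s, o), set())

    def enter(self, r, s, o):
        self.rights(s, o).add(r)

# An example HRU-style command: if s owns o, s may confer the
# 'read' right on another subject s2.
def grant_read(m, s, s2, o):
    if 'own' in m.rights(s, o):          # condition part
        m.enter('read', s2, o)           # primitive-operation part

m = AccessMatrix()
m.enter('own', 'alice', 'file1')
grant_read(m, 'alice', 'bob', 'file1')   # allowed: alice owns file1
grant_read(m, 'bob', 'carol', 'file1')   # no effect: bob lacks 'own'
print('read' in m.rights('bob', 'file1'))    # True
print('read' in m.rights('carol', 'file1'))  # False
```

The safety question is about reachability over all such command sequences,
not about any single command.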
>> >Up through L4x2, an L4 process (they would say thread) wishing to send a
>> >message simply presented the thread-id (an integer) of the destination
>> >process. Since the thread-id is just bits, we can conceptually declare
>> >that it is a data capability.
>> Hmmm. In what sense? Does just knowing the identity of a process
>> confer some rights to it?
>Yes, in the same sense that knowing the data bits of a capability convey
>rights to it.
Hmmm. Perhaps we should clarify the above point a bit. The notion of
"capabilities as data" still includes quite a variety of potential implementations,
e.g. as discussed in:
All include both information in the server of the resource and
information from the process requesting access. One extreme
is essentially an access list mechanism where the server remembers
which processes (this assumes that it can know where requests
come from) have the rights to which resources.
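That access-list extreme can be sketched as a server that keeps its own
table of which (identified) requesters hold which rights, so the request
itself carries no authority. A toy illustration (all names are invented
for the sketch):

```python
# Sketch of the "access list" extreme of capabilities-as-data:
# the server itself remembers which requesting processes hold
# which rights; the request conveys no authority of its own.
# This presumes the server can reliably identify the requester.

class AclServer:
    def __init__(self):
        self.acl = {}  # resource -> {requester: set of rights}

    def grant(self, resource, requester, right):
        self.acl.setdefault(resource, {}).setdefault(requester, set()).add(right)

    def request(self, requester, resource, op):
        # Authorization is a pure server-side table lookup.
        return op in self.acl.get(resource, {}).get(requester, set())

server = AclServer()
server.grant('file1', 'procA', 'read')
print(server.request('procA', 'file1', 'read'))   # True
print(server.request('procB', 'file1', 'read'))   # False
```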
The "capabilities as data" model that I generally have in mind in
a discussion such as this one is something like the "Control by Public
Key Encryption" mechanism described in:
I suggest taking a look at that section. It is on-line and is only
a few paragraphs along with a diagram (if the notation becomes
confusing you may need to refer to the introduction where there
is a note on font restrictions).
With that model the representation of a capability is unique to
the process that holds it. Capabilities are transformed whenever
they are communicated. Even if a capability's representation in
a process's memory space is "leaked" (e.g. viewed in a dump),
such a representation is useless outside that process. Similarly,
by cryptographic means, nearly any modification to a capability's
representation will render it invalid (though we did do some work on
the ability for a process to essentially "sign" a rights reduction
for a capability, to avoid the cost of having to send it back to the
server to have its, let's say, access rights restricted - e.g. to
turn an RW file capability into an R-only one).
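To illustrate the general flavor - and this is a loose keyed-hash sketch
of my own, not the actual mechanism from the referenced section - the
server can derive a check field from the object and rights under a secret
key, and a holder can then reduce rights offline by hashing the reduced
rights into the check field; the server validates by recomputing the chain:

```python
# Illustrative sketch (not the mechanism from the paper): a server
# issues a capability as (object, rights, check), where check is an
# HMAC over the object and rights under a server-secret key.  A
# holder can attenuate rights *offline* by chaining a new HMAC over
# the reduced rights; the server validates by replaying the chain,
# so no round-trip to the server is needed to attenuate.
import hmac, hashlib

SERVER_KEY = b'server-secret'  # known only to the server (assumed)

def _mac(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()

def issue(obj, rights):
    # rights is a string such as "RW"
    return (obj, rights, _mac(SERVER_KEY, f'{obj}:{rights}'.encode()))

def attenuate(cap, new_rights):
    # Anyone holding the capability can derive a weaker one:
    # chain the old check into a new check over the reduced rights.
    obj, rights, check = cap
    assert set(new_rights) <= set(rights)
    return (obj, new_rights, _mac(check, new_rights.encode()))

def validate(cap, full_rights):
    # Server recomputes the full-rights check, then replays the
    # attenuation chain (here, at most one reduction step).
    obj, rights, check = cap
    base = _mac(SERVER_KEY, f'{obj}:{full_rights}'.encode())
    if rights == full_rights:
        return hmac.compare_digest(check, base)
    return hmac.compare_digest(check, _mac(base, rights.encode()))

rw = issue('file1', 'RW')
ro = attenuate(rw, 'R')
print(validate(ro, 'RW'))        # True: a genuine offline reduction
forged = ('file1', 'RW', ro[2])  # attempt to upgrade R back to RW
print(validate(forged, 'RW'))    # False: the check doesn't match
```

Note the asymmetry: reducing rights needs no secret, but restoring them
would require forging the server's HMAC.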
>Ignore the issue of forged authority for a moment. It
>wasn't germane to the issue I was trying to raise.
>> >Because the L4 kernel has no way to validate that assertion of a given
>> >thread-id is legal, the kernel has no way to do authorization checks --
>> >the recipient is forced to filter requests at the computational expense
>> >of the recipient. Note that the burden falls on the wrong party.
>> >Starting in L4x3, we have introduced descriptors for IPC. This doesn't
>> >eliminate the possibility of covert RPC, but it DOES eliminate certain
>> >denial of resource attacks.
>> Perhaps you could give me an example of a denial of resource attack that
>> such descriptors would prevent...
>Process A sends an infinite stream to process B, all of which are
>unauthorized. Because the communication layer cannot prevent this, the
>recipient must act on these messages sufficiently to discard them.
There again from my perspective you are addressing issues that are
down in the noise. On today's Internet any process can send such an
infinite stream (effectively) to any server on the network. That may
be a "problem" in some limited sense. I certainly don't see it as
a security problem per se. It's way down on my list of security issues
to deal with.
>This is in contrast to a covert channel, wherein both parties must
>intend to cooperate.
Most of the covert channel mechanisms that I've seen involve
modifying the consumption of shared resources. A process just
going into a tight loop can comparably adversely affect a system.
Such a comparable problem I also consider outside the scope of
this security discussion.
>The resource consumption of independent processes
>is not altered.
>> When I think of communication (you
>> refer to RPC) I think of the Internet. At some level I have the ability to
>> send/receive messages to and from some sort of network address (data,
>> yes, but not a capability except in possibly a nearly vanishingly limited
>Yes. I understand that you prefer to reason about security by
>predicating your assumptions on a fundamentally broken infrastructure. I
>just don't think that it is possible to make progress in this way.
>> Do you see the problem I'm trying to deal with (rights communication
>> on networks with principle of least access protections) as something
>> you are trying to deal with in your work?
>Not at this time. I'll start thinking seriously about the distributed
>case when I'm satisfied that the simpler (local) case can actually be
>handled. Until then, thinking about the internet is just a waste of time.
>I do think that in the internet context, where message transmission is
>unrestricted, we cannot prevent denial of service.
I don't believe there is any sense in which a realistic service can
be protected from resource consuming denial of service attacks.
From my perspective efforts in that direction are even less helpful
than working to prevent covert channels or to "prove" operating
systems correct (essentially that a model in one language <some
programming language> is faithful to a model in another language <a
meta language about security>).
I believe the network environment is where the security problems relevant
to the present and future lie and that any effort to restrict the domain of
discourse to what you refer to as the "simpler (local)" case is no more
helpful than securing a system by disconnecting it from the network.
So ... it would seem time to pick up on another sub-thread or let my
initial thrust lapse.
Thanks for taking time to correspond on this Jonathan. It's been interesting
reviewing some of this thinking, particularly in light of all the networking
developments that have happened since 1979 when we had our "great
debate" (capabilities as descriptors vs. capabilities as data).
I guess the one dangling part of this discussion is my reviewing the proof
that "access list systems cannot prevent a program from disclosing information
to anyone in the system". If there is any substance to that assertion then
it would seem that any effort to enhance the security of existing systems
by adding access list protections is wasted effort. Since there are some
such efforts underway it would seem to be wise to ensure that whatever
was proved won't preclude those efforts from being effective.
I presume you have read that paper. Do you think what it proves
might actually suggest that such ACL efforts can't be effective
at what they intend (not stopping a process from intentionally
sharing a right, but certainly allowing such processes to limit
such rights communicated to other processes)? If that's the case
then it would seem worth the effort of finding it in some archives.
Perhaps I should even scan it - shining on, as I sometimes do,
old copyright issues with limited distribution of such largely
historical papers. It would be great to get a blanket copyright
release from ACM and IEEE for posting such old papers on the
Internet with appropriate credits. Oh well, one windmill at a time.