[cap-talk] Re: "capabilities", reading about "split
capabilities", review, KeyKOS?
jed at nersc.gov
Fri May 7 21:53:15 EDT 2004
At 05:11 PM 5/6/2004, David Hopwood wrote:
>Ian Grigg wrote:
>>David Hopwood wrote:
>>>I have to ask: why is another definition of capabilities needed?
>>>Aren't the definitions in
>>> - Paradigm Regained <http://www.erights.org/talks/asian03/index.html>,
>>That paper seems to define *a* model of object
>>capabilities. For capabilities itself, it seems
>>to refer to DVH.
>There's no essential difference between object capabilities and what
>people usually mean by "capabilities" in general.
>(Split capabilities are different, and Posix capabilities are related
>only in name.)
Since I hope to get consensus on a shared access right communication
protocol, and I hope that to be "capabilities" (as generalized as possible,
but the unit of right communication, unfettered by confinement, with
as low overhead and good protection as practical), I hope to:
a. understand "split capabilities" well enough to address them, and
b. ban "Posix capabilities" from discussion/relevance in so far as
possible as being disruptive/confusing.
But for a. I hadn't seriously looked at "split capabilities" before. Again,
following my effort to reduce what I see as side threads, let me ask
a few questions about split capabilities. For example, when the paper
says (pg. 3):
Traditional capabilities, such as those shown in Alice's capability list
above, have two problems. It is hard to revoke a capability [1, page 149]
because the system has no control over the passing of capabilities from one
user to another. The [number of] capabilities in a system is also a problem
because a capability is needed [for each] separately controlled access right
on each resource. This means that each time a user joins the system, many
thousands of capabilities need to be issued; each time a user leaves, they
must be revoked. Combined, these two problems present a [...]
Of course I'm not sure just what is meant by "traditional capabilities". As we
have seen there seems to be a fairly wide variety of opinion on that. However,
I'll take the capability concept in as broad a view as possible (a communicable
token representing an access right supported by a service process). With
that broad view I don't see either of the above as issues. As I noted,
revocation can be handled easily with a small amount of bookkeeping in the
server ("fork"). While it is true that a capability is needed for each
separately controlled access right on each resource, where is the problem
with that? It isn't like
we're going to run out of bits. I'm afraid I don't understand why it would be
imagined that "This means that each time a user joins the system, many
thousands of capabilities need to be issued; each time a user
leaves, they must be revoked." I am unaware of capability systems in
which the above is true.
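To make the bookkeeping concrete, here is a minimal sketch (illustrative Python, not any particular system's API) of a server that records a parent/child relation when a capability is "forked" for delegation, so any branch of sharing can be revoked without touching the others:

```python
import secrets

class CapServer:
    """Illustrative server-side bookkeeping for revocable capabilities."""
    def __init__(self):
        self._table = {}  # token -> dict(resource, revoked, parent)

    def mint(self, resource):
        token = secrets.token_hex(16)
        self._table[token] = {"resource": resource, "revoked": False, "parent": None}
        return token

    def fork(self, token):
        # A holder forks a child token before delegating; revoking the child
        # (or any ancestor) later cuts off the delegate selectively.
        entry = self._lookup(token)
        child = secrets.token_hex(16)
        self._table[child] = {"resource": entry["resource"], "revoked": False,
                              "parent": token}
        return child

    def revoke(self, token):
        self._table[token]["revoked"] = True

    def access(self, token):
        return self._lookup(token)["resource"]

    def _lookup(self, token):
        entry = self._table[token]
        t = token
        while t is not None:  # any revoked ancestor invalidates the whole chain
            if self._table[t]["revoked"]:
                raise PermissionError("capability revoked")
            t = self._table[t]["parent"]
        return entry
```

Revoking a forked token cuts off just that delegate; revoking an ancestor cuts off everything forked from it, while unrelated holders are unaffected.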
However, I like the footnote on page 3 of the above paper (referring to the
statement, "Scalability is enhanced because capabilities are easily
copied (but not forged, of course), so one user can pass the capability
to others who are trusted."):
#3 "At first glance, this feature appears to defeat security policy. Carol
[trusts] Alice and passes her the capability to read Bob's file. What if we
don't want Alice to read Bob's file? We gain no real security by trying to
prevent such sharing. Carol can always act as Alice's agent, forwarding her
requests to [Bob and] sending Alice the results. As far as Bob is concerned,
[Carol] is making the [requests]. It's best not to try to enforce the
unenforceable."
I couldn't agree more. Sounds like another vote against trying to enforce
confinement - this time with rights communication. It's best of course
mainly in the sense that by trying to do such enforcement you stand in
the way of legitimate POLA access control.
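The footnote's argument is easy to make concrete. In this sketch (names from the discussion, classes purely illustrative) Carol never hands Alice the capability, yet Alice gets the same effective access through Carol's forwarder:

```python
class FileCap:
    """Stands in for the capability to read Bob's file."""
    def __init__(self, contents):
        self._contents = contents
    def read(self):
        return self._contents

class Agent:
    """Carol's forwarder: holds the capability, relays Alice's requests."""
    def __init__(self, cap):
        self._cap = cap
    def read(self):
        return self._cap.read()  # forward the request, return the result

bobs_file = FileCap("secret data")
carols_proxy = Agent(bobs_file)
# Alice never holds the capability, yet reads the same bytes:
assert carols_proxy.read() == bobs_file.read()
```

Nothing the system does to block the direct pass prevents this proxying, which is why trying to enforce the restriction gains no real security.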
Maybe I better stop there. If I don't understand the premise of "split
capabilities" (one for the resource and one for the rights), perhaps it isn't
worth commenting further until I do. Otherwise the splitting would only seem
to complicate what is otherwise simple.
>>> - the "Ode" <http://www.erights.org/elib/capability/ode/index.html>,
>>Can you point to the definition of capabilities
>>in that paper?
>There are several (basically equivalent), but the one I prefer is the
>section "Patterns of Cooperation Without Vulnerability" in the Ode.
>>The impression I get from reading
>>that paper is that anyone who understands what
>>capabilities are will understand very well what
>>the paper is talking about. But, to someone
>>coming in from the cold, there is a feeling of
>>too much inner knowledge needed.
>>> - or on the C2 wiki <http://c2.com/cgi/wiki?CapabilitySecurityModel>,
>>All I could see there was:
>> "A capability is similar to an object reference in
>> ObjectOrientedProgramming, an actor name (or mailbox)
>> in the ActorsModel, or a closure in the LambdaCalculus
>> (with local state), provided that any deviations from
>> pure object, actor, or lambda calculus computation
>> are prohibited."
>>That's not a definition, that's a reference to other concepts.
>I suppose I consider it more useful to explain how capabilities relate to
>other concepts than to imply that they are something new. This description
>obviously assumes that the reader knows something about at least one of OO
>programming, the actor model, or the lambda calculus.
I believe that the term "capability" is best understood in its most
abstract sense as a token representing the right to access a
resource. That is, possessing a capability means that a
process has the right to request something of the server of
the capability that the process would otherwise not be allowed to request.
If you go too much beyond that it seems to me you start getting
into implementation aspects. However, I believe it is at least
important that there be some means of communicating capabilities
so that if Alice (a process) has a capability (e.g. serviced by Carol)
and Alice wishes to ask something of Bob that would require the
resource access, then in Alice's request message to Bob Alice
should be able to send the resource right to Bob (the "inalienable
right" as in #3 above).
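A toy sketch of that kind of message-borne rights passing (illustrative Python; Carol, Alice, and Bob as in the discussion, none of this any real system's API):

```python
class Carol:
    """Serves a resource; the capability here is simply a reference to her."""
    def __init__(self, data):
        self._data = data
    def fetch(self):
        return self._data

class Bob:
    """Acts only with the rights carried in the request message itself."""
    def summarize(self, resource_cap):
        return f"{len(resource_cap.fetch())} bytes"

carol_cap = Carol("some scientific data")  # Alice's capability, serviced by Carol
# Alice's request to Bob carries the resource right along with the request:
result = Bob().summarize(carol_cap)        # result == "20 bytes"
```

Bob needs no ambient authority and no prior relationship with Carol; the right arrives in the message.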
That generalized model is what is addressed in:
(though I hadn't met Alice, Carol, or Bob back then ;-).
From my viewpoint the "password" model for capabilities
(e.g. encrypting the access rights into a "capability" data
block that can be passed around) ala Amoeba and NLTSS
is perfectly fine, except that it means that such rights
are visible in memory dumps. While one can argue (e.g.
as Tanenbaum did) that memory dumps should be protected
because of other sensitive data that might be in them,
capabilities (at least relatively permanent capabilities) to me represent
another, higher level of sensitivity of data that I believe requires
further protection. The case that I consider is that of taking
a dump from a scientific code run to a consultant for help.
Are you going to allow the consultant to copy the capabilities
in the dump and thereby gain access to the resources?
I didn't think that was acceptable. That's why I proposed the
public key protected capabilities:
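For concreteness, here is a sketch of the "password" capability model (illustrative Python; HMAC is my stand-in here, not Amoeba's or NLTSS's actual check-field construction):

```python
import hmac, hashlib

# Assumption for illustration: a per-server secret used to mint/verify tokens.
SERVER_SECRET = b"server-private-key-material"

def mint(resource_id: str, rights: str) -> str:
    """Encode a right into a self-contained, passable data block."""
    msg = f"{resource_id}:{rights}".encode()
    tag = hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()
    return f"{resource_id}:{rights}:{tag}"

def verify(cap: str):
    """Server-side check: honor the block only if the tag is genuine."""
    resource_id, rights, tag = cap.rsplit(":", 2)
    msg = f"{resource_id}:{rights}".encode()
    good = hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, good):
        raise PermissionError("forged capability")
    return resource_id, rights
```

Note that the minted string is entirely self-contained, which is exactly why it leaks intact and usable from a memory dump.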
One thing to note about that mechanism, it is of course not
necessary to do the sending and receiving transformations when
capabilities are re-communicated to/from the same processes.
The concept of "open" with that model could be considered to
be the transformations required to do the "Receive" operation
from wherever the capability is received, plus the transformation
needed to send the capability to the Server. The Server does
have to do the Receive operation once when the first request
from a process comes in, but after that caches make any
transformations for direct resource access unnecessary.
Only if Alice wants to pass the capability to Bob must
Alice again do a Send transformation.
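Roughly, the property the transformations buy can be sketched as follows (illustrative Python only: this stand-in uses a server-held HMAC secret where the actual proposal uses public key cryptography, and it assumes the transport authenticates the holder name presented to the server):

```python
import hmac, hashlib, secrets

class Server:
    """Holder-bound capabilities: a dumped copy is useless to anyone else."""
    def __init__(self):
        self._secret = secrets.token_bytes(32)
        self._resources = {}  # raw token -> resource

    def mint_for(self, holder: str, resource) -> str:
        raw = secrets.token_hex(16)
        self._resources[raw] = resource
        return self._bind(raw, holder)

    def _bind(self, raw: str, holder: str) -> str:
        tag = hmac.new(self._secret, f"{raw}:{holder}".encode(),
                       hashlib.sha256).hexdigest()
        return f"{raw}:{holder}:{tag}"

    def rebind(self, bound: str, sender: str, recipient: str) -> str:
        """Send transformation: convert the sender's form to the recipient's."""
        return self._bind(self._check(bound, sender), recipient)

    def access(self, bound: str, holder: str):
        return self._resources[self._check(bound, holder)]

    def _check(self, bound: str, holder: str) -> str:
        # Receive check: only the named holder's form is honored.
        raw, bound_holder, tag = bound.split(":")
        good = hmac.new(self._secret, f"{raw}:{bound_holder}".encode(),
                        hashlib.sha256).hexdigest()
        if bound_holder != holder or not hmac.compare_digest(tag, good):
            raise PermissionError("capability not valid for this holder")
        return raw
```

A consultant reading Alice's dump sees only the form bound to Alice, which the server refuses for any other authenticated identity; delegating to Bob requires the explicit rebind (Send). The result of _check is also the natural thing for the server to cache after the first request.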
>In any case, before that there is
># A security model (CategorySecurityModel) in which all resources are
># referenced by "capabilities" that both designate the resource, and
># authorize access to it.
>which is about as concise a definition as you're likely to find.
Yep. Implicit, I believe, in the above is the right to communicate
capabilities.
>> > This is not meant as a criticism: it would be really useful to know why
>> > "the capabilities people (them) and the nym people (us) haven't really
>> > seen eye to eye on the lucidity of each other's documentation."
>>I can't get much of a picture reading the above
>>papers. I can't sink my teeth into the words
>>that come out. I can't sit down and build it.
>>(I've actually read them a few times each, I
>>Jed's definition was clear, simple and something
>>that I know that your average programmer could
>>deal with. Those papers mentioned above are for
>>academics who are prepared to start at DVH and
>>then read every paper thereafter 3 times. I'm
>>stuck in the world of average programmers,
I have to admit, while I've worked with the capability concept
for many, many years (since 1973), I feel that one of the
problems is that people seem to keep coming up with so
many tangents by trying to solve what seem to me to be
non-problems (e.g. "split capabilities" above - though I'll
await clarification of any issues I may have misunderstood)
that it has made the capability concept seem and actually
be (in implementations) much more complex and therefore
less practical than it need be.
>Well, there are also introductory articles like:
I've of course read this intro several times - even during the
course of this discussion. Incidentally ;-) I no longer see
the Java demo flame... I think that intro is fine as far as it
goes, e.g. when it is at the point "Capabilities are simple and
familiar. You use them every day, and they don't surprise you
very often. If you think about ordinary keys and the sorts of
access controls they provide you will not go far wrong."
I might take issue a bit with the emphasis on people early
on (while capabilities are suitable for controlling access
by people via their surrogate processes, their full power
isn't apparent I believe until they are used to control access
by computer processes).
However, when it gets to the point of starting to become
implementation specific (as it says at the beginning
regarding "partisan"), e.g. under the "Capability-Based" heading:
"In most capability systems, a program can hold an infinite
number of capabilities. Such systems have tended to be
slow. A better design allows each program to hold a fixed
(and small -- like 16 or 32) number of capabilities, and provides
a means for storing additional capabilities if they are needed."
I beg your pardon? Why slow? Why limit the capabilities to
a fixed small number? It is only if you start thinking of a specific
descriptor based implementation that such considerations would
come to mind. And they come with all sorts of baggage - e.g.
specifying and communicating the "few", what to do when you
need to share more (setting up and sharing directories), etc., etc.
Also, while I think what is referred to there as "universal persistence"
is a wonderful thing (a property of our NLTSS system), I see no
particular relevance for capabilities. I think it's perfectly fine to
store capabilities in files or directories.
I think perhaps the password program is a bit unfortunate as an
example. While the password program can only get access to
the password (actually shadow I guess, but we digress) file,
that file is so crucial to the security of the system (e.g. any
password could be written in for any user, e.g. root) that if it
is Trojaned or otherwise hacked, it is very bad news. I think a
better example would be the editor (e.g. vim) that we have often
used in this discussion. While a system administrator may need
to use such an editor, there is no reason to give it access to all
the files and other resources that root has access to.
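The editor example can be sketched in a few lines (illustrative Python, with an open file object standing in for a single-file capability):

```python
import tempfile, os

def editor(file_obj, new_text: str):
    """A toy 'editor' that can touch only what it was handed."""
    file_obj.seek(0)
    file_obj.truncate()
    file_obj.write(new_text)

fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "r+") as f:
    editor(f, "edited under POLA")  # no path, no filesystem: just this one file
with open(path) as f:
    assert f.read() == "edited under POLA"
os.remove(path)
```

Even run by root, such an editor holds only the one file's capability; a Trojaned editor could damage that file and nothing else.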
Generally I think the examples on that page are not the best.
In both the Access Restriction and Collaboration sections it
seems to suggest that solving the confinement problem is
what is necessary to make it possible to restrict access or
to collaborate. It most emphatically is not.
I also believe the Selective revocation section is a bit
muddled. Particularly in a system that emphasizes persistence,
I would say that it doesn't suffice to remove a user's access
entirely by deleting their login. They can simply have a persistent
process running with all their rights that makes them available
through some communication path. Perhaps there is more to it
than this. However, again in this section confinement is invoked
as a solution for a problem when it both doesn't solve the problem
(in my view) and other mechanisms (e.g. forking or cloning as
I saw somebody else refer to it) more faithful to the capability
model are available.
On to the above. I hadn't read that one before. Let me make a few
comments. Firstly, I think the comment about Multics is grossly out
of place. I don't believe that Multics was in any sense "virtually
invulnerable to hacking and cracking". Multics used essentially the
same broken model (if I run as you, I get all your rights) that
nearly all systems since have used. Of course Multics had an
elaborate layered architecture internally, but I don't believe that
helped much to solve today's problems (e.g. viruses and worms).
It certainly didn't use capabilities to control access. The "good old
days" weren't that good ;-)
As far as KeyKOS goes, I don't really know because I never ran
on it. I'd be interested to hear how KeyKOS dealt with the interface
issues that we've discussed. Certainly any capability system along
the lines of what we've been speaking about in this discussion
has the ability to restrict processes to POLA. However, whether
they actually do (e.g. when I sit at a terminal and run a program,
how are its rights set?) is another question. I know that for our
capability based system, NLTSS, it was fully capable of handling
POLA restrictions, but in practice the command line interpreter
just gave out all a user's rights to any program run. We were
compelled to adapt to existing user interfaces. End of
POLA. That's one reason why I started this discussion with
an emphasis on the interfaces (user and API) where I feel
work needs to be done. I'm happy to hear that some work has
indeed been done in this area. Now it seems to me we have to
face the difficult task of moving the more secure research world
towards the work-a-day world (e.g. Windows and Unix) and vice versa.
I'd be interested to hear what KeyKOS did in that regard (user interface).
As to: "Suppose you were running a capability-secure operation system,
or that your mail system was written in a capability-secure programming
language.
In either case, each time an executable program in your email executed, each
time it needed a capability, you the user would be asked whether to grant that
capability or not."
In the former case (a capability-secure operating system) of course the above
isn't necessarily true. It could be true, but it depends on the user
and the APIs. If I do a Unix "find" or a Windows search (on all my files), am
I going to be asked about each file? Tedious. I believe the real user
interface issues and the effective compartmentalization needed to realize a
POLA system have never been fully worked out - even though we have had many
systems that internally were capable of such access restrictions.
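One way around the per-file tedium is a single up-front grant of a directory capability - the pattern sometimes called a "powerbox" - after which the search runs with no further questions. A sketch (illustrative Python; the API is invented):

```python
import os, tempfile

class DirCap:
    """Capability to one directory subtree; one user grant creates it."""
    def __init__(self, root):
        self._root = root
    def find(self, name):
        hits = []
        for dirpath, _dirnames, filenames in os.walk(self._root):
            if name in filenames:
                hits.append(os.path.join(dirpath, name))
        return hits

def powerbox_grant(path):
    # Assumption: in a real system this is where the user is consulted, once.
    return DirCap(path)

with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "notes.txt"), "w").close()
    cap = powerbox_grant(d)  # one question to the user, not one per file
    assert cap.find("notes.txt") == [os.path.join(d, "notes.txt")]
```

The grant is still POLA - the search program gets that subtree and nothing else - without asking the user about every file it touches.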
As to a "capability-secure" programming language ... Hmmm. I've never trusted
security enforced by programming languages. I'd be interested to understand if
I have any reason to change that opinion. Whatever, I'm not sure the example
is relevant here as it is clearly not the programming language that obtains the
initial rights (e.g. to too much - namely all the users rights).
Here again confinement shows up. I don't particularly object to communication
restrictions (e.g. local firewalls, perhaps like Zone Alarm that has the
pop-up query referred to), but in many cases I run services that do
need the ability to communicate (e.g. with other services) to accomplish what
I want them to do (e.g. in this case perhaps to download graphics). I doubt
people in general would be so inclined to reject a request to communicate. I'm
not even sure doing so is generally a good idea. POLA yes, communication
limitations (e.g. to subdivide work) no.