[cap-talk] Caps v. Nyms (Jed's definition and SOX)

Jed Donnelley jed at nersc.gov
Mon May 10 22:10:30 EDT 2004

At 05:43 PM 5/6/2004, Karp, Alan wrote:
> > -----Original Message-----
> > From: cap-talk-bounces at mail.eros-os.org
> > [mailto:cap-talk-bounces at mail.eros-os.org] On Behalf Of Ian Grigg
> > Sent: Thursday, May 06, 2004 3:28 PM
> > To: General discussions concerning capability systems.
> > Subject: Re: [cap-talk] Caps v. Nyms (Jed's definition and SOX)
> >
> > Karp, Alan wrote:
>                                 (snip)
> > If the capability can be transferred,
> > then ... it can be transferred.  A
> > phishing attack is simply an attack
> > where someone bad asks you for a cap,
> > which presumably you'd hand over to
> > someone good.  This is not to disagree
> > with you, but to wonder where we are
> > going here - the original comment was
> > that there are "nyms for each service,"
> > and whether there are one or many seems
> > to be independent of how a phishing
> > attack would work.
>There are two separate attacks.  One is getting the user to transfer the 
>capability; I'm not addressing that one here.  The other is getting the 
>user to reveal some information, say over the telephone, that can be used 
>as a capability.

I'll chime in here supporting the above distinction, at least to an
extent.  While I don't think there's much danger that anything like a
password capability would be shared by voice over a telephone, I do
consider the threat of inadvertent sharing via something like email a
real concern.  E.g., I am debugging my code and communicating with
somebody who supports a library I am using.  I send the library support
person something from a data structure: 0x ya da ya da.  Without
thinking about it I include a buffer (variable, whatever) that contains
a "password" capability.  If that data in that form suffices for the
right, then I may have inadvertently given away something important.
That is why I felt it important for our capabilities-as-data system
(NLTSS, e.g.: http://www.webstart.com/jed/papers/Components/ ) to keep
such memory-divulging operations from inadvertently giving out resource
access.  The public key mechanism discussed in:


was my best answer.  Note that such a mechanism works directly on a
network (see the discussion in:

http://www.eros-os.org/pipermail/cap-talk/2004-May/001757.html )
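To make the debug-dump scenario above concrete, here is a minimal
sketch (all names and values illustrative, not from NLTSS or any real
system) of why "the bits suffice for the right" makes a routine hex
dump dangerous, while a descriptor-based capability leaks nothing
useful:

```python
import secrets

# Hypothetical password capability: knowing the bits IS the right.
password_cap = secrets.token_bytes(16)

# A debug buffer that happens to include the capability bits.
debug_buffer = b"state=3;" + password_cap + b";retries=0"

# The "helpful" hex dump mailed to a library maintainer leaks the right:
leaked = debug_buffer.hex()
assert password_cap.hex() in leaked   # the capability is recoverable

# With a descriptor-based capability, the buffer would only hold a
# small per-process index that is meaningless outside this process.
descriptor = 7
debug_buffer2 = b"state=3;cap=%d;retries=0" % descriptor
# Nothing in debug_buffer2 conveys the right to anyone who reads it.
```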

>I worry when we require users to protect many secrets having different 
>sensitivities.  Capability designators and those tied to an identity in 
>some way only require a user to protect the secret representing 
>identity.  Password capabilities, in which knowing the bits is enough to
>get the rights, require keeping many secrets if you have fine-grained control.

I agree that there's a legitimate concern there.  I think it important,
but I'm not sure how it compares to the more basic issue of being able
to run programs with rights limited beyond just those of a "user".

>                         (snip)
> >
> > What is a negative capability?
> >
>In rights amplification, the request is only honored if two capabilities 
>are presented.  A negative capability is one that is presented as part of 
>every request that cancels certain other capabilities.  There's a 
>description in
>       "Using Split Capabilities for Access Control", IEEE Software, vol. 20,
>         #1, pp 42-49, January (2003)
>         http://www.hpl.hp.com/techreports/2001/HPL-2001-164R1.html

Gulp.  I admit that the above notions seem somewhat obscure to me.  I
fear that if our common ground comes to include mechanisms of such
complexity, then our model will be difficult to use effectively.  I've
been dealing with some of the issues raised in:


in a separate thread (e.g.:


), but I'm afraid I haven't come to appreciate the need for the
additional mechanisms that they seem to require.

> > In a nymous system, once the private
> > key is lost, the attacker has the
> > ability to access the service.  Now,
> > if this is known about, then the
> > first owner can take steps to deal
> > with the issue.  In practice, we've
> > added freeze features - maybe an
> > example of the negative capability -
> > and there are also special cases of
> > other limitations.
>It sounds to me more like revocation than a negative capability.

Arg!  Am I unusual in feeling that feature creep is a problem in this area?
That is one place where I feel that thinking about things in a network
context provides some valuable grounding about what is "needed",
what is possible, etc.

>I don't understand why not being able to undo an oops is a good 
>thing.  Freezing a capability when a private key is lost is an example of 
>undoing an oops, isn't it?

I guess I have to ask - what's the difference between "freezing" a 
capability and revoking one?
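For what it's worth, the way I picture both "freezing" and revoking is
the same forwarding-intermediary pattern from the capability
literature (a caretaker); the sketch below is illustrative only, with
made-up names, not a claim about how SOX or any real system does it:

```python
# A caretaker: a capability that forwards to the real target until the
# issuer switches it off.  "Freeze" and "revoke" are the same switch.

class Caretaker:
    def __init__(self, target):
        self._target = target
        self._frozen = False

    def freeze(self):
        # Held by the issuer, not by the client holding the capability.
        self._frozen = True

    def invoke(self, *args):
        if self._frozen:
            raise PermissionError("capability revoked")
        return self._target(*args)

ct = Caretaker(lambda x: x * 2)
print(ct.invoke(21))    # 42: forwards while live
ct.freeze()
try:
    ct.invoke(21)
except PermissionError:
    print("frozen")     # further use is denied
```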

>                                 (snip)
> >
> > Then I don't understand - initially you
> > suggested that many nyms made for a
> > protection issue, and now you are talking
> > about many capabilities with a fine-
> > grained control.
>Right, but the capabilities I'm talking about don't have to be treated
>as secrets.

I take it that you are referring to the bits that go into making up the 
representation of a capability?

Let me see if I can make the above distinction clear as I see it.  In the 
DCCS mechanism
(distributed capabilities on a network built on classical capabilities):

the descriptor-based "capabilities" for the same "right" look quite
different on different systems.  E.g., on one system it can appear
as a direct "requestor" capability (using the terminology of that paper)
to the server.  On a distributed system it looks like a "requestor" to
what is there called the "emulating process".

Along a somewhat similar line, in the Public Key Encryption protocol
for "capability" sharing on a network:


the representation in the memory space of every process looks different.
In the Server it shows up as a clear-text "descriptor" that describes
the resource and the access rights (we thought it convenient that it
worked out this way).  In any other process, Alice, it shows up as
the clear-text form signed by the server and encrypted with the
public key of Alice (we also thought it convenient that Alice
could look at the clear-text representation of the capability,
e.g. to determine its access rights without having to consult
the server).  The representation of the capability changes
as it moves from its internal form into a sending buffer and then
again when moving from a receiving buffer into another internal form.
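A toy sketch of those three representations, to make the shape of the
idea concrete.  To keep it self-contained I substitute an HMAC for the
server's public-key signature and a SHA-256 keystream XOR for
encryption under Alice's public key; these stand-ins are purely
illustrative and are not what the real protocol uses:

```python
import hashlib
import hmac
import json
import secrets

server_key = secrets.token_bytes(32)   # stand-in for server signing key
alice_key = secrets.token_bytes(32)    # stand-in for Alice's key pair

# 1. In the Server: a clear-text descriptor of resource and rights.
descriptor = json.dumps({"resource": "/printer", "rights": ["use"]}).encode()

# 2. Signed by the server, so the descriptor cannot be forged.
sig = hmac.new(server_key, descriptor, hashlib.sha256).digest()
signed = descriptor + b"|" + sig

def xor_stream(key, data):
    # Toy stream cipher: XOR with a SHA-256-derived keystream.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# 3. In Alice's process: the signed form, encrypted for Alice only.
in_alice = xor_stream(alice_key, signed)

assert in_alice != signed                          # bits differ per holder
assert xor_stream(alice_key, in_alice) == signed   # Alice recovers it
```

The point of the sketch is just the representation change: the same
right has different bits in the server, on the wire, and in each
holder's memory, so disclosing one process's memory does not hand out
a form usable elsewhere.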

>In a capabilities as designators system, you can find out my designator 
>for a capability, but it does you no good.  In a capability tied to an 
>identity, you can find out the bits I use, but they do you no good.  You 
>can always induce me to transfer the capability to you, but just reading 
>some data over my shoulder isn't giving you any additional rights.

If you can induce me to send you my capability, then you have my
capability.  I hope we can agree on that terminology.  If you can
discover (even with my inadvertent help) the internal representation of
a capability that I possess, then I think it desirable that you not
thereby obtain the right.  Capabilities as passwords lack this
property, which seems to me a significant weakness.
> > About the only way I can bring these two
> > together is your comment that Jed's caps/
> > descriptors can be linked in some sense to
> > a single key or identity.

Hmmm.  I seem to have lost the context for the above sentence.

> > Nyms can do that too (as I suspect they
> > are close or similar, this is good).  But,
> > doesn't this just make for a 2 factor or
> > single-sign-on scenario?  I.e., it is a
> > shift of the burden security problem?
>The only issue I have is with how many secrets the user must keep.  I 
>don't think there's a difference in authentication.

Hmmm.  Secrets...  In a capabilities-as-descriptors model one can argue
that there are no secrets being kept.  Everything is descriptor based
(e.g. CCS and DCCS and, I think, most of the classical capability
systems).  Perhaps something may be needed to identify people (e.g. a
password or fingerprint or ...), but that seems a quite separable issue.

Once you get into a network situation, then I believe (as per Managing
Domains:
) some kind of identification information is required for rights
communication/management.  In principle one can use a network address
and then do management essentially by access lists (with computers
being the subjects as in DCCS, or with processes being the subjects
as in http://www.webstart.com/jed/papers/Managing-Domains/#s10
and http://www.webstart.com/jed/papers/Managing-Domains/#s12,
or probably in other ways), but I believe some sort of identity is
needed in any case.

I believe a public key mechanism like:
provides an effective way to communicate an identity safely
with messages and to share rights (capabilities) without
being subject to unwanted sharing via memory disclosure.
It also has the advantage (as I see it) of keeping the IPC
mechanism out of the capability manipulation business and
keeping the notion of capabilities (rights) an application level
protocol that anybody on the network can participate in.
This seems to me to reduce the overhead (in most places)
and make it more flexible.  The only question is the cost
of the encryption mechanisms.  With effective caching
(not repeating encryption/decryption operations for communicating
the same rights between the same processes) I believe the
overhead can be reduced, but I'm not sure if the costs can
be kept low enough for all capability uses.

--Jed http://www.nersc.gov/~jed/  
