[cap-talk] Capability definition - What do we want?
iang at systemics.com
Wed Mar 16 11:08:41 EST 2005
Jed at Webstart wrote:
>> Are rights capabilities? Are capabilities rights?
> I believe that capabilities are a computer implementation
> of rights. That's how I define them. If there are other
> definitions then it seems to me that this list is quite a
> good place to work them out!
Hmm! I am encouraged, yet MarkM suggests that ACLs are
rights (systems) and they are not capabilities. Toby
says a capability should refer to an object and thus
a cap would not be an unbounded class of rights.
So perhaps caps are rights implemented with pointers
(or references) to their objects of rightful access,
rather than user names, rights names, and passwords?
It's certainly a very interesting question, and it is
in conflicts like this that we determine the nature of
capabilities.
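Toby's point, that a cap should refer to an object, can be made concrete. Here is a minimal Python sketch of my own (all names are assumptions, not from anyone's system) of a capability as an object reference that conveys exactly one right:

```python
# Hypothetical sketch: a capability as an object reference.
# Holding `reader` conveys exactly the right to read, nothing more.

class File:
    def __init__(self, data):
        self._data = data
    def read(self):
        return self._data
    def write(self, data):
        self._data = data

def read_only_facet(f):
    """Attenuate: return a capability conveying only the read right."""
    class Reader:
        def read(self):
            return f.read()
    return Reader()

f = File("secret plans")
reader = read_only_facet(f)   # hand this out: it names the object AND the right
print(reader.read())          # prints "secret plans"
# reader has no .write -- that right was never conveyed
```

The reference is the right: there is no user name, rights name, or password anywhere, just the pointer.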
>> ( Did you come across a description of what a sealer/unsealer is?
>> This discussion is the first I've heard the term. )
> Oh sure, there are lots of references to that technology, just
> Google around. For example, look at the section titled
> "Rights Amplification" at:
....sealer/unsealer pairs [Morris73, Miller87, Tribble95 Appendix D, Rees96].
E primitively provides sealer/unsealer pairs. The money example below builds
sibling communication from sealer/unsealer pairs.
Sealer/unsealer pairs are similar in concept to public/private key pairs.
The sealer is like an encryption key, and the unsealer like a decryption key.
The provided primitive, makeBrandPair, makes and returns such a pair. When
the sealer is asked to seal an object it returns an envelope which can only
be unsealed by the corresponding unsealer.
Thanks! That explains it.
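For the record, the sealer/unsealer description above can be sketched in a few lines of Python (this is my own toy rendering, not E's actual makeBrandPair; note that Python attributes are not truly private, so this only illustrates the shape of the idea):

```python
def make_brand_pair():
    """Toy sketch of E's makeBrandPair: returns (sealer, unsealer).
    An envelope sealed by the sealer can be opened only by the
    matching unsealer, because only that pair shares the brand."""
    brand = object()  # unforgeable token shared only within this pair

    class Envelope:
        def __init__(self, contents):
            self._contents = contents
            self._brand = brand

    def seal(obj):
        return Envelope(obj)

    def unseal(env):
        if getattr(env, "_brand", None) is not brand:
            raise ValueError("envelope not sealed by the matching sealer")
        return env._contents

    return seal, unseal

seal, unseal = make_brand_pair()
env = seal("ten zorkmids")
assert unseal(env) == "ten zorkmids"

other_seal, other_unseal = make_brand_pair()
# other_unseal(env) raises ValueError: wrong brand
```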
> In some ways the Public Key mechanism that I described at:
> I think could be referred to as a sealer, resealer, unsealer
> technology. We seem to have a plethora of technologies.
> What I think we don't have is a simple mandate to take a
> common human understanding of the notion of "right" and
> make a standardized manifestation of it that can be used on
> computers and across computer networks.
I think I agree.
>> 3 comments:
>> 1. Use of big random numbers to achieve some property would
> OK, we may be toying with fine aspects of definitions.
They may well be fine definitions in the abstract and
rarefied air of this environment; but I feel issues
like this become much more concrete to the builder of
the application, who hasn't the time needed to delve
into all the theoretical nuances of how things could
be. What he needs is as much solidity as possible in
terms of frameworks which help to reduce complexity and
ease his passage along to a result.
> I should have limited my example to the "Access list"
> mechanism. There is nothing that looks anything like
> encryption in that mechanism. In some ways it's closer
> in flavor to the local descriptor based capability implementations.
Access lists are protected (often) by passwords. That
falls under crypto, not so much because it uses crypto,
but because knowing when to use crypto and when one can
get away without it is the cryptoplumber's specialty.
Similarly, I agree with Ben that Alan's case of the 'internal
cable' not needing crypto is a decision that falls within
that general box or layer of crypto. We may choose to use
an empty box or layer there, but the academic capital that
we are drawing from to leave that box empty is the science
and art of building crypto boxes.
>> 2. use of random numbers to manage rights/caps still leaves
>> the issue of communicating and sharing that info. This
>> would seem to be a shared secret operation, which is a
>> well understood primitive in cryptography.
> If it's strictly a Swiss Number there's no need for any
> cryptographic technology for the communication - unless
> you are considering the issue of keeping the communication
> private. I consider that a separate issue - though some
> communicated information is more important to keep
> private than others.
It's quite reasonable to say that the crypto layer is
based on local generation of secret numbers, and a
manual and external issue is a key exchange. My point
being that it is the crypto thought process that allows
you to get away with not using any formulas to do this.
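To pin down what a Swiss Number / password capability actually is in code terms, here's a one-function sketch (my own, names assumed): a number with enough entropy that guessing it is infeasible, where possession of the number is the right.

```python
import secrets

def new_swiss_number(bits=128):
    """Mint a 'Swiss number' / password capability: a random number
    large enough that guessing it is infeasible. Possession of the
    number IS the right; no further crypto is needed to mint it."""
    return secrets.token_hex(bits // 8)

cap = new_swiss_number()
print(cap)  # e.g. 'f3a1...' -- 32 hex digits, ~128 bits of entropy
```

The crypto thinking lives entirely in the choice of `bits` and in using a cryptographically strong generator; communicating the number is then the separate key-exchange problem discussed above.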
> For a mechanism like the Public Key control of capabilities:
> there is no need to keep the communication private - though
> there might be a desire to do so for other reasons. With that
> mechanism the representations of the capabilities in processes
> buffers, both sending and receiving, can be exposed (e.g. in
> a process memory dump) and the rights are still protected.
OK. I quickly read that, and it seemed about what I'd
expect if the goal was to do caps with public keys.
(As always suffering limited time to dive deeply into
designs....) It would seem to me that one is naturally
drawn to public keys as the way to do caps, but E does
it with SNs; was that a pragmatic decision or was
there some other non-complexity reason behind it?
>> 3. finally, as a sort of minor nit, to a business person,
>> the term 'swiss number' stinks. I cringe every time I hear
>> it. I can understand that a business person would want
>> to shy away from being associated with that, as the need
>> to explain to a boardroom that the system's security
>> depends on 'swiss numbers' is a killer.
> Sorry - I only picked it up because others were using
> it. It isn't mentioned in my former publications that
> discuss what I refer to as "password" capabilities that
> are based on large "unguessable" numbers.
( I think I might have been in free-flow mode, not
taking any particular attention. I'm happier now
that I've written 'SN' above. )
>>> Then concerning:
>>> I don't understand the above. From my perspective a "capability" is a
>>> means of rights communication. E.g. as with the oft used "key"
>>> metaphor, one can give a "key" to another and thereby communicate
>>> a right...
>> Wow! I wish someone had told me that many years ago,
>> that would have easily solved the question of "what is
>> a capability!"
>> I'm stunned it is that easy. Is it?
> It is for me. I want it to be for others. That simple notion of a
> right is so easy for people to understand and the need to be
> able to communicate such rights seems to me so clear that
> I wonder why we're still so - I can't think of a better term than
> "balkanized", perhaps fragmented, and can't seem to get it
> together to come up with a network standard and implementation
> that ordinary, non-technical, people can use for rights communication.
> A mechanism that extends down into our programming and
> digital communication world, but that will be relatively easy for
> everyone to understand - without, in my opinion, mucking
> it up with niceties like "amplification" and certainly not
> confusing it with the various technologies like encryption
> and descriptors, etc. that we may use to implement such rights.
My personal goals - unfulfilled so far - are firstly
to come up with a hard definition of capabilities,
where hard to me says "I can build it," and secondly
to measure SOX against it. (A moment's reflection
should see that they are the same goal from different
directions.)
But, yes, I agree. I see the balkanisation. I also
see an awful lot of inside knowledge which means that
complex things like capabilities (which should be
simple) do not survive, yet simple ideas like petnames
are being absorbed in the wider world slowly by their
very ease of description, even with a lack of a good
paper or document describing what they are (I know of
the PNML one, Marc's one and Tyler's pages and so
far I'd vote on Marc's document as the best intro,
but it is too long for an exec summary).
I also use caps as a mirror on my own work, and when
I detect difficulties like lack of doco or lack of
integration with other ideas, I use that to redouble
my own efforts.
I fear I ramble at this point; don't bother to reply!
> For me the notion of what I refer to as "network discipline" is
> so important to capability communication that I've come to
> see what Alan and Shap refer to as "islands" of local capabilities
> more as obstacles than as providing help.
I understand Alan and Shap's viewpoint here on
complexity and "getting the simple case right"
but I think I prefer the network view you wrote
about later on in your post (I snipped it because
I agreed!). The value of a good networked application
is so far in excess of a good local application
that I can't see the point in not assuming
the network from the beginning. But that's just my view.
>> Mind you, it does kinda leave the question of "what is
>> a right" dangling, but we have a big body of legal and
>> human knowledge there. As long as we can accept that
>> a right is more or less the analogue of human rights,
>> encapsulated in tech/software, then I'm happy.
> I can go further with what a "right" is in computer terms.
> If you make a request of a server (send it a message) and
> correctly proffer a "right" serviced by that server, the server
> will do something for you that it otherwise wouldn't. That's
> all a right/privilege is. A right is something that gets you a
> service that you otherwise wouldn't get. With the oft used "key"
> metaphor the right to drive the car or enter the building.
> Some binding of a credit card and pin, etc. becomes a "right"
> to access your credit. Being able to read or write to a file
> is a "right" - a pretty generic one that I find quite a useful,
> almost prototypical, example.
OK. And that's why Mark Miller suggested the 3rd
layer after crypto and software be called Rights.
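The quoted definition - proffer a right and the server does something for you that it otherwise wouldn't - reduces to a very small mechanism. A hypothetical sketch (table, names, and API are all my assumptions) of the prototypical server-side check:

```python
import secrets

# Hypothetical server state: a table mapping unguessable tokens
# to the services they convey.
rights_table = {}

def grant(service):
    """Mint a capability for `service` and return it to the client."""
    token = secrets.token_hex(16)
    rights_table[token] = service
    return token

def request(token, *args):
    """Proffer a right: with a valid token the server performs the
    service; without it, you get nothing."""
    service = rights_table.get(token)
    if service is None:
        raise PermissionError("no such right")
    return service(*args)

cap = grant(lambda name: f"hello, {name}")
print(request(cap, "alice"))   # prints "hello, alice"
# request("a-guess", "mallory") raises PermissionError
```

Everything else - descriptors, public keys, Swiss numbers - is a choice of how the token is represented and communicated.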
>>> "Capabilities are enhanced by and often replaceable by cryptography."
>>> I would instead say that cryptographic technology can sometimes be
>>> used to enhance capability communication. For example, even in a simple
>>> "password" or "swiss number" capability system one can choose to
>>> encrypt the specific rights that the capability conveys as essentially
>>> a hash into the capability and avoid the need to store an indexed
>>> hash to find the rights. This seems a technical trade-off to me.
>> If seen in the light of (slight plug) the
>> FC 7 layer model,
>> 3. Rights
>> 2. Software Engineering
>> 1. Crypto
>> then Crypto and software engineering techniques are
>> simply tools to create systems of Rights. Assuming
>> caps==rights, then the model holds.
>> (Ref: http://iang.org/papers/fc7.html )
> Works for me. Capabilities are the embodiment of
> rights. It's communicating capabilities between
> processes (e.g. across a network) that allows them
> to share rights/privileges.
My current view of the Rights layer is that it has
these potential champions in it:
* nymous patterns built on public key (SOX)
* nyms built simply without crypto but with passwords
* Identity systems as per old world
* Identity systems as per Stefan Brands
>>> 2. .... I consider any "rights-
>>> transferring" mechanism a capability communication mechanism.
>>> That is, I consider a "capability", by definition, the embodiment of a
>>> computer/digital "right" and any means to "transfer" such a right as
>>> a capability communication mechanism.
>> I'm stunned. I await the vote on this claim!
> Hmmm. Claim? I guess I consider it more of a definition. However,
> I do consider it absolutely fundamental to any sort of discussion about
> "capabilities" - e.g. on a list like cap-talk. If we can't agree on a
> and understandable model of what abstraction we want to support, then
> I'm afraid we're doomed to wallow in our technology and never really
> be able to make that technology "transformative" for the general
> computer/network using population.
>>> Is there some disagreement or confusion about terminology here
>>> that I'm missing??
>> Just amazement that it took me about half a decade
>> to discover this simple definition of what a capability
>> is. It means we've all been building capabilities all along.
> By my definition. Others may argue that the "capability" term
> is more narrow - e.g. encompassing only descriptor based
> capabilities. Of course then the notion of "network" capabilities
> (e.g. "password/BRN" or public key or ...) makes no sense.
> Perhaps we can put this notion of the definition of the term "capability"
> to a vote/discussion.
> I propose we agree to define the term "capability" to be the
> embodiment of a "right" or privilege - in the sense of access "right"s.
> A computer/network "right" conveys a privilege to a person or process
> that the person or process wouldn't have without the "right" in its
> capability manifestation/implementation.
So you are including ACLs and the like in your definition?
In that "whatever allows this right to be expressed and
and trasnferred" is the key, rather than how it is done?
> I think this definition is different from what's referred to as the
> "ObjectCapability Model"
> as here: http://c2.com/cgi/wiki?ObjectCapabilityModel
> It seems closer to what is referred to in that Wiki as the "Capability
> Security Model":
> but even there it appears in the context of a lot of other technologies
> that seem to me more aspects of implementations.
> Is there any hope of generalizing the "capability" notion as a
> digital communication right embodiment?
That's the question. As Mark points out, we already have
a win in that the objective or goal of capabilities is
that of a digital communication right embodiment, and the
discussion then focuses on whether capabilities are all
such beasts, or just one example, leaving the way open to say
that ACLs are also in that space of meeting that objective,
but are not capabilities.
>>> By adding that "and digital signature mechanisms" to me it suggests
>>> another area of research/development. Digital signatures can
>>> indeed be useful - perhaps for capability communication and
>>> perhaps for other mechanisms - e.g. data communication.
>> I agree .. just another way of doing rights. Er, caps.
> Hey, there you go!
>>> ...I made an effort in:
>> That I like.
> 1981 - FYI.
Ah, when I was learning the rights revocation aspect of the
>> The problem that Nick is examining here with such
>> software engineering techniques is that they are
>> essential to the construction of a viable rights
>> system. That is, unless you can guarantee that
>> the right is forever, you haven't got a right.
>> This drags in all sorts of reliability issues,
>> to which stuff like Byzantine fault tolerance applies.
> Perhaps. I'm sympathetic to the notion of rights
> lasting a long time. Forever is indeed a long time.
> I don't know of any computing system that has
> lasted more than 30 years or so. Probably my
> longest lasting computer "rights" were capabilities
> on the "Elephant" storage system at Livermore
> Lab. That system went through many technical
> generations (e.g. drum storage, the IBM "data
> cell": http://www.columbia.edu/acis/history/datacell.html
> the IBM "photo store":
> and on to other tape cartridges) while still maintaining
> the same "rights" to old files and directories.
> I consider that an enviable record. I believe having rights
> (and their capability embodiment) outlive the computer
> systems (esp. the "server"s) that support them is a worthy goal.
Exactly. It seems that if we as builders are to
get serious about rights, we have to have an implicit
goal of "forever." This is one issue that dogs the
payment system world, in that even years after the
system has shifted on to higher things, older rights
have to be maintained.
As an example, I recently shut down the old DigiGold
server, just a couple of months back. For those
unfamiliar, this is a long running saga that goes
back to 1998. Even after the court in 2001 accepted
that it was possible to block access, I still kept the
server and the inherent rights intact, just blocked.
And, even now, those rights are still in existence,
just not running.
(Aggressively seeking to limit the length of this post...)