Communicating Conspirators

Chip Morningstar chip@communities.com
Mon, 15 Nov 1999 13:25:32 -0800 (PST)


This is a very interesting thread.

Ralph Hartley writes:
>If your security model can't deal with real locks and doors what good
>is it? Must we have another security model for securing
>non-computational things? If so that model needs to include computational
>security as a proper subset.

To the extent that access to real world things is secured solely by keys and
locks, capabilities model the situation fairly well. The place where the model
breaks down a little is not that the real world restricts transferability (the
only thing stopping me from giving you the key to my office is my own
discretion) but the exclusivity of transfer when the thing transferred is a
physical object -- once I give you the key, I no longer have it (of course,
keys can be copied fairly easily, "Do Not Duplicate" stamps
notwithstanding). This weekend I heard a rather disquieting presentation on just
how vulnerable existing key-based access systems used by many airports, big
companies and government agencies really are, and I'm talking here about the
administrative level, not lock picking -- all the problems are exactly
analogous to the problems one encounters with bad computer system security; in
many cases the problems actually *are* bad computer system security.

The other way things are physically secured in the real world leverages the
need for the entity passing a security barrier to do so in person. For example,
the most secure facilities I have been in (the ones that actually were secure,
not simply the ones that had invested heavily in security infrastructure) had
human guards who were trained to know the faces of the people who were allowed
to enter. And indeed, the right to pass such a barrier is highly
non-transferable. How to model this in a computer system is a difficult and
interesting problem that I don't know the solution to (if there is one). ACLs
can't do it either, however.

>Even in the totally abstract electronic world there are things that
>cannot be described in computational terms. Consider property, in the
>old days land could not be sold. The king (Alice) would grant title to
>her vassal (Bob) who was NOT free to pass it on to anyone he liked
>(e.g. Mallet). He could give Mallet all the products of the land, and
>he could let Mallet boss the peasants around, but he could not
>transfer ownership. This was not a distinction without difference, it
>mattered a lot to Bob's heirs, and Bob could always change his mind
>about cooperating with Mallet.

My reading of this scenario is: Bob has the power to convey nearly all of the
functional/useful attributes of ownership of the land to Mallet. The exceptions
are that he is not empowered to alienate his heirs, that he may reserve the
right to change his mind at a future date, and that he remains the "true" owner
of the land in the eyes of the king. I'm not clear what the practical reality
of the latter is, unless it translates to something that in the modern era we
would look upon as responsibility or liability. In any case, it seems to me
that all of these properties are easily accommodated by the capability paradigm:

  -- The case where Bob changes his mind later is dealt with by Bob passing
Mallet not the direct capability to the land but a revocable proxy. Mallet in
any case must assume that this is what Bob has done because it is not possible
for Bob to prove otherwise.
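The revocable-proxy pattern is easy to sketch in code. The following is a minimal Python illustration (all names here are mine, invented for the example, not from any particular capability system): Bob keeps the direct capability and the revoker, and hands Mallet only a forwarder.

```python
# Hypothetical sketch of a revocable forwarder (caveat) capability.

class Land:
    """Stands in for the direct capability that Bob holds."""
    def harvest(self):
        return "crops"

def make_revocable(target):
    """Return a forwarding proxy plus a separate revoke() function.
    Only the holder of revoke() -- Bob -- can cut the proxy off;
    Mallet, holding just the proxy, cannot."""
    cell = [target]

    class Proxy:
        def __getattr__(self, name):
            # Forward every access to the target until revoked.
            if cell[0] is None:
                raise PermissionError("capability revoked")
            return getattr(cell[0], name)

    def revoke():
        cell[0] = None

    return Proxy(), revoke

land = Land()
proxy, revoke = make_revocable(land)   # Bob keeps revoke, gives Mallet proxy
assert proxy.harvest() == "crops"      # Mallet can use the land for now
revoke()                               # Bob changes his mind
# proxy.harvest() now raises PermissionError
```

Since the proxy answers exactly like the real thing until revocation, Mallet has no way to demonstrate that what he holds is direct rather than revocable, which is why he must always assume the latter.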

  -- The other cases are handled by seeing that this example privileges the
king as the source of title to the land. Thus when somebody wants to know "who
owns this land?" they do not ask Bob or Mallet but Alice, who responds that the
owner, as far as she is concerned, is Bob. Alice, as title authority, is in the
same relationship to Bob, with respect to the land, as Bob is to Mallet. When
Bob dies, Alice revokes his capability to the land (and thus, indirectly,
Mallet's) and passes a new capability to Bob's heirs. There remains an
interesting case where Alice revokes Bob's title when common law would suggest
that she is not permitted to do so. In the real world this is handled either by
the courts (which might suggest yet another layer of ultimate title authority,
possibly diffused among multiple entities) or armed rebellion. I'm not sure to
what extent we wish to incorporate the latter into our security model, though
in fact physical attack is generally a relevant computer security
consideration.
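The title-authority arrangement can also be sketched briefly. In this hypothetical Python illustration (names and structure are mine, for the example only), ownership queries go to Alice's registry rather than to whoever happens to hold a usable capability, and a new grant (say, to Bob's heirs) revokes the old one and, indirectly, anything Bob handed to Mallet.

```python
# Hypothetical sketch: Alice as the privileged source of title.

class TitleAuthority:
    """Alice's registry: it, not the capability holders, answers
    the question "who owns this land?"."""
    def __init__(self):
        self._titles = {}   # land name -> (owner name, liveness cell)

    def grant(self, land, owner):
        self.revoke(land)                 # cut off any prior grant first
        cell = {"live": True}
        self._titles[land] = (owner, cell)
        def use(action):                  # the capability Alice hands out
            if not cell["live"]:
                raise PermissionError("title revoked")
            return f"{action} on {land}"
        return use

    def revoke(self, land):
        if land in self._titles:
            self._titles[land][1]["live"] = False

    def owner_of(self, land):
        entry = self._titles.get(land)
        return entry[0] if entry else None

alice = TitleAuthority()
bob_cap = alice.grant("manor", "Bob")
mallet_cap = bob_cap                      # Bob passes his capability along
assert alice.owner_of("manor") == "Bob"   # Alice still answers "Bob"
heir_cap = alice.grant("manor", "heir")   # Bob dies; old grant is revoked,
# so bob_cap and mallet_cap now raise PermissionError, while heir_cap works.
```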

>Suppose the power Alice gives Bob is the ability to communicate
>PRIVATELY with Alice. Bob can relay messages for Mallet, but the
>communication is no longer private; Bob could listen in. Adding
>encryption can't help because Alice would have to agree to add another
>layer of encryption to an already secure channel (which if she is
>interested in keeping Bob from transferring his power she will not
>do). Bob could give Mallet all the encryption keys but Alice will only
>accept communication routed through Bob (or broadcast so Bob can
>receive it), and she uses symmetric encryption so that if Bob keeps
>the keys he can still read everything.

This is subtle. The power that Alice has given to Bob is the ability for Bob to
communicate privately with Alice, not for anybody to do so. Bob is free to
pass this power to Mallet, but he can only pass the power that he actually
has. Thus what Mallet acquires is still the power for Bob to communicate
privately with Alice.  Communications that Alice receives over this channel
are, from Alice's perspective, still communications from Bob, even if they were
actually sent by Mallet.  Bob cannot pass to Mallet the power for Mallet to
communicate privately with Alice because he does not possess this power in the
first place.
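The attribution point can be made concrete with a small sketch (hypothetical Python, names mine): Alice attributes every message on a channel to the party she issued that channel to, so handing the channel to Mallet only lets Mallet speak *as Bob*.

```python
# Hypothetical sketch: attribution belongs to Alice, not the sender.

class Alice:
    """Alice issues each party a personal channel and attributes every
    message arriving on it to the party she gave that channel to."""
    def __init__(self):
        self.inbox = []

    def make_channel(self, party):
        def send(message):
            # The attribution is fixed by Alice at issue time; nothing
            # the channel holder does can change it.
            self.inbox.append((party, message))
        return send

alice = Alice()
bob_channel = alice.make_channel("Bob")
mallet_sends = bob_channel        # Bob hands his channel to Mallet
mallet_sends("hello")             # Mallet exercises Bob's power...
assert alice.inbox == [("Bob", "hello")]   # ...but Alice sees "from Bob"
```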

Chip