[cap-talk] CapDesk demo, capability demos in general

James A. Donald jamesd at echeque.com
Sun Oct 7 22:18:42 EDT 2007


Jonathan S. Shapiro wrote:
 > I agree with your statement about permissions held by
 > the installer, but I see no difficulty of
 > implementation if this is achieved by designing the
 > installer to hold initial capabilities that are not
 > accessible to the administrator's account. Conversely,
 > I see a number of difficulties if such a scheme is
 > implemented by ACLs, because administrators generally
 > *are* in a position to manipulate the content of ACLs
 > in general.
 >
 > Can you articulate what it is about ACLs that you
 > believe makes them well-suited to this problem? Also,
 > can you articulate how my concern about the use of
 > ACLs for this application can be defeated?

You are comparing abstract capabilities with concrete
and particular ACLs - comparing the general approach of
using capabilities with the particular way ACLs have
been implemented in a particular case.

Since I am trying to envisage how systems should be
written, rather than how to make use of existing systems
as they are, I would like to compare approach with
approach.

Let us use the metaphor that capabilities are keys, and
ACLs are membership lists that are checked against IDs.
Now if there are some activities that we want only one
particular piece of software to perform, and no other,
then the restriction functions as an ID, rather than a
key.  One can always use a key as an ID, just as you
suggest.  If the trusted software is trusted not to pass
the key around, no problem - the key reliably identifies
the software.  Yet a key is *not* an ID, because it
*can* be passed around.  It would not be good practice
to assume that whosoever can open Joe Bloggs's locker is
Joe Bloggs, but it would be good practice to change the
lock on Joe Bloggs's locker and issue the key to someone
carrying an ID card that says "Joe Bloggs".
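The key/ID distinction can be sketched in code.  This is
a toy model with hypothetical names, not any particular
system's API: a capability is an unforgeable reference
that grants access to whoever holds it, while an ACL
check compares an asserted identity against a list.

```python
# Toy model of the key-vs-ID distinction (all names hypothetical).

class Capability:
    """A key: holding the object *is* the permission, and it can be handed on."""
    def __init__(self, resource):
        self._resource = resource

    def open(self):
        return f"opened {self._resource}"


class AclProtectedResource:
    """A membership list: access is decided by checking the caller's ID."""
    def __init__(self, name, allowed_ids):
        self.name = name
        self._allowed_ids = set(allowed_ids)

    def open(self, caller_id):
        if caller_id not in self._allowed_ids:
            raise PermissionError(f"{caller_id} is not on the list for {self.name}")
        return f"opened {self.name}"


# The key can be passed around: whoever holds it can use it.
locker_key = Capability("Joe Bloggs's locker")
borrowed = locker_key             # delegation is just handing over the object
print(borrowed.open())            # works, regardless of who "borrowed" is

# The ACL cannot be delegated by handing something over: the check is on identity.
locker = AclProtectedResource("Joe Bloggs's locker", allowed_ids={"joe.bloggs"})
print(locker.open("joe.bloggs"))  # works
# locker.open("someone.else")     # would raise PermissionError
```

The point of the sketch is the delegation line: nothing
stops the holder of the capability from handing it on,
which is exactly why it cannot serve as a reliable ID.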

The more capabilities are used to protect and manage
permissions that are large, important, and durable, the
more of a problem it becomes to protect the capabilities
themselves, and the more capabilities become a
vulnerability rather than a protection.

Protection is a matter of degree, and nothing is
absolutely secure.  The more something has to be kept
secure, the greater the overhead, risks, and complexity
in doing so.  If I don't want X to be passed around, I
don't want it to have the characteristics of something
designed to be passed around.

 > your two statements seem to imply that a two-mechanism
 > system might be worthwhile.

I believe so, but of course, multiplying mechanisms also
multiplies complexity and creates more failure modes, so
there is a case for having a single mechanism to do
everything, and if so, that mechanism would have to be
protected capabilities.

But I have been spending some time trying to think how
to make protected capabilities function so that one
single kind of protected capability addresses all
problems, and such solutions seem painful to me.  I
don't think they scale over networks where many people
whose interests may conflict control different parts of
the network - it seems to me we do need multiple
mechanisms to deal with multiple problem cases.
"Protected capabilities" means that you can trust the
operating system and the hardware, but cannot trust the
non-OS software.  Unfortunately, real-life systems are
networks with multiple operating systems installed by
people with conflicting interests.  We do not have such
a clean, binary, black-and-white break, and a solution
that addresses one such conflict of interest does not
necessarily address another.

It used to be that we typically had many users on a
single system with very little software, and the problem
was protecting the users from each other.  Back in those
days we *could* trust the hardware and the software, but
not necessarily the user.

Now the user typically owns the hardware and software,
and the problem of trust is reversed.  Can the user
trust all of the software?  No, he cannot, and he cannot
trust all of the OS software either - for example,
drivers.  Pretty soon, as people own more and more
hardware, we will be seeing trojaned hardware as well -
indeed we already are, for example cell phones with
GPS-like capabilities that are inaccessible to the user
but accessible to someone else.

 > If so, I suggest that it must be a logical AND rather
 > than a logical OR. That is: if both ACLs and
 > capabilities are used, BOTH must permit the operation
 > in question in order for the operation to proceed.

To return to our locker analogy: post office boxes are
on the outside of the building, and whosoever has the
key can open them - often the person with the key is
not the registered owner but someone acting on behalf of
the registered owner.  Safety deposit boxes, by
contrast, are on the inside of the building, and to open
one you need both the key and an identity that matches
the list.
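The logical-AND rule maps directly onto the safety
deposit box: the operation proceeds only when both
checks pass.  A minimal sketch, with hypothetical names
(continuing the same toy model, not any real system):

```python
# Toy safety-deposit-box model: access requires the key AND a matching identity.

class DepositBox:
    def __init__(self, box_id, key_token, allowed_ids):
        self._box_id = box_id
        self._key_token = key_token           # the capability side (the key)
        self._allowed_ids = set(allowed_ids)  # the ACL side (the list)

    def open(self, presented_key, caller_id):
        # Logical AND: both the key check and the membership check must succeed.
        if presented_key is not self._key_token:
            raise PermissionError("wrong key")
        if caller_id not in self._allowed_ids:
            raise PermissionError("identity not on the list")
        return f"box {self._box_id} opened"


key = object()  # an unforgeable token standing in for the physical key
box = DepositBox("42", key, allowed_ids={"joe.bloggs"})

print(box.open(key, "joe.bloggs"))  # both checks pass
# box.open(object(), "joe.bloggs")  # wrong key -> PermissionError
# box.open(key, "someone.else")     # right key, wrong ID -> PermissionError
```

Note that the AND composition fails closed: a stolen key
is useless without a listed identity, and a listed
identity is useless without the key, which is the
property the either-or arrangement lacks.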

 > The problem with either-or systems is that things slip
 > through the cracks where (a) the two systems have been
 > configured in subtly different ways, or (b) the
 > overlap in what the two systems can express is
 > imperfect.

Protocols generally fail at the boundaries and edge
cases.  But one can only minimize boundaries and edge
cases, not eliminate them.  The solution is to do the
boundaries right, not to eliminate them altogether.  The
solution has to be at least as irregular and complicated
as the problem.

