[cap-talk] Selling capabilities programming

James A. Donald jamesd at echeque.com
Sat Jul 21 18:22:30 EDT 2007


Jonathan S. Shapiro:
 > James: The pattern of your conversations is that you
 > declare them to be over when they do not immediately
 > go your way -- most notably when someone who knows
 > much more (in a purely technical sense) about the
 > subject at hand tells you that you are wrong.
 >
 > [...]
 >
 > I also encourage you to actually listen to and think
 > about the responses, which you do not appear to be
 > doing.

In your conversation with Jed Donnelly, he demonstrated
that controlling and tracking transfers of authority
between entities does not change the security properties
of the system, and that therefore such tracking and
control is a utility of minor value, to be done
according to convenience and feasibility, rather than an
essential and vital characteristic.  That demonstration
should have relegated the issue of whether capabilities
should be "protected" to a mere implementation detail,
one way of doing things among many, rather than a vital
principle to be hotly defended.

Since Jed's discussion appeared to have little impact, I
thought it entirely useless to add my own.

Prematurely specifying some characteristics of a system,
and setting those characteristics in stone as vital,
while other characteristics are left entirely
unspecified, or only vaguely and inconsistently related
to the stone-hard ones, is apt to result in indefinite
schedule slip, or endless "progress" towards a goal that
is never accomplished.

 > As things stand right now, *you* are the single
 > largest impediment to the distribution of Coyotos,
 > because you are so busy wasting our time that you are
 > stopping the job from getting done.

I long ago discovered that not everyone is going to
agree with my brilliant reasoning; therefore the correct
conduct is to present one's reasoning in the briefest
fashion, such that an intelligent person who wishes to
understand can follow and agree, for to go on at
greater length is unlikely to persuade someone who does
not wish to agree.  I am attempting to follow that
policy, rather than engage in pointless and aggravating
debate.  I urge you to also follow that policy.

 > Perhaps there is some other building block that you
 > think will work. If so, I would be interested to hear
 > it described.

If you say so, I will describe it in more detail than I
have already described it, though I think such
description highly unwise for the reasons I just
explained above.

Indeed, I expect that attempting to explain this matter
yet again, far from being welcomed, is likely to get me
blocked from this list, but here goes, yet again, since
you asked:

Consider our example of aunt Vera, who is sitting in
front of the machine on her desk, commanding a program
on a machine in her closet to access a file on little
Johnny's computer.

We assume that aunt Vera has somehow earlier logged in
to the machine in the closet.  Current solutions to
that problem have serious security flaws, but
capabilities are not the solution to that class of
problem, so we will simply assume it solved, and focus
on problems to which capabilities are the solution.
There is a server on the closet machine, and a client
on the desktop, both of them highly privileged programs
with ample rights to do all sorts of dangerous things.
We aim for them to safely pass more limited
capabilities to other programs, so that we only have to
trust some software instead of all software.

Through her client, aunt Vera launches the edit program
in the closet.  When launched, it, or its container,
receives a secret shared with privileged code on the
desktop computer.  This secret establishes a
symmetrically encrypted channel, which enables the
editor to have a window on the desktop whose title bar
is not entirely under its control.  The title bar, or
the first part of it, contains the title of the launch
file that launched the editor.  The secret provides a
channel, and the channel provides a capability; were
anything else to obtain that secret, it could interfere
with the window.  Normally the editor program does not
access this secret directly; instead, the operating
system utilities in its environment store and use the
secret on its behalf.  But that is a convention that
misbehaving software is not required to follow.  In
particular, software on computers that aunt Vera does
not control can do as it pleases with any information
it obtains, so the secret has to be kept secret.  The
secret provides a capability, and even if all software
on one computer is prevented from using that
information to obtain a capability, ensuring the same
on every computer on the network is likely to be
difficult, since different computers are likely to be
owned and controlled by people with different agendas.
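To make the idea concrete - that possession of the shared
secret *is* the capability to the window channel - here is a
minimal sketch in Python.  All names are illustrative, and a
real system would use an authenticated-encryption channel
rather than bare HMAC tags; the point is only that authority
follows the secret:

```python
# Sketch: a shared secret acting as a capability.
# Hypothetical names; not any real system's API.
import hmac, hashlib, secrets

def new_channel_secret() -> bytes:
    """Fresh secret minted when the editor is launched."""
    return secrets.token_bytes(32)

def send(secret: bytes, command: bytes) -> tuple[bytes, bytes]:
    """A holder of the secret tags a command for the window."""
    tag = hmac.new(secret, command, hashlib.sha256).digest()
    return command, tag

def accept(secret: bytes, command: bytes, tag: bytes) -> bool:
    """Desktop side: only holders of the secret are obeyed."""
    expected = hmac.new(secret, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = new_channel_secret()
cmd, tag = send(key, b"set-title: letter-to-johnny.txt")
assert accept(key, cmd, tag)            # capability holder succeeds
assert not accept(key, b"forged", tag)  # without the secret, no authority
```

Anything, anywhere on the network, that learns `key` can
drive the window just as the editor can - which is why the
secret must stay secret.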

Within its window on aunt Vera's desktop, the editor
program running in the closet can launch a powerbox
running on aunt Vera's desktop, which can select any
file on the network that aunt Vera can access by virtue
of being logged in on the desktop.  Through the
powerbox, aunt Vera is somehow logged on to a file
server on little Johnny's computer - another very tough
problem I am skipping over, a problem that has in the
past resulted in worms propagating through Microsoft
networks via file servers, and a problem that really
needs a solution before capabilities programming
becomes useful over networks.

The powerbox on aunt Vera's computer has a secret shared
with the editor in the closet, providing a channel to
and from the closet, and a secret shared with the file
server, providing a channel to and from little Johnny's
computer.
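The powerbox's position can be sketched the same way: it
holds two distinct channel secrets and relays between them,
so the editor's authority and the file server's authority
never mix.  Again, all names here are hypothetical:

```python
# Sketch: the powerbox holds one secret per channel and
# relays messages, re-tagging them for the far channel.
import hmac, hashlib, secrets

def tag(secret: bytes, msg: bytes) -> bytes:
    return hmac.new(secret, msg, hashlib.sha256).digest()

def check(secret: bytes, msg: bytes, t: bytes) -> bool:
    return hmac.compare_digest(tag(secret, msg), t)

editor_secret = secrets.token_bytes(32)  # shared with the closet editor
server_secret = secrets.token_bytes(32)  # shared with Johnny's file server

def relay(msg: bytes, t: bytes) -> tuple[bytes, bytes]:
    """Accept a message only from the editor channel, then
    forward it tagged with the file-server channel's secret."""
    if not check(editor_secret, msg, t):
        raise PermissionError("not from the editor channel")
    return msg, tag(server_secret, msg)

m, t2 = relay(b"open letter.txt", tag(editor_secret, b"open letter.txt"))
assert check(server_secret, m, t2)  # forwarded with the server's authority
```

Neither endpoint ever learns the other channel's secret; the
powerbox is the only entity holding both.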

At present, one normally grants file permissions to
users.  In the kind of capability system envisaged, one
could grant file permissions to users, to programs, and
to programs running under a particular user.  If a file
permission is granted to a program, the program can
access the file on its own initiative; if granted to a
user, the *user* can grant access through the file
select menu.  So if a user has permission to access a
file, the file will show up in the file select powerbox.
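A rough sketch of such a grant table, keyed by user,
program, or the pair - the principal names and file names
are made up for illustration:

```python
# Sketch: permissions granted to users, programs, or a
# program running under a particular user.  Hypothetical.
from typing import NamedTuple, Optional

class Principal(NamedTuple):
    user: Optional[str]     # None = any user
    program: Optional[str]  # None = any program

GRANTS: dict = {
    Principal("vera", None):     {"letter.txt"},  # user grant
    Principal(None, "backup"):   {"letter.txt"},  # program grant
    Principal("vera", "editor"): {"diary.txt"},   # user+program grant
}

def allowed(user: str, program: str, path: str) -> bool:
    """A request matches a grant to the user alone, the
    program alone, or that program under that user."""
    for p in (Principal(user, None),
              Principal(None, program),
              Principal(user, program)):
        if path in GRANTS.get(p, set()):
            return True
    return False

assert allowed("vera", "editor", "letter.txt")    # via the user grant
assert allowed("vera", "editor", "diary.txt")     # via the pair grant
assert not allowed("johnny", "editor", "diary.txt")
```

A user grant is what makes the file appear in the file
select powerbox; a program grant lets the program act on
its own initiative.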

Aunt Vera selects a file.  The file server and the
powerbox agree on a new shared secret, which provides a
new encrypted channel.  Whosoever possesses the new
shared secret can access that file.  There is a timeout
on the new shared secret, which can be renewed by
pinging, or by reissue.
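That per-file secret, with its timeout and ping renewal,
might look like this - the timeout value and class names
are arbitrary illustrations, not a protocol:

```python
# Sketch: a per-file secret that expires unless renewed
# by pinging.  Hypothetical names and policy.
import secrets, time

TIMEOUT = 60.0  # seconds; arbitrary for illustration

class FileCapability:
    """Possession of `self.secret` is the authority to the file."""
    def __init__(self, path: str):
        self.path = path
        self.secret = secrets.token_bytes(32)
        self.expires = time.monotonic() + TIMEOUT

    def valid(self, presented: bytes) -> bool:
        return (time.monotonic() < self.expires
                and secrets.compare_digest(self.secret, presented))

    def ping(self, presented: bytes) -> bool:
        """Renew the timeout; only a current holder may renew."""
        if self.valid(presented):
            self.expires = time.monotonic() + TIMEOUT
            return True
        return False

cap = FileCapability("letter.txt")
assert cap.valid(cap.secret)                    # holder has access
assert cap.ping(cap.secret)                     # renewal by pinging
assert not cap.valid(secrets.token_bytes(32))   # wrong secret: no access
```

Once the timeout lapses without a ping, the authority
evaporates on its own - which is the point of issuing small
capabilities limited in time.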

Now suppose one implemented all of the above such that
all the secrets were held by code running below ring 3,
that is, by OS code.  Then the secrets would be
protected from code running at ring 3 - at least on
that particular machine.  However, keeping secrets in
below-ring-3 code, while it may well be an excellent
idea depending on the details of the processor and
operating system, does not change the security
properties of the system, since anything that hostile
or buggy ring 3 code could do by leaking secrets, it
could do by misusing capabilities.  As I said earlier,
whatever could be done by an editor leaking the
capability to the file it is supposed to edit could be
done by an editor with trojaned macros.

Regardless of the details of how the secrets are
protected, there is no globally accessible set of
identities and capabilities from which one can
ascertain what entities hold what capabilities, because
no entity is global to the network: there are many
users, many powerboxes, many machines.  Nor can there
be any guarantee that nowhere on the network is there
hostile code that would snatch and misuse any secrets
it could detect.  Thus one needs to take the view that
it is necessary to frequently generate fresh secrets
and hide them - the view that capabilities are
represented by secrets, that secrecy, rather than
protection, is the essential characteristic, and that
capabilities, once granted, can be and likely will be
abused.  The point is to issue small capabilities,
limited in time, not to attempt to control the use and
subsequent communication of those capabilities.

 > The question is: communicable to *whom*? If these
 > authorities are sent to an independent party, then
 > yes, something bad has happened. On the other hand,
 > sending these authorities to a subprogram that is
 > confined and exclusively held may make a great deal of
 > sense.

This, however, is only an argument for protected
capabilities if one uses "confinement" in the special
sense that assumes protected capabilities.  Otherwise
it is merely an argument for capabilities, not
necessarily protected capabilities - in other words,
your definition begs the question, resulting in a
circular argument.  Your argument assumes that
"confinement" so defined is crucial - the very claim
that has already been refuted, briefly by myself, and
lucidly and at length by Jed Donnelly.  As Jed put it,
any code that possesses a capability and can
communicate with an entity can accomplish, by ordinary
communication, any bad effects that it could accomplish
by communicating the capability to that entity - thus
it is impossible to confine and closely hold a
subprogram in the required sense.  Badly behaved
software can always in effect transfer capabilities,
and in ways that cannot be tracked, which means that
the value obtained by controlling and tracking the
transfer of capabilities is rather limited.  In my more
specific example, I pointed out that a confined and
exclusively held editor with trojaned macros will have
the same ill effects as an unconfined, not exclusively
held editor that leaves copies of capabilities lying
around in memory as data that others can access.

Because you make a circular argument, we go around in
circles, and each time we circle, the temperature goes
up.  The only solution is to let the matter rest.

All my arguments I have already made, earlier at
sufficient length, now at excessive length.  Similarly,
you repeat yourself.
