[cap-talk] What Horton can do: accountability in cap systems.
capability at webstart.com
Sat Jun 2 22:00:20 EDT 2007
At 04:49 PM 6/2/2007, David Hopwood wrote:
>Jed Donnelley wrote:
> > [NLTSS] was one of the
> > implementations that was criticized in this document:
> > TRADITIONAL CAPABILITY-BASED SYSTEMS:
> > AN ANALYSIS OF THEIR ABILITY TO MEET THE
> > TRUSTED COMPUTER SECURITY EVALUATION CRITERIA
> > http://www.webstart.com/jed/papers/P-1935/
>My view on this paper (including my, admittedly somewhat bluntly
>expressed, view of the competence displayed by its authors) has
>not changed from what I said in
I recall that, and of course my response, which I again
looked over and still feel fits my views:
I hope you understand that it wasn't just the P-1935
paper. That was just one example, and it served to reinforce
the dominant view of the time, a view I believe is even
more dominant today. This view is so dominant that
the relative few who are even aware of capabilities
as a means to POLA generally believe that they are
inadequate for other reasons (e.g. MLS and accountability)
and impractically complex (e.g. Lampson).
I expect that we have at least one element of agreement
regarding a lack of understanding in that paper and in
the dominant view, namely with regard to what we've more
recently referred to as the "cooperating conspirators"
'problem'. The authors of P-1935, and I believe actually
most in the IT community, don't appreciate the impossibility
of blocking the sharing of authority among communicating
entities ("conspirators"). When they reflect on what
seems to them a lack of control in capability systems
(the open sharing of capabilities across communication
links), I believe their expressed concerns don't show
adequate appreciation for the fact that blocking such
sharing of authority is impossible in any system, since
a conspirator can always share by proxying.
However, beyond that I feel considerable sympathy for
them in their expressed desire for accountability for
actions - as required by the TCSEC.
> > 2. Identification of responsibility for permitted
> > actions.
>It would be a perfectly self-consistent and defensible position to
>argue that the need for 2 was greatly overblown in the historical
>criticism of capabilities, and that if we want to be able to provide 2,
>we should do so either simply or not at all.
I have two reactions. First, with regard to doing so simply: by
all means! If people have simpler mechanisms than Horton, please
bring them forward.
However, regarding the need for responsibility tracking for
actions being "overblown" - I don't agree. I think this
would be a good topic of discussion for the list.
The need for accountability is so deeply ingrained in
human systems, including IT systems, that I can well
appreciate the need people feel for such accountability.
When it then comes to what MarkM refers to as "reactive"
control (e.g. cutting off or otherwise manipulating
access based on identity), I believe this too is
a deeply felt need for people and their organizations.
>After all, current non-capability systems generally *don't* provide 2
>in a usable or reliable way. I would argue that they often provide it
>in a way that does more harm than good, by allowing a determined attacker
>to frame innocent users. Partly, that is because they fail to usably or
>reliably provide 1, and so the mechanisms that are intended to provide 2
>can be bypassed or subverted.
I certainly appreciate your argument. However, I don't
believe the argument that what today's systems are trying
to do is not worthwhile, just because they can be subverted,
will sell (it has not sold).
I know that in our center one of the primary tools of
defense against unauthorized or otherwise inappropriate
actions is disabling users. In most cases the inappropriate
actions are being taken by attackers who, as you say, have
framed innocent users. Still, disabling the compromised
accounts is effective until such time as trust can be
reestablished for access only by appropriate users.
Of course in capability systems we can block user
authentication and thus new connections to "shell"
processes that are trusted with a user's resources.
However, once an account is compromised or otherwise
abused, it is difficult in all capability systems
that I'm aware of to block access through capabilities
that were communicated, delegated, or are being
proxied. This is an area where I believe something
like Horton adds value.
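As an aside, the kind of reactive blocking described above is usually
sketched in capability circles as a revocable forwarder (the
"caretaker" pattern). Here is a minimal Python illustration; the
names are my own invention, not anything from Horton or NLTSS:

```python
# A minimal sketch of the classic "caretaker" (revocable forwarder)
# pattern. Instead of handing out a capability directly, the grantor
# hands out a forwarder it can later sever, giving the reactive
# control (cutting off access after a compromise) discussed above.

class Revoked(Exception):
    """Raised when a severed capability is invoked."""

class Caretaker:
    def __init__(self, target):
        self._target = target          # the real capability

    def revoke(self):
        self._target = None            # sever the forwarder

    def __getattr__(self, name):
        # forward attribute access until the link is severed
        if self._target is None:
            raise Revoked("capability has been revoked")
        return getattr(self._target, name)

class File:
    """Stand-in resource."""
    def read(self):
        return "contents"

grant = Caretaker(File())              # hand `grant` to the other party
```

Note the grantor keeps the caretaker's `revoke` facet to itself; the
other party only ever sees the forwarded behavior.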
Hmmm. In thinking about this just now in terms of my
own suggestion about a possible Horton "improvement"
(eliminating C from the delegation from A to B), I
can imagine one value that I would be losing. By
including C in the loop, C could time stamp the
delegation. One might find that reverting (blocking)
any delegations after a certain time (e.g. the time
of an account compromise) may be important. I don't
see how such a time stamp could be trusted if C
was eliminated from the delegation process, since,
by assumption, A and B are not to be trusted. I
suppose one could provide a trusted time stamping
service, but at that point we seem to be really getting
back to keeping a trusted third party, like C, in the
delegation loop.
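To make the point concrete, here is a hypothetical sketch (names and
structure are my own, not from the Horton protocol) of C keeping a
trusted timestamp per delegation, so that every delegation made after
a compromise time can be reverted wholesale:

```python
import time

class Delegation:
    """A delegated capability with a timestamp recorded by C."""
    def __init__(self, target, stamped_at):
        self.target = target
        self.stamped_at = stamped_at   # stamped by C; A and B are untrusted
        self.revoked = False

    def invoke(self, *args):
        if self.revoked:
            raise PermissionError("delegation reverted")
        return self.target(*args)

class Issuer:
    """Plays C's role: the only party whose clock we trust."""
    def __init__(self, clock=time.time):
        self.clock = clock
        self.log = []                  # every delegation C has stamped

    def delegate(self, target):
        d = Delegation(target, self.clock())
        self.log.append(d)
        return d

    def revert_after(self, cutoff):
        # block every delegation stamped after the compromise time
        for d in self.log:
            if d.stamped_at > cutoff:
                d.revoked = True
```

The key property is that `stamped_at` is assigned by C's clock, not
claimed by A or B; with C out of the loop the timestamp would rest on
exactly the parties assumed untrustworthy.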
>My initial reaction to Horton was also that it is undesirably complex
>for the job it is doing. Maybe we can find something simpler that will do
>effectively the same job.
To which I again and enthusiastically say, bring on those
ideas for simplifications! I also found it difficult to
follow the Horton protocol when MarkM first diagrammed
it up. I think part of that was the presentation, though
I'm afraid I'm at a loss to come up with a better one.
I believe the basic idea is quite simple - as I described
in the simplified description that I wrote in response to
James Donald's criticism. I also took a slightly different
angle on the problem in my description where I tried
to eliminate C from the delegation from A to B.
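For readers who haven't followed those threads, the essence as I
understand it can be caricatured in a few lines of Python. This is
entirely my own simplification, not MarkM's protocol: delegation
hands out a proxy that stamps each request with its chain of
responsibility, so the provider can attribute actions per chain
(and, in a fuller version, cut a chain off):

```python
class Provider:
    """The resource owner; keeps the accountability record."""
    def __init__(self):
        self.audit_log = []

    def serve(self, request, chain):
        self.audit_log.append((chain, request))   # who-did-what record
        return "served " + request

class Stub:
    """What a grantor hands out: a proxy that stamps the chain of
    responsibility onto every request it forwards."""
    def __init__(self, provider, chain):
        self.provider = provider
        self.chain = chain                        # e.g. ("A",) or ("A", "B")

    def invoke(self, request):
        return self.provider.serve(request, self.chain)

    def delegate(self, who):
        # delegation extends the chain rather than copying raw authority
        return Stub(self.provider, self.chain + (who,))

p = Provider()
a = Stub(p, ("A",))      # A's original capability
b = a.delegate("B")      # A delegates to B
```

After `b.invoke("read")`, the provider's log attributes the action to
B as delegated by A, which is the accountability record the TCSEC-era
critics found missing from pure capability systems.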
From my perspective we are very early in our analysis,
re engineering, etc. of Horton and related protocols that
answer the basic need for accountability in capability
systems. I hope I speak for all when I welcome any and
all discussion on this topic.
I also believe that whether the accountability a facility
like Horton provides is worthwhile is in itself a worthy
topic of discussion. I've worked on and with many
capability systems that did not provide for such accountability,
and we didn't seem to miss it. However, looking back on
those experiences now, I believe I can see how we were
missing something that made others uncomfortable. I can
see that there was a substantive basis to their criticism
of a lack of accountability in capability systems, a
criticism that I hope we are able to redress with our
work on responsibility delegation mechanisms like Horton.
I believe that capability systems can do all that ambient
authority ("user") systems can provide and a great deal
more. Even beyond POLA, I believe capability systems can
do 'identities' better than systems like Unix and Windows,
for example by:
1. Providing records of explicit delegation and
control based on those delegations, and
2. Providing much better accounting for mixed identity
operations (e.g. where the needs of multiple people come
together and are served by a computer system).
I'm sure there are more. Once we have the ability to
effectively express both fine-grained access communication
(i.e. "traditional" capability communication, without
delegation) and delegation, I believe we will be
able to get much more out of our computer systems.
I believe the two most important steps in selling this
as a positive direction are:
A. Demonstrating that nothing is being lost, and
B. Demonstrating that such systems can be practically
built and used.
>However, I don't agree with James' analysis
>that Horton is too complex as a result of the use of mathematical reduction
>in its design or description. Mathematics in general, and its use of
>reduction-based proofs in particular, has been *wildly* successful; it's
>the bedrock of modern science and technology.
We have the same reaction there.