the cost of complexity (was: Re: [E-Lang] Java 2 "Security" (was: Re: Welcome Chris Skalka and Scott Smith of Johns Hopkins))

Jonathan S. Shapiro shap@eros-os.org
Wed, 24 Jan 2001 10:50:00 -0500


This has become a very silly discussion, because the participants are
speaking at cross purposes.

There are elements of a protection system whose architecture can be
demonstrated to be mathematically correct. For such elements, there is no
*architectural* justification for defense in depth with respect to the things
protected by these elements. There is a strong argument in such cases
*against* defense in depth: a multiplicity of mechanisms leads to confusion
and errors.

Of these elements, some (but not all) can be implemented with high
confidence subject to the requirement that the underlying hardware is not
tampered with. For example, the EROS constructor mechanism is simple enough
that we can do an in-depth inspection of the constructor code and verify
that the implementation is correct. Others are complex enough that they
cannot be validated in this way. If you cannot validate the implementation,
defense in depth may be appropriate.
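
To make concrete the kind of check involved, here is a minimal sketch (in
Python, purely illustrative; none of these names are EROS APIs) of the shape
of a constructor-style confinement test: the yield is confined if every
capability it holds is either recognizably harmless or is itself a
certified-confined constructor.

    # Illustrative sketch only: the shape of a constructor-style
    # confinement check. All names here are hypothetical; this is
    # not the EROS implementation.

    def certifies_confinement(held_caps, is_trivially_safe, confined_ctors):
        """A yield is confined if every capability it holds is either
        recognizably harmless or names another certified-confined
        constructor, so nothing it holds can leak information out."""
        for cap in held_caps:
            if is_trivially_safe(cap):
                continue   # e.g. read-only access to immutable data
            if cap in confined_ctors:
                continue   # that constructor's yields are themselves confined
            return False   # anything else is a potential outward channel
        return True

    # A program holding only a safe capability and a confined constructor
    # passes; one holding an arbitrary write capability fails.
    safe = lambda cap: cap == "read-only-data"
    assert certifies_confinement(["read-only-data", "ctor-A"], safe, {"ctor-A"})
    assert not certifies_confinement(["net-write"], safe, {"ctor-A"})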

Most of the elements above are things that I would consider "protection
mechanisms" (though confinement sits on the border between protection and
policy). Capabilities and ACLs are examples of protection mechanisms. You
may feel that ACLs are a bad protection model, but it is inarguable that we
can specify their behavior and enforce the specification. The question of
their utility is a question about the utility and relevance of the
specification, not the correctness of the mechanism. For the moment, let us
set the merit debate aside and address the issue at hand.
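
To illustrate what I mean by specifying and enforcing such a mechanism, here
is a minimal sketch (Python, with invented names and principals) of a toy
ACL: the specification is simply set membership of (principal, operation)
pairs, and enforcement is a single check against that set.

    # Toy ACL sketch. The specification: principal P may perform
    # operation OP on this object iff (P, OP) is in the object's
    # access list. Names and principals are invented.

    class AclGuarded:
        def __init__(self, acl):
            self.acl = set(acl)              # set of (principal, operation)

        def check(self, principal, operation):
            # Enforcement is exactly the specification: set membership.
            if (principal, operation) not in self.acl:
                raise PermissionError(f"{principal} may not {operation}")

    doc = AclGuarded({("alice", "read"), ("alice", "write"), ("bob", "read")})
    doc.check("bob", "read")                 # permitted
    # doc.check("bob", "write")              # would raise PermissionError

Whether that specification is the *right* one is precisely the merit debate
set aside above.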

Layered above the protection mechanisms there are security policies. These
can sometimes be formally reduced to something rigorous. If they prove to be
reducible, they are sometimes realizable in assured implementations. Some
policies can be rigorously described but are nonetheless very difficult to
implement: the lattice model is an example. Here again, defense in depth is a
reasonable hedge against implementation errors.
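
For concreteness, the dominance check at the heart of the lattice model is
trivial to state; here is a minimal Bell-LaPadula-style sketch (the levels
and categories are invented). The implementation difficulty lies not in this
check, but in applying it to every channel through which information can
flow.

    # Bell-LaPadula-style label dominance (illustrative; the levels
    # and categories are made up). A label is (level, category_set).

    LEVELS = {"unclassified": 0, "secret": 1, "topsecret": 2}

    def dominates(a, b):
        """Label a dominates label b iff a's level is at least b's and
        a's category set includes b's."""
        (a_level, a_cats), (b_level, b_cats) = a, b
        return LEVELS[a_level] >= LEVELS[b_level] and a_cats >= b_cats

    subject = ("secret", {"crypto"})
    obj = ("unclassified", set())
    assert dominates(subject, obj)      # subject may read obj: no "read up"
    assert not dominates(obj, subject)  # the reverse flow is forbidden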

Outside of this there is a class of security policies that can be described
as follows:

1. I do not know enough, or do not have the skills, to express the objective
rigorously.
2. I know that some particular behavior has been a source of difficulty in
the past, or is likely to be a source of difficulty in the future.
3. I can clearly identify the problematic behavior, and I wish to prevent it
from occurring.

Some of us (myself included) find such unprincipled approaches unsettling.
However, it is simply stupid to believe that we will build a usable system
without the ability to support this. The bottom line is that the real system
must exist in the real world, and principled understandings of patterns of
attacks only emerge after the pattern is recognized as a pattern. During the
period when this understanding is emerging, only ad hoc mechanisms are
possible.

The problem with ad hoc mechanisms is that they are necessarily fuzzy and
incomplete. This is where defense in depth is appropriate. There is nothing
wrong with deploying multiple, similar filters, each of which may catch a
class of errors.

Some of you have argued that mandatory policies are, in effect, a waste of
time. This is simply naive. As a trivial but compelling example, consider
packet filters. There are many types of packets that we do not wish to pass
to untrusted applications or from untrusted applications, because we can
detect from their content that transmission of these packets will have
undesirable consequences (such as getting our machine kicked off the net).
Such filters are never perfect. They evolve and new problems are found with
them. They are nonetheless valuable.
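
As a minimal sketch of what such a filter looks like (the rules and field
names are invented, and the rule set is deliberately incomplete; that
incompleteness is the point), note that each predicate encodes one piece of
accumulated folklore, and the filter is just their conjunction:

    # Sketch of an ad hoc egress filter. Each predicate encodes one
    # known source of trouble; the rules are invented and, like any
    # real filter, incomplete.

    OUR_PREFIXES = ("10.", "192.168.")       # hypothetical local networks

    def spoofed_source(pkt):
        # Outbound packets should carry one of our own source addresses.
        return not pkt["src"].startswith(OUR_PREFIXES)

    def smurf_style(pkt):
        # ICMP to a broadcast address has historically meant abuse.
        return pkt["proto"] == "icmp" and pkt["dst"].endswith(".255")

    FILTERS = [spoofed_source, smurf_style]

    def permit(pkt):
        # Defense in depth: a packet must survive every filter to pass.
        return not any(f(pkt) for f in FILTERS)

    assert permit({"src": "10.0.0.5", "dst": "1.2.3.4", "proto": "tcp"})
    assert not permit({"src": "5.6.7.8", "dst": "1.2.3.4", "proto": "tcp"})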

But the real problem is that ALL of you arguing against defense in depth
have made a fatally flawed assumption: you all assume that some application
can be trusted to mediate the exchanges and install the necessary ability to
rescind access. There are a number of very serious problems with this
assumption:

1. It assumes that the software can be trusted. I believe that if you
consider more carefully the implications of introducing new object types,
you will conclude that no pre-installed (therefore trusted) software is
possible that satisfies this assumption fully.

2. It assumes that when the user says "yes" they mean "yes". This is
empirically untrue, and dialogs that say 'are you sure' rapidly come to be
greeted by a habitual ENTER. This is a UI problem, but I suspect it is
endemic in any such mechanism, regardless of how it is expressed.

3. It does not account usably (note: "usably", not "correctly") for
transitivity. Access to an object should not depend on the path by which you
obtained the object. Consider: Alice transmits a capability to Bob, who
transmits it to Mary. Imagine we intend that Mary *should* have valid access
to the object named by this capability. We require a mechanism in which Bob's
access can be rescinded without rescinding Mary's. Mark Miller will argue
that access should be path dependent. The problem with this is that all
programs must now program defensively against path-based rescission. The
defensive programming becomes so
pervasive that a more centralized solution becomes mandatory from an
engineering perspective.
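
For reference, here is a minimal sketch of the path-dependent pattern at
issue (a caretaker-style revocable forwarder, as has been described on this
list; the code and names are merely illustrative). Alice hands Bob a
forwarder rather than the capability itself; when Bob passes that forwarder
to Mary, cutting Bob necessarily cuts Mary too, unless Alice issues Mary a
separate forwarder, which is exactly the mediation burden I am objecting to.

    # Caretaker-style revocable forwarder (illustrative sketch).
    # Revoking the caretaker severs everyone downstream of it, which
    # is why access becomes path dependent.

    class Caretaker:
        def __init__(self, target):
            self._target = target            # the underlying capability

        def invoke(self, *args):
            if self._target is None:
                raise PermissionError("access rescinded")
            return self._target(*args)

        def revoke(self):
            self._target = None              # cut the forwarder

    secret = lambda: "the data"
    to_bob = Caretaker(secret)   # Alice gives Bob a forwarder, not `secret`
    to_mary = to_bob             # Bob passes his forwarder on to Mary

    to_bob.revoke()              # Alice rescinds Bob's access...
    # to_mary.invoke()           # ...and Mary's access dies with it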

Before any of you continue to argue for manageability on the grounds that we
can build correct mediation code, I urge you to consider that the very best
programmers generate one security error per thousand lines of code, and a
more typical rate is one per hundred lines. At those rates, even a
10,000-line mediation layer should be expected to contain ten security errors
in the best case and a hundred in the typical one. Proceed from the
assumption that flawless software can be achieved only in very specialized
circumstances, and then re-argue the case.

Also, even if we assume that the software is correct, nobody has offered a
cohesive architecture for mediation. It's all rather a handwave, and until
we have a real, evaluable architecture for mediation, none of us really
know what can be mediated successfully. I confess that I do not see how to
build such a thing, but then I have not put any great amount of energy into
it (yet).


Jonathan