[E-Lang] defense in depth
Mark S. Miller
Sun, 28 Jan 2001 23:44:28 -0800
At 01:49 PM Wednesday 1/24/01, Nikita Borisov wrote:
>Of course, this additional layer does not come without cost. But the
>question is to trade off this cost of defense in depth against the
>potential benefits. It seems that a lot of people in this thread are
>asserting that the additional benefit is zero, which I find hard to
>believe. Am I misunderstanding what people are saying?
I can only speak definitively for myself, though I do believe Tyler
especially is saying compatible things. I think Tyler & I both tried to
make a distinction between two kinds of redundancy that aid security, but
I'm not sure if this distinction took. Both sides of this distinction might
be called "defense in depth", so, to avoid confusion, I'll use two different
names: "redundant resistance" and "debugging".
* Redundant resistance: On the one hand, we have defenses in the physical
world against physical attack, in which quantitative degrees of attack are
matched against quantitative degrees of defense, and an overwhelming
quantity on either side often leads to success. In this world, redundant
defenses are sensible precautions against ever greater efforts by an attacker.
In computer security, I said these kinds of defenses are sensible regarding
the embedding of computation in the physical and social world (social
engineering attacks, microwave attacks, power analysis, etc.), and regarding
the faithfulness of the realization of our security primitives out of more
primitive material (cryptanalysis).
In this thread, when people used the phrase "defense in depth", I've been
taking them to mean this case (redundant resistance) specifically. As this
thread has developed, I realize I may have interpreted them wrongly. If so,
apologies for the confusion.
* Debugging: OTOH, we have defenses within the logical/mathematical world in
which those security primitives are the "laws of physics". In this world,
our vulnerability to attack is almost never due to greater efforts on the
part of an attacker, but rather to a bug in our own defensive logic.
Redundancy at this level should be targeted at catching us in our own
stupidity, not at containing the damage from an attacker that has mounted a
heroic effort and overcome our first line of defense. The "rely" and
"suspect" notation I proposed is precisely an example of such redundancy for
debugging, as are static type systems in normal software engineering.
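To make the "debugging" sense concrete, here's a rough Java analogy (the
names are invented for illustration, and this is not E's actual rely/suspect
notation). The extra type declarations add no power to the program; their
whole value is as a redundant restatement of our intent, which the compiler
then checks us against:

    public final class Sketch {

        record Dollars(long cents) {}     // an amount of money
        record AccountId(long id) {}      // names an account

        static void withdraw(AccountId from, Dollars amount) {
            System.out.println("withdraw " + amount.cents()
                               + " cents from account " + from.id());
        }

        public static void main(String[] args) {
            AccountId acct = new AccountId(42);
            Dollars fee = new Dollars(250);
            withdraw(acct, fee);       // fine
            // withdraw(fee, acct);    // rejected by the compiler: our own
            //                         // confusion is caught before it ships
        }
    }

The program would compute exactly the same thing with bare longs; the
redundancy is aimed at catching our own slips, not at resisting an attacker.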
A note on perfection: I never claimed that perfection was actually possible.
Rather, the danger to ourselves is our shortfall from perfection, and
therefore it's better to spend the effort debugging (looking for and fixing
logical flaws) than to pile up logically flawed defenses on one
another. This corresponds directly to conventional wisdom in non-security
software engineering. When one reads code and sees some parts of a
program whose purpose is to compensate for bugs in other parts of a program,
this isn't normally taken to be a sign of robustness. Robust code gets that
way, not by compensating for bugs in the hope of surviving them, but by
making bugs cause a clear and diagnostic failure, so they get fixed early.
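As a toy illustration (made-up names, not drawn from any particular system),
compare the compensating style with the fail-fast style:

    public final class FailFast {

        // Compensating style: a negative balance "can't happen", so we
        // quietly clamp it.  The bug that produced it survives, unseen.
        static long clampBalance(long balance) {
            return balance < 0 ? 0 : balance;
        }

        // Fail-fast style: the same "can't happen" condition produces a
        // clear, diagnostic failure at the point of the bug, so it gets
        // found and fixed early instead of being silently survived.
        static long checkBalance(long balance) {
            if (balance < 0) {
                throw new IllegalStateException("negative balance: " + balance);
            }
            return balance;
        }

        public static void main(String[] args) {
            System.out.println(clampBalance(-5));  // prints 0, bug hidden
            System.out.println(checkBalance(-5));  // throws, bug exposed
        }
    }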
However, in light of MarcS' posts, I now realize that my main defensive
tool, the Principle of Least Authority (POLA) (also known as the Principle
of Least Privilege) fits in both the above categories. It fits in the
debugging category because it vastly aids our informal proofs-in-the-head
when reasoning about dangers. It so radically bounds the dangers left to
consider in examining a particular body of code as to make the examination
vastly more tractable, and the examiner vastly more likely to spot actual
vulnerabilities. Not perfect, but a huge help.
But, MarcS also makes a good point when he observes that, in a system built
according to a strict and fine-grained application of POLA, an attacker who
gets past one set of defenses (because of uncaught security bugs, not because
of greater attacker effort) will *often* (not always) find there's only very
limited authority for him to abuse, and further locked doors between him and
more authority. POLA also leads to redundant resistance. My retraction:
I think this is good.
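A rough sketch of this double benefit, using invented Java interfaces in the
spirit of object-capability systems (not any real system's API). The viewer
is handed exactly the authority it needs and nothing more:

    import java.util.ArrayList;
    import java.util.List;

    public final class Pola {

        // The only authority a LogViewer can exercise is what this
        // interface exposes: reading lines.  Nothing here reaches the
        // rest of the filesystem, the network, or any ambient authority.
        interface ReadOnlyLog {
            List<String> lines();
        }

        // Reviewing LogViewer for safety means reviewing this one
        // parameter (the debugging aid); and if LogViewer turns out to
        // be buggy or malicious anyway, reading this one log is all the
        // authority there is to abuse (the redundant resistance).
        static final class LogViewer {
            private final ReadOnlyLog log;
            LogViewer(ReadOnlyLog log) { this.log = log; }
            void show() { log.lines().forEach(System.out::println); }
        }

        public static void main(String[] args) {
            List<String> entries = new ArrayList<>(List.of("start", "stop"));
            LogViewer viewer = new LogViewer(() -> List.copyOf(entries));
            viewer.show();
        }
    }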
(Note: There are also many places where there's only one wall between an
attacker and the ability to be all-powerful, such as Norm's example of the
ability to take control of the MMU. For these, I continue to believe the
correct answers are careful examination, debugging, and redundancy only in
support of debugging, not redundant resistance.)
The KeyKOS-inspired systems (KeyKOS, EROS, Joule, E) all use POLA both to
aid bug finding, and as redundancy in resisting an attacker. Despite having
retracted my earlier claim, I still find no criticism of the systems from
which I abstracted this claim. Rather, I find I understand the strength of
these systems better. My respect has only gone up for them, for POLA, and
yes, for redundant resistance.
Having conceded the value of redundant resistance, I reluctantly grant
that there is, in theory at least, as David suggests, a possible tradeoff
between techniques that aid in debugging's quest for logical perfection, and
techniques for redundant resistance. POLA doesn't force us to consider this
tradeoff, as it aids both. Should we find a genuine tradeoff, it'll be
interesting to examine it. My strong suspicion: it's never good to retreat
in the quest for logical perfection, as (within this mathematical world) our
own confusion is always a greater danger than our attacker's efforts. In
the systems that inspire the greatest confidence in me and in many others I
respect, I see no examples of such a retreat. I look forward to examining
some concrete proposed tradeoffs.
With these distinctions in hand, would anyone like to try restating what
kind of defense stack introspection is supposed to be?