[E-Lang] Java 2 "Security" (was: Re: Welcome Chris Skalka and Scott Smith
of Johns Hopkins)
Jonathan S. Shapiro
shap@cs.jhu.edu
Thu, 01 Feb 2001 07:40:04 -0500
David Wagner wrote:
>
> Jonathan S. Shapiro wrote:
> >...The real problem is that it has long since been formally shown
> >[Ullman '74] that the ACL class of protection mechanism is broken. The
> >"safety problem" is undecideable in an ACL system, from which it follows
> >that the security of an ACL system cannot in general be shown.
>
> I guess I have two questions:
> Doesn't the same result also apply to capabilities systems?
As it happens, no it doesn't. The two models really are different at a
fairly deep level -- something that has been consistently missed by
people who don't understand one or both models well.
In the *finite* case, the capability model is both decidable and safe
[Jones, Lipton, Snyder], [Bishop]. In the finite case, the ACL model is
also decidable. Unfortunately it is unsafe.
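To see why the finite capability case is decidable, note that with a
finite set of subjects, objects, and rights (and no create rule),
exhaustive application of the take and grant rewrite rules must reach a
fixed point. The sketch below is *not* the Jones/Lipton/Snyder
algorithm (theirs decides can-share far more efficiently); it is a
brute-force illustration, with invented names, of why termination --
and hence decidability -- is guaranteed in the finite case:

```python
def saturate(rights):
    """rights: dict mapping (subject, object) -> set of rights,
    where 't' = take and 'g' = grant. Applies the take-grant
    rewrite rules until no rule adds a new right. Since the
    universe of (subject, object, right) triples is finite and
    each pass only adds rights, the loop must terminate."""
    changed = True
    while changed:
        changed = False
        entries = [(s, x, set(rs)) for (s, x), rs in rights.items()]
        for s, x, rs in entries:
            if 't' in rs:  # take: s may copy any right x holds
                for (x2, y), ry in list(rights.items()):
                    if x2 == x:
                        cell = rights.setdefault((s, y), set())
                        if not ry <= cell:
                            cell |= ry
                            changed = True
            if 'g' in rs:  # grant: s may give x any right s holds
                for (s2, y), ry in list(rights.items()):
                    if s2 == s:
                        cell = rights.setdefault((x, y), set())
                        if not ry <= cell:
                            cell |= ry
                            changed = True
    return rights

def can_acquire(rights, subject, obj, right):
    """The safety question: can `subject` ever hold `right` on `obj`?"""
    closure = saturate({k: set(v) for k, v in rights.items()})
    return right in closure.get((subject, obj), set())
```

The point is not efficiency; the point is that the question has a
definite, computable answer, which is exactly what the ACL model loses.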
> Why should the undecidability bother me?
Because it means that there is no way to know that a system is secure.
In the finite case it is worse: it means we know the system is insecurable.
> Note that the model of an access control system used in their paper
> was agnostic about how access rights are stored. (If they are stored
> with the subject, you get a capabilities system; if stored with the
> object, you get an ACL system.) Thus, it seems to me that their
> results apply equally to both capabilities and ACL systems.
The difference between the two is not how the rights are stored. It is
the commands: the rules for making revisions to the access graph. This
has the feel of a very productive conversation, but if you haven't done
so, please go read the Harrison, Ruzzo, and Ullman paper "Protection in
Operating Systems". It will save everyone a lot of reading if we all
have common terms to use.
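For readers who haven't gotten to the paper yet: the HRU model is an
access matrix plus a fixed set of guarded commands, and the safety
question asks whether a given right can ever be entered into a given
cell. A minimal sketch -- the single command shown is illustrative, not
taken from the paper:

```python
class ProtectionSystem:
    """A stripped-down HRU-style protection system: an access
    matrix plus commands built from guarded 'enter' operations."""

    def __init__(self):
        self.matrix = {}  # (subject, object) -> set of rights

    def has(self, s, o, r):
        return r in self.matrix.get((s, o), set())

    def enter(self, r, s, o):
        # HRU primitive operation: enter right r into cell (s, o)
        self.matrix.setdefault((s, o), set()).add(r)

    def confer_read(self, owner, friend, obj):
        """command confer_read(owner, friend, obj):
             if 'own' in (owner, obj)
             then enter 'r' into (friend, obj)"""
        if self.has(owner, obj, 'own'):
            self.enter('r', friend, obj)
```

HRU's undecidability result comes from showing that a general set of
such commands can simulate a Turing machine. The ACL/capability
distinction shows up precisely as different restrictions on the family
of commands, not as a difference in where the matrix is stored.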
> In practice, whether you use a capabilities-based system or an
> ACL-based system, you will modify and restrict it appropriately
> until you can prove whatever theorems you want to prove.
One can certainly modify the designs, and this is a promising direction
of research. The difficulty is the decidability result: it proves
staggeringly easy to take a workable design and turn it into a
non-workable one. People have been trying to come up with a workable
ACL-based design for about 30 years, and they have basically failed. If
you've got one that works, don't be shy -- publish it!
> Second, I'm not sure why undecidability should be a problem.
> Undecidability says it's not possible to give a general algorithm
> that statically says, for all possible systems, whether that system
> is safe. But in practice, we never need such an algorithm; we only
> want (at most) to prove things about the single system we're building,
> and we're willing to modify the implementation to enable us to find
> proofs.
There are two kinds of undecidability. General undecidability is as you
say. It is also, of course, possible to ask whether a problem is
decidable in some particular model. ACLs fail the particular model test.
Being *willing* to modify the system is not the same as knowing *how*
to modify it to achieve success.
> To argue that undecidability is fundamental in all security systems,
> no matter how they are structured, consider this code fragment:
> [...]
> if (f())
> do something insecure
> [...]
> where f() is a horrendously complicated function, and where "do
> something insecure" might include broadcasting all your capabilities,
> or whatever.
No, no. Deciding safety and deciding the halting problem are completely
different. Safety is specifically not determined by the behavior of
programs. It is determined by the relationships between entities in a
system.
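Concretely: the safety analysis already assumes the worst case about
every program, so the value of f() never enters into it. A subject that
"broadcasts all its capabilities" can only send them along channels it
actually holds, and the analysis simply computes reachability over that
finite graph. A sketch, with an invented channel structure:

```python
from collections import deque

def may_ever_hold(channels, initial_holders):
    """channels: subject -> set of subjects it can send capabilities
    to. Returns every subject that may ever hold a capability whose
    initial holders are given, assuming each holder forwards it
    whenever it can -- i.e., the worst-case behavior, with no
    reasoning about what any program actually does."""
    seen = set(initial_holders)
    work = deque(initial_holders)
    while work:
        s = work.popleft()
        for t in channels.get(s, ()):
            if t not in seen:
                seen.add(t)
                work.append(t)
    return seen
```

Whether f() returns true, false, or never returns at all, the answer is
the same: the damage is bounded by the structure of the graph, and that
bound is what the safety question is about.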
> In general, static analysis of any non-trivial property is undecidable
> (Rice's theorem), so for every static analysis algorithm, there will be
> some safe systems that you can't prove to be safe.
That's probably true, but the questions then become: "what do we mean
by trivial?" and "is there practical value in particular trivial
models?" The answer to the second appears to be "yes".
Jonathan