[cap-talk] Re: [e-lang] Introducing Emily, "...capabilities are
daw at cs.berkeley.edu
Wed Mar 1 16:38:42 EST 2006
Constantine Plotnikov writes:
>>>Let's make a contrived example:
>>>1. There is a component A that listens to network commands.
>>>2. There is a component B that can launch ballistic missiles.
>>>3. Component B is not reachable from component A using pointers.
>>>However, if the authors of component A and component B are in conspiracy,
>>>it might still be possible to give a command over the network to launch a
>>>ballistic missile using covert channels.
>The security goal was to ensure that there is no back door that allows
>missiles to be launched by a command given over the network.
Upper bound analysis of component B will show that this security goal
is not met. More precisely, upper bound analysis will fail to prove
conformance with this security goal. A good systems designer will take
this as an indication that maybe component B needs to be re-implemented in
a way that makes it possible to verify conformance with the security goal.
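To make that concrete, here is a minimal sketch in OCaml (Emily being an
OCaml subset); the names launch_cap, global_launcher, and make_b are
hypothetical, not from this thread. It contrasts a B that an upper bound
analysis must treat pessimistically with a B whose use of the launch
authority can be bounded just by seeing who is handed the capability:

    (* Hypothetical: the ability to launch, reified as a value. *)
    type launch_cap = { launch : unit -> unit }

    (* Hard to verify: B reaches the launcher through mutable global
       state, so a worst-case analysis must assume that anything able
       to call B might cause a launch. *)
    let global_launcher : launch_cap option ref = ref None

    let b_handle_message msg =
      match !global_launcher with
      | Some cap -> if msg = "launch" then cap.launch ()
      | None -> ()

    (* Re-implemented for verifiability: the capability is an explicit
       argument of B's constructor, so "who could ever launch" is
       bounded by "who is ever handed a launch_cap". *)
    let make_b (cap : launch_cap) =
      fun msg -> if msg = "launch" then cap.launch ()

In the second version the analysis can at least see exactly which
components are handed a launch_cap, and can then concentrate on how
those few components use it.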
>The example shows that
>no audit that takes into account only local properties of the object can
>make sure that the system is secure against certain kinds of attack.
Local analysis of the code of B suffices to determine that it is not
possible to verify conformance with this security goal.
>If upper bound analysis is used, we can only conclude that everything is
>possible until we have reviewed every line of the code.
That's not true. If component X has a capability that is closely held,
we may be able to verify how X uses that capability just by inspecting the
source code of X -- and we may be able to draw some conclusions about what
is possible, even without seeing any of the rest of the code of the system
(by making worst-case assumptions about the rest of the code).
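As a hedged illustration of that kind of local conclusion (again with
hypothetical names), consider a component X that keeps a closely held
capability inside a closure:

    (* Hypothetical component X.  The capability [cap] lives only in
       this closure; it is never returned, stored globally, or passed
       to anything else.  From this code alone we can conclude that
       the capability is exercised at most once, and only for a
       request that passes [authorize], no matter what the rest of
       the program does. *)
    type launch_cap = { launch : unit -> unit }

    let make_x (cap : launch_cap) (authorize : string -> bool) =
      let used = ref false in
      fun request ->
        if authorize request && not !used then begin
          used := true;
          cap.launch ()
        end

The worst-case assumption about the rest of the code only affects which
requests reach X; it cannot enlarge what X does with the capability.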
>The capability graph is not generally equal to the
>authority graph, due to covert channels.
Indeed. But if the system is designed in the right way, upper bounds on
the capability graph are often sufficient to demonstrate conformance to
some natural security goals.
Note that upper bounds on the capability graph can often be used to
derive upper bounds on the authority graph, if the system is constructed
appropriately. For instance, if the upper bound on the capability graph
determines that the missile will never be launched, then this gives an
upper bound on the authority graph as well. If an upper bound on the
capability graph determines that the missile cannot be launched without
capability C, then this gives an upper bound on the authority graph, too.
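A sketch of that derivation, with a hypothetical Missile module: if the
only operation that launches demands a capability C, then a
capability-graph bound on who ever holds C immediately bounds who has the
authority to launch.

    (* Hypothetical: C is the only token [fire] accepts.  In a real
       design the constructor would be hidden behind an abstract type
       so that C is unforgeable. *)
    type c = C

    module Missile : sig
      val fire : c -> unit   (* no other way to launch *)
    end = struct
      let fire C = print_endline "launch"
    end

If the capability-graph upper bound shows that component A is never passed
a value of type c, the corresponding authority-graph upper bound follows:
A cannot launch on its own, and can at most ask a holder of C to do so.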
Sometimes the upper bounds on the capability graph are too coarse, and
then we might need to fall back on some other types of reasoning, or else
give up on the hope of verifying the security of the system. Oh well.
Capabilities aren't a silver bullet.
>/We should not bother plugging authority leaks [using capabilities],
>because it does not give meaningful guarantees about authority leaks. We
>can leak authority anyway using covert channels./
Well, that reasoning isn't valid in this case. Here's an analogy:
"A screwdriver isn't useful for hammering in nails. Therefore, a
screwdriver isn't useful. Therefore, we should not bother stocking
screwdrivers in our hardware stores."
That's bogus reasoning, because it fails to recognize that a screwdriver
might be useful for some other purpose. Back to our case, capability
confinement is indeed useful for many purposes, even if not for everything
you might want.
In contrast, banning two colluding adversaries from communicating via
exceptions is not useful. It is not useful for any purpose. Therefore,
we should not bother
putting such bans in our programming language.
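For completeness, a hypothetical sketch of why such a ban closes nothing:
two colluding components that can already exchange calls can encode a bit
in an exception or, just as easily, in an ordinary return value, so
removing the exception path does not shrink what they can communicate.

    (* One bit signalled via an exception... *)
    exception Signal

    let send_via_exn bit = if bit then raise Signal
    let recv_via_exn thunk = try thunk (); false with Signal -> true

    (* ...and the same bit via a perfectly ordinary return value. *)
    let send_via_return bit = bit
    let recv_via_return thunk = thunk ()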
>The statements have the same logical structure and talk about the same
>thing, so to be consistent one should accept or reject both, IMO.
I don't think they are parallel, and I hope the above explains why.