the cost of complexity (was: Re: [E-Lang] Java 2 "Security" (was: Re: Welcome Chris Skalka and Scott Smith of Johns Hopkins))

Mark S. Miller
Fri, 26 Jan 2001 16:17:43 -0800

At 06:00 AM Friday 1/26/01, Ralph Hartley wrote:
>Mark S. Miller wrote:
>>At 07:50 AM Wednesday 1/24/01, Jonathan S. Shapiro wrote:
>>>may feel that ACLs are a bad protection model, but it is inarguable that we
>>>can specify their behavior and enforce the specification.
>>As you know, and I believe agree with, the page I cited earlier disputes that the ACL security model can be enforced.  
>>Of the subset of ACLs that can be enforced, the only part of that subset not expressible in capabilities that's been identified is the one Ralph Hartley pointed out, also explained on that page.
>That is a counter example not an exhaustive list. When you make a statement 
>like "this proves there is no X", and I reply "but here is an X", you are 
>not justified in saying "this proves there is only one X". A "proof" of a 
>proposition with a known counter example is like an OS with a known security 
>hole; one is enough, but there is never just one.

I agree, and never said or meant to imply anything else.  That's what the 
"that's been identified" qualifier was supposed to be about.  In an earlier 
message in this thread I wrote:
>Of course, if my hypothesis is wrong, so much the better.  If capabilities 
>are not strictly superior on all practical counts, your experimental 
>language may be the best way to find this out.  Previous debate on e-lang 
>has already (to my great surprise) uncovered a case where ACLs could, in 
>principle in some contexts, express and enforce a prohibition that 
>capabilities could not.  
>Although I don't consider this case to have practical effect, since math 
>doesn't care much about the boundary between practical and impractical, the 
>existence of one case makes plausible the existence of others, some of which 
>may have practical effects.

Back to Ralph:
>I still think the best example of an enforceable non-transferable power is 
>one that you dismissed as not computational enough. Mary wishes to allow Bob 
>to sleep with her; she does not want to sleep with Mallet, even if Bob 
>wants her to. This does have a computational equivalent. Mary wants to 
>allow the program Bob to run in a sealed box whenever Bob, who also runs on 
>the outside, makes a request; she does not want to let Mallet do that. Bob's 
>identity can be proven by a signature, and he can't take anything from 
>Mallet with him into the box.

I simply don't understand this example.  What does "that" refer to in "she does not want to let Mallet do that"?  She does not want to allow Mallet to do what?  Please assume I'm stupid and spell it out.

>Another example is a life estate. Mary wants to allow Bob to do something as 
>long as Bob lives. Bob is permitted to transfer the power during his 
>lifetime, so running a message laundry does not violate the rule. Assuming 
>Mary has a reliable way to know that messages actually come from Bob 
>(cryptographic signatures can be given away or stolen so they are not 
>reliable), Bob has no way to give Mallet the right in a way that will 
>survive him. Programs, processes, connections etc. do die.

Can you state this case without apparently counterfactual assumptions, or 
explain why the "assume" above is actually realizable?  Or explain why we 
shouldn't care that the assumption is counterfactual?

Are you assuming that Mary cannot otherwise tell if Bob is dead?  Obviously, 
if she can tell, then the problem becomes trivial -- Mary revokes the 
capability when she finds out Bob is dead.  In that case, Bob would have 
delegated the full right Mary gave him -- the right to use the power until 
Bob is dead.

>A more concrete example would be where Bob is defined as the entity on the 
>other end of a connection. Mary wants to give out a power that only lasts as 
>long as that connection. She could issue a capability that is revoked when 
>the connection closes IF she is sure she will be notified. If the underlying 
>protocol identifies the connection a message comes from but gives no notice 
>when a connection closes ...

This is a realistic case, especially as both E and Droplets do, in fact, 
drop capabilities on connection loss.  Although both E and Droplets, being 
pure capability systems, don't place any fundamental security properties on 
this connection loss, Ralph's argument might imply that something would be 
gained by doing so.  (Whether it would then be a good engineering tradeoff 
is another matter, but we can't make an informed tradeoff unless we 
understand what we might be losing.)

Ralph, I wouldn't worry about the notification issue.  If this is Mary's 
intention, she can insist on a periodic keep-alive from Bob as her condition 
for not revoking the capabilities.
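The keep-alive condition can be sketched the same way. In this hedged sketch the deadline policy and the injected clock are illustrative choices of mine, not anything E or Droplets prescribes:

```python
class Revoked(Exception):
    """Raised when a revoked capability is invoked."""

def make_keepalive_caretaker(target, timeout, now):
    """Forwarder that Mary revokes on her own when Bob's keep-alives
    stop.  `now` is an injected clock function so the sketch stays
    deterministic; the deadline policy is an illustrative choice."""
    state = {"target": target, "last_seen": now()}

    def keepalive():                  # Bob calls this periodically
        state["last_seen"] = now()

    def forwarder(*args):
        if state["target"] is None or now() - state["last_seen"] > timeout:
            state["target"] = None    # Mary's condition failed: revoke
            raise Revoked("keep-alive deadline missed")
        return state["target"](*args)

    return forwarder, keepalive

clock = [0]                           # a fake clock we advance by hand
facet, keepalive = make_keepalive_caretaker(lambda: "granted",
                                            timeout=10, now=lambda: clock[0])
assert facet() == "granted"
clock[0] = 8
keepalive()                           # Bob checks in at t=8
clock[0] = 15
assert facet() == "granted"           # 15 - 8 = 7 <= 10: still live
clock[0] = 30                         # Bob has gone silent
```

Once the clock reaches 30, the next invocation misses the deadline and the facet revokes itself; Mary needs no notification from the transport at all.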

In any case, in Ralph's scenario, Mary has succeeded in giving out a power 
that only lasts as long as the Bob-Mary connection.  The right that Bob now 
has is a right that only lasts this long.  Bob can still proxy to Mallet the 
full power he has -- a right to use the power only so long as the Bob-Mallet 
connection stays up.
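Bob's proxying can be sketched by chaining two revocable forwarders, so that Mallet's access can never outlive Bob's. Again an illustrative Python sketch, not E or Droplets code:

```python
class Revoked(Exception):
    """Raised when a revoked capability is invoked."""

def make_caretaker(target):
    """Return (forwarder, revoker): a revocable forwarder plus the
    facet that revokes it."""
    state = {"target": target}

    def forwarder(*args):
        if state["target"] is None:
            raise Revoked("capability has been revoked")
        return state["target"](*args)

    def revoker():
        state["target"] = None

    return forwarder, revoker

power = lambda: "use of the power"    # illustrative stand-in

# Mary -> Bob: revoked when the Mary-Bob connection dies.
bob_facet, on_mary_bob_death = make_caretaker(power)

# Bob -> Mallet: Bob proxies the *full* right he holds -- which is
# already only the right to use the power while Mary hasn't revoked.
mallet_facet, on_bob_mallet_death = make_caretaker(bob_facet)

assert mallet_facet() == "use of the power"
on_mary_bob_death()                   # the Mary-Bob connection drops
# Mallet's access dies with it: he never held more than Bob's right.
```

Nothing Bob can do with his facet manufactures a right that survives Mary's revocation, which is exactly the point about delegating the full, and only the full, power he has.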

This situation is similar to the earlier one -- under the trivializing 
assumption that Mary can detect when Bob is dead.  In the earlier case, the 
trivializing assumption is probably not what Ralph intended.  In this case, 
Mary's revocation depends on her detection that the Mary-Bob connection is 
down.

In neither case does Mary gain anything by suppressing the 3-vat Granovetter 
handoff between the power (acting as Carol), Bob (acting as Alice), and 
Mallet (acting as Bob).  Put another way, when Mary receives a proper 
invocation on the power, it should never matter to her what connection this 
invocation is received on.  However, when the Mary-Bob connection dies, she 
should then revoke the capability to the power that she gave out.

In E, Mary can detect connection loss using "whenBroken" (see E in a 
Walnut).  Since it is something Mary *can* detect, there is no reasonable 
way to prevent Mary from using this information to make a revocation 
decision, nor should there be.  I see nothing in this that violates the 
claim that Mary is operating in a pure capability system, and therefore that 
a pure capability system is able to achieve the effect Ralph explains above. 
Nothing about this picture seems ACL-like to me.
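To make that concrete, here is a toy model of revocation driven by a connection-break reactor. The API shape is an illustrative guess at the idea, not E's actual "whenBroken" interface:

```python
class Revoked(Exception):
    """Raised when a revoked capability is invoked."""

class Connection:
    """Toy vat-to-vat connection.  when_broken registers a reactor to
    run on disconnect -- loosely modeling the E facility named in the
    text; the API shape here is an illustrative guess, not E's own."""
    def __init__(self):
        self._reactors = []

    def when_broken(self, reactor):
        self._reactors.append(reactor)

    def disconnect(self):
        for reactor in self._reactors:
            reactor()

live = {"ok": True}

def power():                          # the capability Mary gave out
    if not live["ok"]:
        raise Revoked("revoked on connection loss")
    return "granted"

mary_bob = Connection()
# Mary's revocation decision, driven only by information she can
# observe locally -- nothing ACL-like about it:
mary_bob.when_broken(lambda: live.update(ok=False))

assert power() == "granted"
mary_bob.disconnect()                 # the Mary-Bob connection dies
```

Note that the check never asks *which connection* an invocation arrived on; revocation is purely a decision Mary makes from information available to her.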

>The bottom line. ACLs can enforce things capabilities cannot if and only if 
>there exist verifiable identities that really matter. Since ACLs are based 
>on identity this should not be a surprise.

Except for non-electronic or seemingly counterfactual values of 
"verifiable" ("sleeps with" and "electronic but better than crypto" 
respectively), I don't believe you've identified any further cases.  
However, as my quote above indicates, I am *not* claiming you won't.  As we 
agree, the existence of one counter-example makes plausible the existence of 
others.  We should all continue looking.