> > All powers can be described as the ability to send or receive a
> Within the electronic world, I see no alternative. I suspect this is where
> our real disagreement lies.
Yes indeed. Remember that electronic is NOT the same as computational; if it were, there would indeed be no alternative.
> If you are speaking of the physical world, sure.
> However, this isn't the physical world we are talking about here -- it is
> the world inside of computation.
This is an assumption that you hadn't stated before. It is a strong assumption, which rules out much of what people use computers for.
If your security model can't deal with real locks and doors, what good is it? Must we have another security model for securing non-computational things? If so, that model needs to include computational security as a proper subset.
> Isn't it the case that whatever powers a computational entity has, it
> exercises those powers by issuing some kind of invocation or command? Is
> not that invocation or command described by information that is somehow
> communicated to an underlying system that can bring it about? For example,
> a process under an operating system exercises powers by performing a system
> call. This communicates only information across the user/system boundary.
Even in the totally abstract electronic world there are things that cannot be described in computational terms. Consider property: in the old days, land could not be sold. The king (Alice) would grant title to her vassal (Bob), who was NOT free to pass it on to anyone he liked (e.g. Mallet). He could give Mallet all the products of the land, and he could let Mallet boss the peasants around, but he could not transfer ownership. This was not a distinction without a difference: it mattered a lot to Bob's heirs, and Bob could always change his mind about cooperating with Mallet.
Do not be confused by the fact that land is a physical thing. The same concepts can apply to purely electronic resources.
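To make the title example concrete, here is a minimal sketch in Python. The names (`Estate`, `UseFacet`, the `"alice-seal"` check) are mine and purely illustrative; the point is only that everything Bob holds lets him share the *use* of the land, while the title itself lives in a registry he cannot touch.

```python
class Estate:
    """The crown's registry holds title; vassals get only a use facet."""
    def __init__(self, holder):
        self._holder = holder          # only the crown can change this

    def holder(self):
        return self._holder

    def transfer(self, crown_seal, new_holder):
        # Only the crown (Alice) can reassign title; "alice-seal" is a
        # stand-in for whatever real authority check you like.
        if crown_seal != "alice-seal":
            raise PermissionError("only the crown may transfer title")
        self._holder = new_holder


class UseFacet:
    """What Bob actually holds: the right to work the land, nothing more."""
    def __init__(self, estate):
        self._estate = estate

    def harvest(self):
        return "grain"                 # Bob may give this to Mallet freely


estate = Estate("Bob")
facet = UseFacet(estate)

# Bob can hand Mallet the produce, and even the facet itself...
produce = facet.harvest()

# ...but nothing Bob holds lets him change who the registry says owns
# the land; estate.holder() is still "Bob".
```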
You might be able to build a consistent security model restricted to purely computational actors and resources, where only getting information and giving commands mattered. But I doubt it: trust itself is not defined computationally. And even if you did, it certainly wouldn't be good for much. Whenever you certified a system as secure you would have to add small print saying "This warranty is void if the system is ever used in the real world."
To show how tricky it is, I will note that your claim has a counterexample even on your terms. The problem (I think) is that you are thinking of a power as a purely computational thing, while trust is not computational.
Suppose the power Alice gives Bob is the ability to communicate PRIVATELY with Alice. Bob can relay messages for Mallet, but the communication is no longer private; Bob could listen in. Adding encryption can't help, because Alice would have to agree to add another layer of encryption to an already secure channel (which, if she is interested in keeping Bob from transferring his power, she will not do). Bob could give Mallet all the encryption keys, but Alice will only accept communication routed through Bob (or broadcast so Bob can receive it), and she uses symmetric encryption, so if Bob keeps the keys he can still read everything.
Bob could promise not to read any messages, and since he is cooperating with Mallet he keeps his word, but there is no way for Mallet to know this. Nowhere was it stated that Mallet trusts Bob, or any facilities available to Bob; the fact that Bob actually is loyal to Mallet doesn't help Mallet's peace of mind.
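The counterexample can be sketched in a few lines of Python. The classes here are mine, not anyone's real protocol: Alice's end accepts traffic only on the connection she gave Bob, so anything Mallet sends must pass through code Bob controls, and Bob can read it whether or not he promised not to.

```python
class PrivateChannel:
    """Alice's end: accepts messages only from the peer she authorized."""
    def __init__(self, authorized_peer):
        self._peer = authorized_peer
        self.inbox = []

    def send(self, peer, plaintext):
        if peer is not self._peer:
            raise PermissionError("Alice only accepts traffic from Bob")
        self.inbox.append(plaintext)


class Bob:
    """Bob can relay for Mallet, but every message passes before his eyes."""
    def __init__(self):
        self.seen = []

    def relay(self, channel, plaintext):
        self.seen.append(plaintext)   # Bob COULD look, promise or no promise
        channel.send(self, plaintext)


bob = Bob()
channel = PrivateChannel(authorized_peer=bob)

# Mallet has no route to Alice except through Bob, so for Mallet the
# channel is not private, whatever Bob's actual intentions are.
bob.relay(channel, "Mallet's secret")
```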
> If Mallet can instruct -- only by
> communicating information -- Bob to issue the invocation or command Mallet
> is interested in, how is that different, for Mallet's purposes, from Mallet
> issuing the invocation or command himself?
Because Bob might know about it.
Also, Mallet cannot trust Bob to keep cooperating in the future (even if he will).
Clearly what Mallet has is less than what Bob has.
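This is essentially the revocable-forwarder (caretaker) pattern from the capability literature, and it shows the asymmetry directly. A minimal sketch, with names of my own invention: Mallet's proxy works only as long as Bob chooses to keep it working.

```python
class Forwarder:
    """What Mallet holds: Bob's forwarder, alive only while Bob cooperates."""
    def __init__(self, target):
        self._target = target

    def invoke(self, command):
        if self._target is None:
            raise PermissionError("Bob withdrew his cooperation")
        return self._target(command)   # Bob's code also SEES every command

    def revoke(self):
        self._target = None


def underlying(command):
    """Stand-in for the real invocation only Bob can make."""
    return "executed " + command


proxy = Forwarder(underlying)
proxy.invoke("read file")   # works while Bob cooperates
proxy.revoke()              # Bob changes his mind
# proxy.invoke("read file") would now raise PermissionError
```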
This does get more interesting the more you think about it, doesn't it?