A stab at the sealer in E

Mark S. Miller markm@caplet.com
Sat, 06 Nov 1999 15:37:26 -0800


At 11:30 AM 11/6/99 , hal@finney.org wrote:
> > There is one anticipated alternate implementation of sealer/unsealer 
> that we
> > know we need to support well: actual public key cryptography.
>
>Is it an issue that the size of the sealed object may leak with this or
>other implementations?

No, nothing like that.  Rather, I'm exploring which of Ping's or MarcS's 
semantics, if provided primitively by E, would be the better semantics for 
other user-defined sealer/unsealer implementations to imitate.  By 
"user-defined", I mean "defined by unprivileged E code".  Because MarcS's 
and Ping's code have interestingly different semantics, they also form two 
different executable specifications for the behavior that other 
sealer/unsealer implementations in E should satisfy.

Let us call implementations like Ping's or MarcS's "pointer-based public 
key" as opposed to "cryptographic public key".  Why do we also need a 
cryptographic public key implementation of sealer/unsealer?  To provide the 
power of cryptographic public key to the E programmer in the inter-vat 
case.  What is this power, over that of using pointer-based public 
key?  *Only* the removal of the mutually trusted third party.
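
To make "pointer-based public key" concrete, here is a rough Python sketch 
of the idea.  (This is not E code; the names are mine, and Python cannot 
really enforce the encapsulation that E's scoping provides.  It is closer 
in flavor to Ping's semantics, where the unsealer never sends the alleged 
envelope a message, than to MarcS's.)

     import weakref

     def make_brand_pair():
         # Only this closure can reach the table mapping envelopes to their
         # payloads, so holding seal() is like holding a public key and
         # holding unseal() is like holding the matching private key.
         table = weakref.WeakKeyDictionary()

         class Envelope(object):
             pass

         def seal(payload):
             env = Envelope()
             table[env] = payload
             return env

         def unseal(env):
             return table[env]   # KeyError unless env came from this seal()

         return seal, unseal

The "mutually trusted third party" here is simply the language runtime that 
keeps the table private to the pair.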

Pointer-based public key provides the logical equivalent of public key for 
mutually suspicious objects residing in the same vat.  One might object 
that it does so only by relying on a mutually trusted third party: the 
shared TCB on which they are running, which includes the implementation of 
pointer-based public key.  However, an object is always fully vulnerable to 
the TCB on which it is running, so it does the object no good to worry about 
the misbehavior of its own TCB.  Therefore, two co-located objects sharing a 
TCB can use that TCB as a mutually trusted third party without any loss of 
security.  Indeed, they cannot avoid doing so.

What happens if our pointer-based public key system is used between 
vats?  Each of the involved objects -- BrandMaker, sealer, unsealer, 
envelope -- is PassByProxy, so they are treated as the Purses are in our 
money example.  Each vat has its own unique primitive BrandMaker, and all 
sealers, unsealers, and envelopes that descend from a given BrandMaker are 
co-located with that BrandMaker (and therefore with each other).  Let's 
say Carol on VatC uses VatS's primitive BrandMaker to create a 
sealer/unsealer pair, and that she sends the sealer to Alice on VatA, and 
sends the unsealer to Bob on VatB. Let's say Alice on VatA uses a sealer 
(necessarily hosted on VatS) to seal the string "The bird flies at 
midnight", transmits a reference to the resulting envelope (necessarily 
hosted on VatS) to Bob on VatB, who then uses the corresponding unsealer 
(necessarily hosted on VatS) to unseal the message, and reads it.  Once 
this has all happened, what security statements can we make?
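
(In the toy Python sketch from earlier, ignoring the vat boundaries, this 
whole exchange is just the following; the names are again mine.)

     carols_seal, carols_unseal = make_brand_pair()         # Carol, via VatS
     envelope = carols_seal("The bird flies at midnight")   # Alice's step
     print(carols_unseal(envelope))                         # Bob's step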

Well, in one way, this is like public key encryption:  Alice knows that 
only Carol, or an agent authorized by Carol (such as Bob) can unseal the 
envelope and interact with its contents.  In another way, this is like 
public key signatures:  Bob knows that only Carol, or an agent authorized 
by Carol (such as Alice) could have decided what to place in the 
envelope.  We even have non-repudiation.  Bob can hold onto the 
envelope.  If the object that comes out of the envelope is auditably 
immutable, then he can demonstrate to a third party, at the price of 
sharing access to this unsealer, that this envelope produces these 
contents.  (There's actually an additional but solvable problem here.  Ask 
me about it if you're interested.)

As far as I can tell, there is only one difference, from a security point 
of view, between the above distributed pointer-based public key scenario 
and the corresponding cryptographic public key scenario.  In the above 
scenario, all statements actually must be qualified with "given trust in 
the initial BrandMaker".  Since the initial BrandMaker is a PassByProxy 
object hosted on VatS, our trust in the BrandMaker cannot exceed, in degree 
or kind, our trust in VatS.  If the initial BrandMaker is instead a 
PassByCopy CryptoBrandMaker (or LazyCryptoBrandMaker), then everyone who 
would have had a reference to it instead has a copy, and likewise for the 
sealers, unsealers, and envelopes.  Now we can make all the same 
statements, but qualified instead with "given trust in the code of these 
local open-source objects", which is now the normal crypto situation.

Here's where the polymorphism comes in: If the pointer-based and 
crypto-based implementations are behaviorally equivalent in the ways that 
matter, then code written to work with one of these implementations should 
continue to work transparently when the other is instead provided.

Whew.

Now, why does this affect which pointer-based implementation we choose as a 
base, Ping's or MarcS's?  Because in the crypto-based implementation, the 
unsealer will ask the envelope for the cyphertext.  If there is a benign 
mitm, it will pass this request through and pass the cyphertext back, and 
all will be happy.  Therefore, the crypto implementation has the same 
tolerance of a benign mitm as MarcS's.  So if we adopt MarcS's as a 
base, it is less likely that code written to use MarcS's sealers/unsealers, 
and debugged under that circumstance, will break when instead handed the 
crypto-based implementation.
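
To make the mitm point concrete, here is a rough Python sketch of a 
message-based pair in the spirit of MarcS's.  (Again not E and not MarcS's 
actual code; the names and the shared slot are mine, and Python cannot 
truly hide the payload.)  The unsealer only ever sends offerContent to 
whatever alleged envelope it is handed, so a transparent forwarder still 
works:

     def make_message_based_pair():
         slot = []   # stands in for the shared "private" variables

         class Envelope(object):
             def __init__(self, payload):
                 self._payload = payload
             def offer_content(self):
                 slot.append(self._payload)

         def seal(payload):
             return Envelope(payload)

         def unseal(env):
             del slot[:]
             env.offer_content()         # works through a forwarder too
             if not slot:
                 raise ValueError("not an envelope of this brand")
             return slot.pop()

         return seal, unseal

     class Forwarder(object):
         # A benign man-in-the-middle: it merely relays the one request.
         def __init__(self, target):
             self._target = target
         def offer_content(self):
             self._target.offer_content()

     seal, unseal = make_message_based_pair()
     box = seal("The bird flies at midnight")
     print(unseal(Forwarder(box)))       # the benign mitm is tolerated

An identity-based pair like the earlier sketch would instead fail on 
Forwarder(box), since the forwarder is not the envelope it wraps.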


>I am confused here: at one point you say that MarcS's code does leak
>information, then later you say that it does not actually leak information.
>Following up on that later thought, you write:
> > ...
>But what about the information leakage with regard to the timing of
>unseal attempts?  Is that so unimportant that you can ignore it by decree?

My fault.  I confused the issue by speaking simultaneously of two different 
leakage issues without distinguishing them.  Not only is this issue not 
unimportant, it is what much of the reasoning in the previous email is about.

MarcS's code itself indeed does not leak anything, in that none of MarcS's 
objects has a back-channel over which it can communicate with any object 
not provided to it.  MarcS's shared variables, privateCurrentContent and 
privateCurrentLoaded, would seem to be counterexamples to this statement, 
but their operational effect outside of MarcS's code is as if there were no 
back-channel.  In the absence of auditors, an object handed one of MarcS's 
objects may not be able to determine leak-freeness from inside the system, 
but let's ignore that for just a moment.  For us, reasoning from outside 
the system, we can read MarcS's code and do an informal proof to ourselves 
that it is leak-free.

Does the protocol implemented by MarcS's code make a client vulnerable to 
leakage?  Yes, as you point out.  How can I say that the information 
leakage is the client's, and not a leakage by MarcS's code?  After all, 
isn't the dangerous message from the unsealer to the possible mitm envelope 
being sent by MarcS's unsealer code?  Yes it is, but this isn't leakage by 
the unsealer.  I realize in writing this that I'm using a very specific 
meaning of "leakage" derived from KeyKOS's confinement concepts: Does an 
object have any communications channels that "I" (its client) haven't 
provided it?  If it does, then it can talk to them in ways that I can't 
see.  If it doesn't, then it can only talk to objects I provide it.  So the 
unsealer itself doesn't "leak", but the alleged envelope Carol got from Bob 
might leak.  If this is a bona-fide MarcS envelope, then it won't 
leak.  Or, it might be a perfectly safe mitm that doesn't leak, which 
should also be fine with Carol.  What Carol may want to know is: will this 
envelope, received from an untrusted source (Bob), leak when my unsealer 
interacts with it?
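
Continuing the message-based toy sketch from above (again with my own 
hypothetical names), the thing Carol is worried about looks like this:

     class LeakyForwarder(object):
         # Relays the request like a benign mitm, but also tells whoever
         # built it that an unseal just happened.  The unsealer leaks
         # nothing; the leak comes from the alleged envelope Carol accepted.
         def __init__(self, target, log):
             self._target = target
             self._log = log
         def offer_content(self):
             self._log.append("someone just unsealed this envelope")
             self._target.offer_content()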

Our solution to this problem is auditors.  If Carol wants to protect 
herself, she would only accept envelopes from Bob that pass whatever 
auditor provides her the assurance she desires here.  In this case, the 
(unimplemented, and not yet fully-designed) auditor would be 
"confined".  However, if this auditor rejects a bona-fide MarcS envelope, 
even though *we* know it does not leak, because it fails the audit, then 
Carol is in trouble.  By the auditing rules of "confined", if X is not 
confined, then a mitm wrapping X cannot be confined, so no envelope that 
could unseal would pass the audit Carol requires.

Why would MarcS's envelope not pass the "confined" audit while Ping's 
envelope would?  The code for Ping's envelope would be

     define envelope :: confined { }

The "confined" auditor, examining this parse tree, sees nothing that 
indicates a danger of leakage.  The envelope does nothing, so how can it 
leak?  The code for MarcS's would be

     define envelope :: confined {
         to offerContent {
             privateCurrentContent := message
             privateCurrentLoaded := true
         }
     }

By assigning to these "private" variables, which are in fact shared with 
another object that reads them (the unsealer), it sure looks to the auditor 
as if the envelope is communicating information over a private channel that 
it encapsulates, rather than communicating only to objects provided by its 
clients.  Therefore, the "confined" auditor is correct to reject the 
envelope.  However, this rejection creates the above problem for Carol.
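
As a very rough Python analogue of this kind of check (the real auditor 
would examine an E parse tree, and these names are mine), an auditor can 
reject any code that assigns to a variable shared with an enclosing scope:

     import ast, inspect, textwrap

     def looks_confined(fn):
         # Toy stand-in for a "confined" auditor: walk the code and reject
         # anything that assigns to a variable shared with an outer scope.
         tree = ast.parse(textwrap.dedent(inspect.getsource(fn)))
         return not any(isinstance(node, (ast.Nonlocal, ast.Global))
                        for node in ast.walk(tree))

     def ping_style_envelope():
         pass                            # does nothing, so nothing to reject

     def marcs_style_factory():
         content = None
         def offer_content(payload):
             nonlocal content            # write to a shared "private" variable
             content = payload
         return offer_content

     print(looks_confined(ping_style_envelope))   # True
     print(looks_confined(marcs_style_factory))   # False, rejected as above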

The informal proof we go through in our heads that MarcS's code is 
effectively confined is pretty subtle, and well beyond what we know how to 
build into any auditor.  In fact, I'm queasy about whether it is even 
correct.  However, as TCB designers, if we delude ourselves into being 
confident enough, we can, within the rules, represent that confidence by 
decreeing MarcS's code to be primitive, and decreeing that it should be 
considered confined.  Integers, for example, are also primitives decreed to 
be confined, not by an auditor's proof, but by TCB-designer confidence.


>One other question with regard to this "auditor" concept:
>
> > Since envelopes have a
> > side-effect free contract, this would normally not be a problem in a system
> > that supports confinement.  Once E has auditors
> > http://eros.cis.upenn.edu/~majordomo/e-lang/0986.html , including the
> > "confined" auditor, then, if Carol wants to unseal privately, she would:
> >
> >      to foo(...., box : confined, ...) {
> >          define contents := unsealer unseal(box)
> >          ...
> >      }
>
>I am confused about whether E securely provides the object definition to
>all who hold a reference to the object.  It seems in some of the examples
>(like the bank example) that clients hold capabilities to purses whose
>code is on a remote and potentially untrusted server.  So I don't see
>how an auditor could, given a reference to the "box", know in a secure
>way whether that box's implementation had a given property, if the box
>is a remote object.
>
>Does this auditor concept only apply to local objects?  Presumably those
>are ones where we can fully analyze their behavior.

The first approximation answer is "yes, it only applies to local 
objects".  If the argument corresponding to the above "box" parameter is a 
remote reference, then this remote reference will fail any such local check.

However, auditors are themselves PassByProxy objects.  If Carol has a 
reference to the "confined" auditor of the vat hosting the box, then she 
can ask it to audit the box:

       to foo(...., box, ...) {
           define box := remoteConfined <- coerce(box)
           ...
       }

(The "<-" reads "eventually", and gets into concurrency and distribution 
issues that are mostly orthogonal from security.  For now, I'll take the 
same shortcut as in the paper and continue to ignore security-irrelevant 
concurrency/distribution issues.  Erase the "<-" from the above code in 
your mind.)

However, though Carol necessarily trusts the confined auditor of her own 
TCB, she doesn't necessarily trust remoteConfined.  Her trust in 
remoteConfined can be no greater than her degree/kind of trust in 
remoteConfined's vat.  Assuming that box and remoteConfined are indeed 
co-located, this makes sense, as any message Carol sends to box is also a 
message she is sending to box's vat.  Vats cannot be confined 
http://www.erights.org/elib/capability/dist-confine.html , so her trust in 
box's confinement is limited by her trust that box's hosting vat wishes to 
cooperate in that confinement.

Carol's suspicion of that vat is adequately represented as her suspicion of 
remoteConfined, just as our earlier distributed pointer-based public key 
example adequately represented suspicion of VatS as suspicion of the 
BrandMaker hosted by VatS.  These are both excellent concrete examples of 
the statement in our paper "This is the main economy of the distributed 
capability model: we can, without loss of generality, reason as if we are 
only suspicious of objects." (at 
http://www.erights.org/elib/capability/ode/ode-protocol.html#subj-aggregate )

There is a further way in which auditors interact with distributed 
security, despite being only local.  PassByCopy implies open-source.  When 
an audited PassByCopy object is passed between vats, its state is copied, 
and its source parse tree is copied as well if this is the first time that 
parse tree has come in on that connection.  On that first arrival, the 
corresponding auditors on the decoding side are asked to audit the incoming 
parse tree before the code and the state are put together into an incoming 
object.  If the audit fails, the object decode fails.  As a result, all 
audited PassByCopy objects are always audited by the local auditors, so 
their audited properties can be trusted to the same degree as the local TCB.
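
As a small Python sketch of the decode-side rule (hypothetical names; this 
is just the shape of the rule, not the actual E implementation's decoder):

     def decode_pass_by_copy(parse_tree, state, claimed_audits,
                             local_auditors, build):
         # Before the copied state and code are put back together, every
         # audit the object claims is re-run by the *local* auditors, so
         # the claimed properties are only as trusted as the local TCB.
         for name in claimed_audits:
             if not local_auditors[name](parse_tree):
                 raise ValueError("local %r audit failed; decode refused" % name)
         return build(parse_tree, state)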

Hope this helps.


         Cheers,
         --MarkM