A stab at the sealer in E

Mark S. Miller markm@caplet.com
Sat, 06 Nov 1999 04:29:42 -0800


At 03:24 PM 11/3/99, hal@finney.org wrote:
>I don't understand E very well, but I wonder if there is something
>like the "man in the middle" attack possible in some implementations.
>The idea is that Alice seals data, gives the box to Bob, who gives it to
>Carol, and Carol has the unsealer.  However, Bob can wrap the box in a
>fake box of his own before he gives it to Carol, with his fake box just
>passing through any messages to the inner box.
>
>In this case it is OK, because nothing needs to be returned by the box.
>
>The one piece of information Bob could learn is _when_ Carol chose to
>unseal the box.  This is a small leak but it is perhaps undesirable.
>
>I think Ka-Ping's table version would be immune to this because it can
>tell if it is a valid box without saying anything to it.

Fascinating observation.  It shows that there are actually several subtle
but important semantic differences between MarcS's version & Ping's version,
even though they both implement the abstract idea of
sealer/unsealer/envelope faithfully.  Ignoring garbage collection, Ping's
has the same semantics as the primitive implementation, so we will only
contrast MarcS's and Ping's.
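To make the contrast concrete, here is a hedged Python sketch of my understanding of the two styles: an identity-table version in the spirit of Ping's, and a shared-slot version in the spirit of MarcS's (the "assigns to an outer variable" trick discussed later). The names and details are illustrative, not the actual E code.

```python
def make_ping_style_brand():
    """Identity-table style: the unsealer recognizes a box by its
    identity, without ever talking to it."""
    table = {}  # id(box) -> contents (id() as a stand-in for EQ;
                # fine for a sketch, though Python reuses ids after GC)

    class Box:
        pass

    def seal(contents):
        box = Box()
        table[id(box)] = contents
        return box

    def unseal(box):
        try:
            return table[id(box)]
        except KeyError:
            raise ValueError("not one of ours")

    return seal, unseal


def make_marcs_style_brand():
    """Shared-slot style: the unsealer asks the box to deposit its
    contents into a variable the two closures share."""
    slot = [None]

    def seal(contents):
        def box():
            slot[0] = contents  # deposit on request
        return box

    def unseal(box):
        slot[0] = None
        box()  # any object that plays along is accepted
        contents, slot[0] = slot[0], None
        return contents

    return seal, unseal
```

Note that the shared-slot unsealer never checks *which* object it was handed; anything that deposits the right contents passes, which is exactly the behavioral tolerance explored below.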

Let's say we did put a man in the middle (mitm).  Easy enough to do in E:

     define fakeBoxMaker new(originalBox) {
         define fakeBox {
             to messageWeWishToOverride(...) : any { ...alternateBehavior... }
             to ...
             delegate { originalBox }
         }
     }

The lines beginning with "to" are methods.  If we leave them out then,
except for Miranda Methods (which we probably don't need to cover here), the
resulting fakeBox is a pure transparent forwarder to originalBox.  However,
the purpose of putting a man in the middle is to cause some mischief, and
these methods allow one to express such mischief easily.  The reading is: On
an incoming message, if it matches any of the listed methods, dispatch to
that method.  Otherwise, evaluate the expression in the delegate clause and
forward the message to its value.
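For readers more fluent in Python than E, the same dispatch rule can be sketched with `__getattr__`, which is only consulted for names the class does not itself define and so plays the role of the delegate clause. The names are placeholders from the E example above, not a real API.

```python
class FakeBox:
    """Python analogue of the E fakeBox: listed methods intercept,
    everything else falls through to the wrapped original."""

    def __init__(self, original_box):
        self._original = original_box

    def message_we_wish_to_override(self):
        # ...alternateBehavior... stands in for arbitrary mischief.
        return "mischief"

    def __getattr__(self, name):
        # Plays the role of E's `delegate { originalBox }` clause:
        # forward any message this class does not itself handle.
        return getattr(self._original, name)
```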

You are correct that neither Ping's nor MarcS's code (nor the primitive one)
is vulnerable to mitm, in the normal sense of "vulnerable".
Nevertheless, using mitm, we can explore their semantic differences.


                                               Is It Memorex?


Even a pure mitm wrapping a real box, one that overrode nothing and caused
no mischief, would be recognized as a valid box by MarcS's code, and the
unsealing would proceed.  Not so with Ping's or the primitive, both of which
rely on object identity.  The object identity check tests the references
without consulting the objects they refer to.  A system with EQ -- a
primitive object identity check, which E has and Joule banishes -- lets one
prevent a mitm from presenting a perfect fiction.  Indeed, this ability to
pierce virtualization is Joule's argument against providing an EQ
primitive.  For Ping's code or the primitive, only
the real box is good enough for the unsealer.  For MarcS's code, apparent
behavioral equivalence by messages is good enough.  Let's call the tolerance
that MarcS's code has for a mitm that does no damage "benign mitm".
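The distinction can be sketched in Python, where `is` plays the role of EQ: a pure forwarder answers every message exactly as the real box would, yet an identity check sees through it. The helper names here are mine.

```python
class Box:
    def __init__(self, contents):
        self._contents = contents

class Forwarder:
    """A benign mitm: overrides nothing, forwards every message."""
    def __init__(self, box):
        self._box = box
    def __getattr__(self, name):
        return getattr(self._box, name)

def behavioral_check(candidate, real):
    # MarcS-style tolerance: judge the candidate by its responses.
    return candidate._contents == real._contents

def identity_check(candidate, real):
    # Ping-style / primitive: `is` is Python's EQ, testing the
    # reference without consulting the object.
    return candidate is real
```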


                                                Virtualizability


In http://eros.cis.upenn.edu/~majordomo/e-lang/1051.html I show a trivial
implementation of sealer/unsealer/envelope in terms of more primitive
sealer/unsealer/envelopes.  Often, in layering abstractions on one another, we
want to reproduce the interface contract of the lower-level abstraction in
the higher-level one, as with virtual memory made out of physical memory.
The two resulting implementations are polymorphically substitutable -- for
example, a user mode instruction doesn't know or care, and can't tell,
whether it is accessing physical or virtual memory.  (Mitm is a kind of
virtualization.  An MMU is a kind of mitm.)

My VirtualBrandMaker, as is, will work fine built on top of either Ping's or
MarcS's code.  However, when built on Ping's, it is not quite
polymorphically substitutable with Ping's.  Ping's rejects a benign mitm but
this code, built on Ping's as an attempted virtualization, accepts a benign
mitm in much the same way MarcS's code does.  I believe the
VirtualBrandMaker built on MarcS's code would be polymorphically
substitutable with it.
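One way such layering might look, sketched in Python rather than E: a new brand is built by wrapping contents with a private marker and sealing with the underlying brand. This is my own illustration of the layering idea, not the VirtualBrandMaker from the message cited above.

```python
def make_virtual_brand(base_seal, base_unseal):
    """Build a higher-level sealer/unsealer out of a lower-level pair,
    aiming at the same interface contract."""
    marker = object()  # private to this virtual brand

    def seal(contents):
        # Tag the contents with a marker only this brand holds, then
        # seal with the underlying brand.
        return base_seal((marker, contents))

    def unseal(box):
        wrapped = base_unseal(box)
        if not (isinstance(wrapped, tuple) and len(wrapped) == 2
                and wrapped[0] is marker):
            raise ValueError("sealed by a different brand")
        return wrapped[1]

    return seal, unseal
```

Note that the identity check, if any, happens only in the base brand's unseal; the virtual layer judges what it gets back, which is why a virtualization built on an identity-checking base can end up with the behavioral tolerance described above.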

There is one anticipated alternate implementation of sealer/unsealer that we
know we need to support well: actual public key cryptography.  The
"sealer seal(contents)" operation in the implementation cannot be
polymorphic with the normal one (either Ping's or MarcS's), because it can
only seal off-line serializable objects.  However, for objects that pass
this restriction, the rest of the abstraction can be polymorphic just fine.
In the cryptographic implementation,
* the BrandMaker generates an actual public key pair,
* the sealer is a PassByCopy object containing the encryption key
   (classically, the "public" key) of the pair,
* the envelope is a PassByCopy object containing the ciphertext whose
   plaintext is an off-line serialization of the contents, and
* the unsealer is a PassByCopy object containing the decryption key.  It
   both decrypts and unserializes.

This implementation, to be written in E (with the cryptographic algorithms
provided primitively), and with the envelope actually containing the
goodies, will have the benign mitm tolerance of MarcS's rather than Ping's
code.
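The structural shape of those four bullets can be sketched in Python. CAUTION: a toy XOR "cipher" keyed by one random secret stands in for real public-key cryptography here, so this sealer and unsealer share a single key; only the shape -- PassByCopy-like value objects carrying key material and ciphertext, with the unsealer both decrypting and unserializing -- matches the description.

```python
import json
import secrets
from dataclasses import dataclass

def _xor(data: bytes, key: bytes) -> bytes:
    # Toy keystream; NOT real cryptography.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

@dataclass(frozen=True)
class Sealer:            # would carry the encryption ("public") key
    key: bytes
    def seal(self, contents):
        plaintext = json.dumps(contents).encode()  # off-line serialization
        return Envelope(_xor(plaintext, self.key))

@dataclass(frozen=True)
class Envelope:          # carries only the ciphertext
    ciphertext: bytes

@dataclass(frozen=True)
class Unsealer:          # would carry the decryption key
    key: bytes
    def unseal(self, envelope):
        # Both decrypts and unserializes, as in the last bullet.
        return json.loads(_xor(envelope.ciphertext, self.key))

def make_brand():
    key = secrets.token_bytes(16)  # stand-in for generating a key pair
    return Sealer(key), Unsealer(key)
```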

(The actual implementation I want to build is a bit more complicated.  It
starts off virtualizing a non-cryptographic implementation, but internally
converts to the cryptographic technique and representation the first time
any of the participating objects get encoded across a vat boundary.  Call it
"lazy cryptography".)


                                              Confinement


You are also correct that MarcS's code leaks information about when an
unseal attempt (or an imitation of one) happens.  Since envelopes have a
side-effect-free contract, this would normally not be a problem in a system
that supports confinement.  Once E has auditors
http://eros.cis.upenn.edu/~majordomo/e-lang/0986.html , including the
"confined" auditor, then, if Carol wants to unseal privately, she would:

     to foo(...., box : confined, ...) {
         define contents := unsealer unseal(box)
         ...
     }

To make the declaration syntax less mysterious, the above code has
essentially the same effect as

     to foo(...., box, ...) {
         box := confined coerce(box)
         define contents := unsealer unseal(box)
         ...
     }

If box is not an object that the "confined" auditor considers to have
whatever property it audits for, then it throws an exception rather than
coercing.  Curiously, even though MarcS's code does not actually leak
information, it violates the design rules the confinement auditor checks for
-- it assigns to an outer variable.  The technique requires this violation,
so it cannot be repaired without breaking the trick.
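A Python sketch of the coercion pattern may help: a guard either passes the object through unchanged or throws, never quietly substituting something else. The audit itself is faked with an approval set here -- a placeholder for the real (not yet worked out) auditing protocol.

```python
class Guard:
    """Hedged stand-in for an auditor in its coercing role."""

    def __init__(self, name):
        self.name = name
        self._approved = set()   # ids of audited objects (a stand-in
                                 # for real signature verification)

    def approve(self, obj):
        self._approved.add(id(obj))

    def coerce(self, obj):
        # Either the very same object comes back, or we throw.
        if id(obj) not in self._approved:
            raise TypeError("does not pass the %r audit" % self.name)
        return obj

confined = Guard("confined")

def foo(box, unseal):
    box = confined.coerce(box)   # the effect of `box : confined`
    return unseal(box)
```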

There's also a possible infinite regress problem above.  When an auditor
checks an instance, it does not run the audit over the instance itself.
Rather, the code that the object is an instance of must have been decorated
with an auditor

     define box :: confined { ...

The auditor in this role statically audits the code, and, using a technique
based on sealers/unsealers, effectively signs the code.  (NOTE: The precise
protocol here has yet to be worked out, and it is likely that this
discussion will clarify some of the issues.)  Then, the auditor in the
earlier "box : confined" role essentially does a signature verification.
Since we are using such signing/verification to provide confinement, if we
also need to use confinement to provide user-level sealer/unsealer pairs, we
may not be making progress.


                              How Benign is Benign MITM?


So far, we've been speaking as if the only danger of mitm is information
leakage.  However, consider

     define fakeBoxMaker new(originalBox) : any {
         define fakeBox {
             to messageWeWishToOverride(...) : any { ...alternateBehavior... }
             to ...
             delegate { originalBox }
         }
         define boxChanger(newBox) { originalBox := newBox }
         [fakeBox, boxChanger]
     }

Bob calls fakeBoxMaker with the box he receives from Alice.  He sends the
resulting fakeBox to Carol and keeps the corresponding boxChanger.  Now the
fakeBox that Carol received from Bob may unseal one time and not another, or
unseal to different results at different times, all at Bob's choice, even
though Bob does not have access to the sealer.  If Carol only ever unseals
any given envelope once, this presents no danger.
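The fakeBoxMaker/boxChanger pair can be sketched in Python as a forwarder whose delegation target can be swapped after the fact; the names mirror the E example above.

```python
def make_fake_box(original_box):
    """Returns a forwarder and a changer that lets its maker (Bob)
    swap which real box the forwarder delegates to."""
    target = [original_box]

    class FakeBox:
        def __getattr__(self, name):
            # Forward every message to whatever the current target is.
            return getattr(target[0], name)

    def box_changer(new_box):
        target[0] = new_box

    return FakeBox(), box_changer
```

Bob keeps `box_changer` and hands `FakeBox()` to Carol, so what Carol's "envelope" answers can change between her unseals, even though Bob never touches the sealer.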

But it would be very easy for a programmer to get in the habit of thinking
that envelopes were stable when they weren't.  The "stable" auditor, which
is weaker than "confined" (confined implies stable), would prevent this
problem by rejecting the fakeBox.  It should be impossible to write a
fakeBox that performs such mischief and passes the "stable" audit --
otherwise our definition of "stable" is broken.



                                                    What to do?


I think MarcS's is the better semantics, except for the vulnerabilities to
"benign" mitm.  If we start with MarcS's, and if MarcS's envelopes do not
pass the "confined" or "stable" auditors, then Carol cannot use these
auditors to filter out mitm that leak information or aren't stable.  If
MarcS's is implemented as unprivileged code written in E, then the trick
that it uses prevents our auditors from deeming it confined and stable, even
though we know that it is.  But if we adopt MarcS's code into the TCB as a
primitive (whether or not it remains written in E), and declare it confined
and stable by fiat, we seem to have the best of all
worlds.  Our primitive is polymorphic with its virtualizations, and the only
mitm that can get by our auditors are truly benign.

This is a significant improvement in our semantics, and a significant
clarification of issues.  MarcS, Ping, Hal, thanks!!

(Though the above infinite regress issue with auditing remains somewhat
open.  We can't truly settle the issue until we have an auditing protocol.)



         Cheers,
         --MarkM