A stab at the sealer in E

hal@finney.org
Sat, 6 Nov 1999 11:30:00 -0800

MarkM writes a very interesting and helpful analysis, but I am left with
some questions:

> There is one anticipated alternate implementation of sealer/unsealer that we
> know we need to support well: actual public key cryptography.

Is it an issue that the size of the sealed object may leak with this or
other implementations?
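For readers less familiar with the pattern under discussion, here is a minimal Python transliteration of the well-known closure-based sealer/unsealer idiom (names are illustrative; this is not MarkM's or MarcS's actual E code). Note that a box in this style is opaque and exposes nothing about its contents, not even their size, whereas a public-key implementation's ciphertext length generally grows with the plaintext, which is the leak asked about above.

```python
def make_sealer_unsealer():
    slot = [None]                     # private shared slot, visible only to this pair

    def seal(contents):
        # the box is just a closure that can deposit its contents into the slot
        def box():
            slot[0] = contents
        return box

    def unseal(box):
        slot[0] = None
        box()                         # the box writes into the shared slot
        contents, slot[0] = slot[0], None
        return contents

    return seal, unseal

seal, unseal = make_sealer_unsealer()
box = seal("secret")
assert unseal(box) == "secret"
```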

> You are also correct that MarcS's code leaks information about when an
> unseal attempt (or an imitation of one) happens.
> ...
> If box is not an object that the "confined" auditor considers to have
> whatever property it audits for, then it throws an exception rather than
> coercing.  Curiously, even though MarcS's code does not actually leak
> information, it violates the design rules the confinement auditor checks for
> -- it assigns to an outer variable.  The technique requires this violation,
> so it cannot be repaired without breaking the trick.

I am confused here: at one point you say that MarcS's code does leak
information, then later you say that it does not actually leak information.
Following up on the latter thought, you write:

> I think MarcS's is the better semantics, except for the vulnerabilities to
> "benign" mitm.  If we start with MarcS's, and if MarcS's envelopes do not
> pass the "confined" or "stable" auditors, then Carol cannot use these
> auditors to filter out mitm that leak information or aren't stable.  If
> MarcS's is implemented as unprivileged code written in E, then the trick
> that it uses prevents our auditors from deeming it confined and stable, even
> though we know that it is.  But if we adopt MarcS's code into the TCB as a
> primitive (whether or not it remains written in E), and decree it to be
> confined and stable by special decree, we seem to have the best of all
> worlds.  Our primitive is polymorphic with its virtualizations, and the only
> mitm that can get by our auditors are truly benign.

But what about the information leakage with regard to the timing of
unseal attempts?  Is that so unimportant that you can ignore it by decree?
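To make that concern concrete, here is a hedged Python sketch (again a transliteration, with hypothetical names) of how the outer-variable style of unsealing lets a box observe each unseal attempt as it happens: every attempt calls back into the box's closure, so a box, or a man-in-the-middle wrapping one, can record when it is asked to unseal.

```python
import time

attempt_log = []                     # what an observing box could accumulate

def make_sealer_unsealer():
    slot = [None]

    def seal(contents):
        def box():
            attempt_log.append(time.time())   # the leak: when an unseal occurs
            slot[0] = contents
        return box

    def unseal(box):
        slot[0] = None
        box()
        contents, slot[0] = slot[0], None
        return contents

    return seal, unseal

seal, unseal = make_sealer_unsealer()
box = seal(42)
assert unseal(box) == 42
assert len(attempt_log) == 1         # the box's author learned an unseal happened
```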

One other question with regard to this "auditor" concept:

> Since envelopes have a
> side-effect free contract, this would normally not be a problem in a system
> that supports confinement.  Once E has auditors
> http://eros.cis.upenn.edu/~majordomo/e-lang/0986.html , including the
> "confined" auditor, then, if Carol wants to unseal privately, she would:
>      to foo(...., box : confined, ...) {
>          define contents := unsealer unseal(box)
>          ...
>      }

I am confused about whether E securely provides the object definition to
all who hold a reference to the object.  It seems in some of the examples
(like the bank example) that clients hold capabilities to purses whose
code is on a remote and potentially untrusted server.  So I don't see
how an auditor could, given a reference to the "box", know in a secure
way whether that box's implementation had a given property, if the box
is a remote object.

Does this auditor concept only apply to local objects?  Presumably those
are the ones whose behavior we can fully analyze.
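One way to picture the question: an auditor of the kind MarkM describes presumably works by examining an object's code, which is available only for local objects. As a toy illustration (Python, hypothetical, not the proposed E mechanism), here is a "confined"-style check that inspects a function's bytecode and rejects anything that assigns to an outer or global variable, exactly the design rule MarcS's trick violates. A remote reference exposes no code at all, so no check like this could run against it.

```python
import dis

def audits_confined(fn):
    """Reject functions that write to enclosing or global scope."""
    forbidden = {"STORE_GLOBAL", "STORE_DEREF"}
    return not any(ins.opname in forbidden for ins in dis.get_instructions(fn))

def pure(x):
    return x + 1

leak = None
def leaky(x):
    global leak
    leak = x          # assigns to an outer variable, like MarcS's trick

assert audits_confined(pure)
assert not audits_confined(leaky)
```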