[cap-talk] Security and Full Abstraction (was: Cap OS question)
naasking at higherlogics.com
Fri Sep 4 21:41:50 PDT 2009
Good links! Certainly the suitability of bytecode verification depends
on the properties being verified. If one's goal is merely to ensure that
static state is accessed immutably, and that static authority-bearing
methods are called only via a tamed API, I think this can be achieved by
bytecode verification.
Joe-E is slightly more ambitious, as its scope extends beyond merely taming.
There are certainly problems here, as a language translation to IL may
make use of static state and methods in its runtime, but I don't see any
way around simply pulling the language runtime into the TCB in this
case, or providing an IL-rewriter to refactor any egregious security
violations.
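To make the taming idea above concrete, here is a minimal Java sketch (the class names are hypothetical, invented for illustration): ambient static authority is hidden behind a capability object, and a bytecode verifier would be expected to reject untrusted code that calls the untamed static directly.

```java
// Hypothetical illustration of taming. The untamed class exposes
// ambient authority through a static method; the tamed API requires
// holding an explicit capability object instead.
final class UntamedClock {
    // Ambient authority: any code that links against this class can
    // read the clock, no permission required.
    static long now() { return System.currentTimeMillis(); }
}

// A capability object: possessing a reference IS the permission.
// Untrusted code only gets one if its loader hands it one.
final class ClockCap {
    long now() { return UntamedClock.now(); }
}

public class TamingSketch {
    public static void main(String[] args) {
        // In a tamed environment, a verifier would reject direct
        // UntamedClock.now() calls in untrusted bytecode; only code
        // granted a ClockCap can observe the clock.
        ClockCap cap = new ClockCap();
        System.out.println(cap.now() > 0);
    }
}
```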
Mark Miller wrote:
> On Fri, Sep 4, 2009 at 4:41 PM, Sandro Magi<naasking at higherlogics.com> wrote:
>> Unfortunately, CAS incurs quite a bit of overhead due to all the runtime
>> checking, so it would be far less efficient than bytecode verification.
>> I'll have to read up on the JVM limitations Joe-E ran into to see if
>> similar problems would occur on the CLR.
>> I can't in principle see why a Mono.Cecil verifier wouldn't work; the
>> Mono project has built quite a few bytecode verification tools, and
>> Microsoft's forthcoming CodeContracts library performs quite
>> sophisticated static analysis of bytecode-level pre/post conditions
>> and invariants.
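The kind of pre/post condition analysis mentioned above can be sketched in Java (CodeContracts itself is .NET-specific; this is just an illustration of the contracts a bytecode-level analyzer would try to discharge statically, written as explicit runtime checks):

```java
public class ContractSketch {
    // Precondition: n >= 0.  Postcondition: result * result <= n.
    // A static contract checker would try to prove both from the
    // bytecode; here they are checked at runtime instead.
    static int intSqrt(int n) {
        if (n < 0) throw new IllegalArgumentException("precondition: n >= 0");
        int r = (int) Math.sqrt(n);
        if (r * r > n) throw new IllegalStateException("postcondition violated");
        return r;
    }

    public static void main(String[] args) {
        System.out.println(intSqrt(10));  // prints 3
    }
}
```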
> A good way to think about the possible problems of a bytecode vs
> source verification approach is Abadi's "Full Abstraction"
> <http://www.hpl.hp.com/techreports/Compaq-DEC/SRC-RR-154.html>. In
> looking for <http://research.microsoft.com/en-us/um/people/akenn/sec/appsem-tcs.pdf>,
> which is a nice explanation, I also found
> a corresponding nice set of slides I didn't know about.
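A concrete instance of the full-abstraction failure Abadi describes: the Java *source* language forbids reading another class's private field, but the platform's reflection API, equally available to bytecode-level clients, permits it, so bytecode can make observations the source language rules out. A small sketch (class names are invented for illustration):

```java
import java.lang.reflect.Field;

// The source language promises that `secret` is unobservable outside
// Sealed. Reflection breaks that promise, so the translation from
// source to bytecode is not fully abstract unless the verifier or a
// security policy also forbids this.
class Sealed {
    private final int secret = 42;
}

public class AbstractionLeak {
    public static void main(String[] args) throws Exception {
        Field f = Sealed.class.getDeclaredField("secret");
        f.setAccessible(true);  // bypasses source-level privacy
        System.out.println(f.getInt(new Sealed()));  // prints 42
    }
}
```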
> Having been reminded of Abadi's notion of "Full Abstraction" and its
> relevance to security, it occurs to me that the concept stretches to
> cover a point made in
> Conventional UI design may instead reason "We don't want to encourage
> our users to engage in action #3, as that is dishonest and
> uncooperative. If users do that, the general trust other users place
> in them and in the system will be lowered." The full-abstraction-like
> alternative I take on that admonition page is "Because Bob can neither
> be prevented nor deterred from engaging [in dishonest] action #3, it is
> better to provide option #3 in the user interface. To do otherwise is
> only to provide Alice with a false sense of security." We need to
> consider the effects of UI design choices not only by their influence on
> Bob's actions, but also by their influence on Alice's model of Bob's
> possible actions. This also fits with Ping's points about the security
> significance of the models users form about what actions are possible
> for others <http://people.ischool.berkeley.edu/~ping/sid/>.
> The analogy seems tight: The distinction between "what's
> cryptographically enforceable over a distributed system" vs "what
> actions can one take through a UI" is like "what actions are possible
> by verified bytecode" vs "what actions may be expressed in the higher
> level source language".