[cap-talk] Security by safe language processing (was: Re: Memory access based OS security)
capability at webstart.com
Sat Sep 5 00:04:25 PDT 2009
At 05:43 PM 9/3/2009, Bill Frantz wrote:
>bklooste at gmail.com (Ben Kloosterman) on Monday, August 3, 2009 wrote:
> >Hardware-level security through address management was introduced
> as a way to work around failures of
> >the application languages (:shudder:, remembering early
> implementations of Windows). But if you force
> >applications to compile to a particular language (or bytecode),
> you can enforce security at the
> >software level and achieve security without the sacrifices to
> performance that come from partitioning
> >the address space.
>The idea of depending on the compiler or byte code verifier in a "language
>based system" (LBS) is quite attractive since it might result in a higher
>performance system.
Higher performance in that it can avoid hardware context switches that
are typically rather expensive? I'm sure we all know of many ways that this
issue has been addressed (with varying degrees of success). If you mean more
than this by "result in higher performance", please let us know.
>However it does impact the assurance aspect of secure
>systems.
I agree - below.
>With systems like KeyKOS, EROS, CapROS, Coyotos, VM/370, etc. the security
>assertion is that no sequence of machine language instructions is able to
>subvert the security of the system.
Not to mention Unix, Windows, etc. This is far and away the dominant model,
I believe for good reason (below).
>The only compiler that needs
>verification is the one used to compile the system, and it only needs to be
>verified for the features used by the source code of the system. Since only
>a limited amount of programming needs to be verified, auditing the output
>of the compiler is a feasible, if tedious, possibility.
>In contrast, LBS system compilers and/or byte code verifiers must be
>verified against all possible inputs, which seems to me to be a harder
>problem.
Not only that, but the LBS system compilers and/or byte code verifiers
must remain correct against all possible inputs over a long span of time -
perhaps for all time.
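To make that totality requirement concrete, here is a toy sketch (in Python,
with invented opcode names - no real LBS verifier looks like this) of the
kind of check a byte code verifier performs. The point is that verify() must
correctly classify *every* possible instruction sequence, not just the ones a
well-behaved compiler emits:

```python
# Toy stack-machine verifier: each opcode's net effect on operand-stack
# depth is known, and code is rejected if the stack could ever underflow
# or overflow. All names here are invented for illustration.
SAFE_OPS = {"PUSH": +1, "POP": -1, "ADD": -1, "DUP": +1}
MAX_STACK = 16

def verify(code):
    """Return True only if no prefix of `code` misuses the stack."""
    depth = 0
    for op in code:
        if op not in SAFE_OPS:
            return False            # unknown opcode: refuse to run it
        depth += SAFE_OPS[op]
        if depth < 0 or depth > MAX_STACK:
            return False            # stack underflow/overflow: refuse
    return True

assert verify(["PUSH", "PUSH", "ADD"])   # well-formed code passes
assert not verify(["POP"])               # would underflow the stack
assert not verify(["HALT_AND_CATCH_FIRE"])  # unknown opcode rejected
```

If there is any instruction sequence - out of the unbounded set of possible
inputs - that verify() wrongly accepts, the whole system's security rests on
attackers never finding it.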
I know of very few systems that depend on language processing for
security. One example that I'm aware of was the Burroughs B* systems
(B5500, B5700, B6500, B6700). These systems depended on their
compilers for security. One problem with this approach is that if
a compiler bug ever generates unsafe code and that code can be
saved, the code remains unsafe for as long as it can be saved. I
"owned" a B6700 system while I was in college because it briefly
had a buggy compiler that I used to generate unsafe code that I
was able to save.
If you use this approach of protection by "safe" language processors,
how do you save and later execute such processed (e.g. binary/byte) code?
Is code that's considered safe at one time still considered safe
at a later time? If code is considered safe at one time but in
fact isn't, how does that mistake get corrected?
I don't understand how this approach can be made to work.
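For what it's worth, one mechanism sometimes proposed for the last question
is to record which verifier version approved a piece of saved code, so that
approvals can be revoked when a verifier bug is later discovered. A toy
sketch in Python (all names invented; this is not a description of any real
system):

```python
# Sketch: saved code is "sealed" with the version of the verifier that
# approved it. At load time we check both that the seal is intact and
# that the approving verifier has not since been found buggy.
import hashlib

REVOKED_VERIFIERS = {"v1"}   # versions later found to pass unsafe code

def _digest(code, verifier_version):
    return hashlib.sha256(
        repr(code).encode() + verifier_version.encode()
    ).hexdigest()

def seal(code, verifier_version):
    """Record which verifier approved this code, tamper-evidently."""
    return {"code": code,
            "verifier": verifier_version,
            "digest": _digest(code, verifier_version)}

def load(sealed):
    """Refuse to run tampered code or code approved by a revoked verifier."""
    if sealed["digest"] != _digest(sealed["code"], sealed["verifier"]):
        raise ValueError("seal broken: code was tampered with")
    if sealed["verifier"] in REVOKED_VERIFIERS:
        raise ValueError("approved by a revoked verifier; must re-verify")
    return sealed["code"]
```

Even this sketch only pushes the problem around: the revocation list must
reach every machine holding old saved code, which is exactly the
"unsafe for as long as it can be saved" problem again.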