[E-Lang] Re: Java 2 "Security"

Jonathan S. Shapiro shap@cs.jhu.edu
Wed, 03 Jan 2001 10:09:43 -0500

Marc Stiegler wrote:
> True enough. Preventing data leak seems far more difficult than preventing
> capability leak, every time I look at it. Have any of the people who've done
> capabilities for decades noticed this?
> --marcs

Absolutely. More broadly, everyone who has looked at or worked on covert
channels has run into this as well, and there is surprising overlap
between those people and capability people in the OS world. The general
consensus is that you can significantly reduce the channel bandwidth,
but in the end you can't stop people from transmitting data at low rates
unless you change the design of the system in ways that make it
pragmatically useless.
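To make the point concrete, here is a minimal sketch (in Python, with
invented names and parameters -- this is an illustration, not anyone's
actual attack) of a timing-based covert channel. The "sender" leaks one
bit per time slot by taking either a long or a short delay on some
shared resource; the "receiver" recovers the bits just by measuring
elapsed time. Rate-limiting or adding timing noise shrinks the
bandwidth, but as long as any measurable timing variation survives, some
bits get through:

```python
import time

SLOT = 0.02  # seconds per transmitted bit (hypothetical parameter)

def send_bits(bits):
    """Encode each bit as a long (1) or short (0) observable delay.

    In a real channel the delay would come from contending for a shared
    resource (cache, disk, CPU); here we model it directly with sleep().
    """
    durations = []
    for b in bits:
        start = time.monotonic()
        time.sleep(SLOT * (2 if b else 1))
        durations.append(time.monotonic() - start)
    return durations

def receive_bits(durations):
    """Decode: any delay longer than 1.5 slots is read as a 1."""
    return [1 if d > 1.5 * SLOT else 0 for d in durations]

message = [1, 0, 1, 1, 0]
received = receive_bits(send_bits(message))
```

Throttling the scheduler or quantizing timestamps only changes SLOT and
the error rate; it does not remove the channel, which is exactly the
"reduce the bandwidth, can't close it" situation described above.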

I think that at this point there remain only two reasons to continue
work in the covert channel area:

1. There remains data whose utility derives from the size and
comprehensiveness of the data set. Channel bandwidth limits make it
significantly harder to steal such data.
2. The kind of understanding of performance and behavior contracts and
resource multiplexing that you obtain by doing covert channel analysis
often results in a system that is both cleaner and faster. The analysis
therefore has value above and beyond its effect on channel bandwidth.

But if you really want a challenge problem, consider integrity: how can
you have confidence that the oh-so-secure data is actually valid data?
This is a much harder problem than secrecy, and it is not really
amenable to formal analysis.

I want to add that in my mind covert channel analysis falls under the
heading of "assurance" rather than "security". While we have good
systematic methods for identifying possible sources of covert channels,
we do not know how to automate these methods yet, and we do not know how
to verify (in some formal sense) that covert channel requirements are
met in some particular implementation. The difference between security
and assurance in my mind is this: security is about determining what
policies are useful and feasible to enforce (i.e. setting realizable
requirements), designing/architecting mechanisms that enforce them, and
then verifying that these mechanisms work (hopefully formally).
Assurance is about evaluating an implementation in a systematic way in
order to achieve some degree of confidence about the realization. The
Common Criteria assurance process, for example, does not guarantee that
a system is secure. It guarantees that a significant amount of energy
has gone into checking the *process* by which the system was generated.
As the development process is what yields (or fails to yield) security,
this is a very valuable thing to put energy into examining.

Sorry -- a bit off topic, but I hope informative.