[cap-talk] gauntlet - one way IPC considered useless, practical MLS?

David Wagner daw at cs.berkeley.edu
Mon Jan 9 23:41:18 EST 2006

Jed writes:
>It is possible to have some trusted intermediary (the "MLS system" or 
>any other) manage 'control' communication for the one-way 
>communication.  [...]
>One thing these approaches have in common is that they utilize 
>two-way communication to some trusted subsystem to effect one-way 
>communication.  [...]

Yes!  Exactly.  You put it far better than I knew how to.
(In many cases, that trusted subsystem might be the kernel itself.)
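One way to picture that pattern is a bounded buffer managed by the trusted intermediary: the sender's only two-way exchange is with the intermediary, never the receiver. This is a toy sketch of my own (not any particular system's API); note that the residual "buffer full" signal is exactly the kind of back channel this thread is arguing about:

```python
import queue

class OneWayChannel:
    """Trusted intermediary (e.g., the kernel) mediating a one-way link.

    The sender talks two-way with the intermediary -- it learns whether
    its message was accepted -- but no bit chosen directly by the
    receiver reaches the sender. What *does* leak back is buffer
    occupancy: a receiver that drains slowly can modulate "full"/"not
    full", which is the covert channel the intermediary would have to
    understand and filter.
    """

    def __init__(self, capacity=8):
        self._buf = queue.Queue(maxsize=capacity)

    def send(self, msg):
        # Two-way exchange with the intermediary only: returns whether
        # the message was accepted into the buffer.
        try:
            self._buf.put_nowait(msg)
            return True
        except queue.Full:
            return False

    def receive(self):
        # The receiver drains the buffer; it has no send() to call,
        # so it cannot address the sender directly.
        try:
            return self._buf.get_nowait()
        except queue.Empty:
            return None
```

Flow control lives entirely in the intermediary here, which is why "the trusted subsystem must know the semantics of the back channel" -- the acceptance bit is part of those semantics.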

>I consider such constrained one-way communication systems where a 
>subsystem must know the semantics and appropriately filter the back 
>channel as essentially useless.

I don't know whether it is useless or not.  Here is one kind of concrete
practical (albeit simplified) example you can think of: I want to download
and run a free tax preparation application that I found on the net.
It asks me for my salary.  I need to type in my salary if I want it
to calculate my taxes for me, but I don't want the app to be able to
leak information about how much I earn each year to the programmer who
wrote it.  That's a kind of thing that you can imagine might possibly
be useful, if it were achievable.

>Is this argument at this point an academic 
>exercise or are there real and useful systems that are currently 
>depending on such a mechanism?

My contention is that it is primarily academic, as far as I know.
I do not know of any real and useful Bell-LaPadula-style MLS multi-user
system that actually does live up to its security goals (e.g., closes
all covert channels).

Part of the back-history is that in the 1980s the military said that they
wanted such a mechanism, and depended on computer scientists to provide
such a mechanism, and they provided a great deal of funding to study the
problem.  Then again, "C2 by 92" was a total fiasco, and if we judge the
military by what systems they actually bought rather than the research
they funded, maybe they weren't depending on this mechanism after all.
It's all very confusing to me.

P.S. My understanding is that these confidentiality goals are achieved
today through physical isolation and single-level systems (i.e., the
system does not contain data at more than one level), possibly
combined with special-purpose data diodes (e.g., the Starlight hardware
device).  But I have no inside information; this is my attempt to
glean practice from very little hard information, and I might be totally
wrong about this.

I've also heard of use of KVM switches that are guaranteed not to leak
data from one machine to another (so you have two CPUs sitting on your
desk, one for unclassified stuff and one for secret stuff, but one
keyboard and one monitor and one KVM switch).  And you may have heard
of NetTop, which attempts to emulate this in software using virtual
machines (though I don't know what they did, if anything, about covert
channels).

>One other thing I noticed in this thread is that a number of people 
>mentioned MLS systems where some portions of the system must be 
>trusted to violate the * and ss properties.  This makes me wonder how 
>one decides which subjects get such special trust.

As far as I know, they are often exempted from the rules because there is
no other choice (i.e., they are trusted, but not necessarily trustworthy).
Frequently these subsystems get extra scrutiny because it is realized
that they pose extra risk, but I think the process often starts with the
need to exempt them from the rules rather than some out-of-thin-air
determination that it would be safe to exempt them from the rules
(whether needed or not).  That sounds reasonable to me.
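For concreteness, the two properties amount to simple dominance checks over a lattice of levels, with trusted subjects exempted. A toy sketch (my own, assuming a linear ordering of levels rather than a full lattice):

```python
# Hypothetical linear ordering of clearance/classification levels.
LEVELS = {"UNCLASSIFIED": 0, "SECRET": 1, "TOP_SECRET": 2}

def can_read(subject_level, object_level, trusted=False):
    # ss-property ("no read up"): a subject may read only objects at
    # or below its own level -- unless it is a trusted subject
    # exempted from the rule.
    return trusted or LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level, trusted=False):
    # *-property ("no write down"): a subject may write only objects
    # at or above its own level, so high data cannot flow downward --
    # again, unless the subject is exempted as trusted.
    return trusted or LEVELS[subject_level] <= LEVELS[object_level]
```

A downgrader is the classic case: it must write SECRET-derived data to an UNCLASSIFIED object, which `can_write("SECRET", "UNCLASSIFIED")` forbids, so it gets the `trusted=True` exemption whether or not anyone has shown it is trustworthy.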

I'd be interested to hear more about working MLS systems, too.
Does anyone have any information?
