[cap-talk] keeping discussion on cap-talk - network capabilities,
ben at algroup.co.uk
Mon Nov 1 06:39:41 EST 2004
Toby Murray wrote:
> [Please excuse the terseness of the message below. It's not meant to
> indicate emotion, merely lack of time to formulate a benign reply]
> Mark Miller wrote:
>> Mark Miller wrote:
>>> Jed Donnelley wrote:
>> The distinction made in Paradigm Regained between "controlled subject"
>> and "uncontrolled subject" relates directly to your distinction
>> between "protected execution" and "mutual suspicion". My prior
>> attempts to explain object-caps, such as
>> did not include the notion of a loader or evaluator, for turning data
>> describing behavior (such as instructions) into behavior. (In KeyKOS,
>> this is the tool that makes a Start Key from a Domain Key.) Until we
>> explain the loader, we have a system able to support mutual suspicion
>> but not protected execution. All meaningful suspicion would only be of
>> uncontrolled subjects, and we cannot explain confinement.
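To make the loader point concrete, here is a toy sketch of my own (not KeyKOS, and CPython's `exec` is not a real security boundary; this only illustrates the shape of the idea). Behavior arrives as data, and the loader evaluates it in an environment holding only the capabilities it chooses to grant, so the resulting subject is a controlled one:

```python
def load(source, granted_caps):
    # The subject's entire initial authority is enumerated here: no
    # ambient builtins, only the capabilities the loader grants.
    env = {"__builtins__": {}, **granted_caps}
    exec(source, env)            # turn data describing behavior into behavior
    return env["main"]           # the subject's entry point

log = []
subject = load("def main(x):\n    emit(x * 2)", {"emit": log.append})
subject(21)
assert log == [42]   # the subject's only effect flows through a granted cap
```

Because the loader sees every capability the subject starts with, it can vouch for the subject's lack of other access, which is exactly the evidence confinement requires.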
>> Most of the object-cap patterns that can be explained without
>> mentioning a loader can indeed be supported as well in cap-as-data
>> systems, including crypto-cap systems. Mutually suspicious machines on
>> open networks are uncontrolled subjects to each other and cannot be
>> confined -- they cannot provide credible evidence of their lack of
>> access to each other.
>> But membranes (see <http://www.cap-lore.com/CapTheory/Glossary.html>)
>> are still possible even in such restricted object-cap systems, whereas
>> they are not in cap-as-data or crypto-cap systems.
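Since the glossary entry is terse, here is a minimal membrane sketch of my own in Python (illustrative only; real membranes also handle mutation and identity). A membrane wraps a capability in a revocable proxy and transitively wraps every capability that passes through in either direction, so one revocation severs the whole induced access graph:

```python
class Revoked(Exception):
    pass

def make_membrane(target):
    enabled = [True]  # one shared revocation flag for the whole membrane

    def wrap(obj):
        if not callable(obj):
            return obj            # plain data passes through unchanged
        def proxy(*args, **kwargs):
            if not enabled[0]:
                raise Revoked("membrane revoked")
            # Capabilities flowing in (arguments) and out (results)
            # are themselves wrapped, transitively.
            result = obj(*[wrap(a) for a in args],
                         **{k: wrap(v) for k, v in kwargs.items()})
            return wrap(result)
        return proxy

    return wrap(target), lambda: enabled.__setitem__(0, False)

# Usage: a capability obtained *through* the membrane dies with it.
def secret():
    return "data"

gate, revoke = make_membrane(lambda: secret)  # service that hands out secret
cap = gate()              # cap is a wrapped capability to secret()
assert cap() == "data"
revoke()                  # now gate AND cap are both dead
```

This transitive wrapping is what cap-as-data systems cannot do: once a capability is copyable bits, nothing can interpose on its later transmission.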
> It might be due to my lack of experience in this field (only about 2
> years), or a personal lack of ability, but I don't find the above
> references that helpful in deciphering the definition of a "membrane".
> Could you spell it out for me?
> Secondly, has it been proven that a cap-as-data system can't achieve
> this property? What is the reasoning behind your assertion?
> On this subject, I'd also like to point to a couple of statements in
> previous posts that I find somewhat contradictory.
> Jonathan asserts in
> "...you cannot confine a process whose capabilities you cannot
> detect, and *any* capabilities as data model, regardless of protection
> mechanism, suffers from the deficiency that the capabilities held by the
> process cannot be audited. Without this, the process cannot be confined."
> Later, from Jed in
> "...the only places the right to a "password capability"
> could be exposed (e.g. in memory or on a communication
> channel), also contain other potentially sensitive information
> that could also be so exposed. One can argue that any
> such information should be protected, so what is so special
> about a capability that it should require additional protection?
> The best counter that I have for that argument is to consider
> the example of working with a dump (e.g. from a program crash).
> My program might be an editor run or a scientific simulation
> or .... It faults and I decide to take it to some higher expert
> to deal with the problem. I may know that the data in the
> dump is not sensitive. However, what about any rights
> (e.g. capabilities) that the program has? It may have rights
> to various files or even (heaven forbid) rights to my "home" directory.
> Why can humans read out the caps in a memory dump but not a machine?
I think the point here is that you have to grant access to the dump, and
Jed is providing motivation for granting it.
> Is this not then a data-mining/searching problem, to audit the process?
> Difficult but not impossible. Am I missing something here? (I suppose
> the program could obfuscate to prevent audit, but why would a system
> admin install such a program on a machine and why would a user trust it
> with their caps to their home directory etc.?)
The issue (I think) is that this actually is impossible to do with
certainty. And I would install a program to obfuscate capabilities
precisely because I want to frustrate your attempt to audit it.
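To show why "difficult but not impossible" is too optimistic, here is a trivial obfuscation sketch of my own (the 16-byte value and helper names are hypothetical). A password capability is just bits, and a holder can split it into shares so that no byte-scan of a memory dump ever matches the capability's pattern:

```python
import os
import secrets

CAP = secrets.token_bytes(16)   # the "password capability": 16 secret bytes

def hide(cap):
    # Split into two shares: a random pad and (cap XOR pad). Neither
    # share equals cap, so scanning the dump for cap finds nothing.
    pad = os.urandom(len(cap))
    return pad, bytes(a ^ b for a, b in zip(cap, pad))

def recover(pad, masked):
    return bytes(a ^ b for a, b in zip(masked, pad))

pad, masked = hide(CAP)
assert CAP not in (pad, masked)       # the auditor's scan sees neither share
assert recover(pad, masked) == CAP    # yet the holder reconstructs on demand
```

An auditor who cannot reliably enumerate which bit-patterns are capabilities cannot certify the absence of any, which is what confinement demands.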
> We all know that in the capability field assertions are often taken
> as fact without a solid (or at least generalised) argument. Too many
> times we hear that "X is impossible" with little to no proof. Let's
> not continue this tradition.
> Anderson et al. showed that confinement can be achieved in a (what I
> would call "pure") password capability system.
ApacheCon! 13-17 November! http://www.apachecon.com/
"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff