>If A and B are at the mercy of the capability system for their
>current state, might it be possible to "back out" inappropriate
>communication after the fact, rather than to ensure that only
>appropriate communication was possible on every context switch?
I'm not sure what you mean by "at the mercy of", so I can't address that part, but as to the rest:
There actually has been some work on this in connection with a system called "Time Warp". The idea was to do speculative computation and roll back the threads whose inputs didn't turn out to be consistent with the speculative assumptions. Your proposal involves a similar kind of speculation. There are a number of problems with it.
First, in order to roll things back in this way, you would need to track where all the communication had gone. If you think that through, I believe you'll conclude that everything communicates with the system storage allocator, and that in the end you would need to roll back the *rest* of the system to maintain causality.

Second, in order to decide what to roll back, you would need to have tracked what got transmitted. This requires a per-communication transaction, which increases the cost of the communication by about a factor of 10 in the usual case. Thus, even if it were feasible to do what you propose, it would not be cost-effective.
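To make the bookkeeping concrete, here is a minimal sketch (in C++, with invented names -- this is not how Time Warp was actually implemented) of the per-send record such a scheme would need, and of how rolling back one thread transitively drags in every thread it has communicated with:

// Illustrative sketch only: the bookkeeping a Time-Warp-style rollback
// scheme would need per message send. Names and structure are invented
// for illustration.
#include <cstdint>
#include <set>
#include <vector>

struct ThreadId {
    uint64_t id;
    bool operator<(ThreadId o) const { return id < o.id; }
};

struct SendRecord {          // one record per communication -- this is the
    ThreadId sender;         // "per-communication transaction" that makes
    ThreadId receiver;       // every send substantially more expensive
    uint64_t virtualTime;    // logical time of the send
};

class RollbackLog {
    std::vector<SendRecord> log;
public:
    void recordSend(ThreadId from, ThreadId to, uint64_t vt) {
        log.push_back({from, to, vt});          // extra work on *every* send
    }

    // Rolling back one thread forces a rollback of every thread it has
    // spoken to since the speculation point, transitively. In practice
    // that closure tends to include the storage allocator, and therefore
    // most of the system.
    std::set<ThreadId> rollbackClosure(ThreadId root, uint64_t sinceVt) const {
        std::set<ThreadId> dirty{root};
        bool grew = true;
        while (grew) {
            grew = false;
            for (const auto& r : log) {
                if (r.virtualTime >= sinceVt &&
                    dirty.count(r.sender) && !dirty.count(r.receiver)) {
                    dirty.insert(r.receiver);
                    grew = true;
                }
            }
        }
        return dirty;
    }
};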
Also, it may be that you misapprehend the reason for the controls. The issue is not programs that need "rework." The issue is programs that are provided by third parties and are potentially hostile *on purpose*. That is, we are not trying to test programs to verify that they are well behaved. We are trying to externally impose a constraint on them whether they like it or not.
In any case, I'm unclear what benefit you hope to get by deferring the checks. I can imagine some boundary cases, but I wonder which ones you might actually be thinking of?
Separately, note that forgery of capabilities must not be possible. There is no way to tell a forged capability from a real one, so if capabilities can be forged, you don't have a capability system. Prevention of forgery is accomplished by one of two mechanisms: partitioning or encryption. In EROS, partitioning is used. Data operations cannot change or examine capabilities, and capability operations cannot change or examine data. Encryption is also possible, but it has two problems associated with it.
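For concreteness, here is a rough sketch of the partitioning idea. The types and operations are invented for illustration and are not EROS's actual kernel structures; the point is only that capabilities live in capability-only containers that data operations can never reach, so user-level data writes can never synthesize a capability bit pattern:

// Minimal sketch of capability/data partitioning (illustrative only).
// Only the kernel touches Capability fields.
#include <array>
#include <cstddef>
#include <cstdint>

struct Capability {            // opaque to user code; only the kernel
    uint32_t objectId;         // reads or writes these fields
    uint32_t permissions;
};

struct Node {                              // holds only capabilities
    std::array<Capability, 32> slot;
};

struct Page {                              // holds only data
    std::array<std::byte, 4096> byte;
};

// Data operations: read/write Page contents, never Capability fields.
void writeData(Page& p, std::size_t off, std::byte b) { p.byte.at(off) = b; }

// Capability operations: copy whole capabilities between Node slots,
// never exposing their bits as data.
void copyCap(const Node& src, std::size_t i, Node& dst, std::size_t j) {
    dst.slot.at(j) = src.slot.at(i);
}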
>> which the confinement box can disclose things directly; the confinement
>> is an *initial* contract; it's continuance depends on the actions of the
>It sounds like you're saying here that confinement *would* help,
>if it weren't for the inconvenient fact that the users of a
>system live outside of the confinement?
Yes and no. Confinement helps, in the sense that you can ensure that a confined program won't have leaks. It also helps in that the *program* knows it cannot be tampered with from the outside (i.e. the user cannot reach in and grab something they shouldn't).
The catch is that the entity inside the confinement box is taking direction from the user (or rather, from their software). The user might simply hand the entire confinement box to somebody else, and the confinement box has no way to know who is directing it. While we could build a system with principal IDs, this wouldn't help for reasons stated in previous mail -- the legitimate user could build a proxy front end and hand *that* to somebody else, rendering their principal ID opaque.
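As a sketch of why the principal ID buys nothing here (names and interfaces invented for illustration, not any real system's API):

// The service checks the caller's principal, but the legitimate user can
// stand up a forwarder that relays a third party's requests under the
// user's own identity, so the check tells the service nothing useful.
#include <string>

struct Request { std::string payload; };

struct Service {
    std::string allowedPrincipal = "alice";
    bool handle(const std::string& principal, const Request&) {
        return principal == allowedPrincipal;   // identity check passes...
    }
};

struct Proxy {
    Service& svc;
    std::string ownerPrincipal;                 // "alice"
    bool forward(const Request& r) {            // ...even though the request
        return svc.handle(ownerPrincipal, r);   // really came from someone else
    }
};

int main() {
    Service svc;
    Proxy handedToMallory{svc, "alice"};        // alice hands this out
    Request fromMallory{"do something sensitive"};
    return handedToMallory.forward(fromMallory) ? 0 : 1;   // check succeeds
}

The identity check succeeds on every request, even though the requests originate with someone the service has never heard of.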
Thus, putting a sensitive capability inside a box prevents it from being given out directly, but does not prevent it from being abused.
One case where such boxes *are* helpful is when you have a capability that is too powerful, and you only want to give a user a subset of its function. You can build a wrapper object around it to accomplish this. Note, however, that it doesn't prevent the user from giving the *wrapper object* out to others.
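A minimal sketch of such a wrapper, again with invented names -- a read-only facet over a read/write object:

// The wrapper holds the powerful capability and exposes only the subset
// of its interface we are willing to grant. Nothing here stops the holder
// from passing the wrapper itself along; it only narrows *what* can be done.
#include <string>

class FileObject {                         // the "too powerful" capability
public:
    std::string read() const { return contents; }
    void write(const std::string& s) { contents = s; }
private:
    std::string contents;
};

class ReadOnlyFacet {                      // the wrapper object
public:
    explicit ReadOnlyFacet(const FileObject& f) : file(f) {}
    std::string read() const { return file.read(); }
    // no write() -- that part of the interface is simply not exposed
private:
    const FileObject& file;
};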
Jonathan S. Shapiro, Ph. D.
IBM T.J. Watson Research Center
Phone: +1 914 784 7085 (Tieline: 863)
Fax: +1 914 784 7595