Mark Miller, Norm Hardy, and I (and probably various others) have long held that principal-based access control is impossible unless the principals cannot communicate. In this note I'm going to examine that belief. Thanks in advance to Mark for acting as unwitting victim in my examples :-)
Put simply: if you tell me something sensitive, and I am able to pick up a telephone and call Mark, there is no way for you to prevent me from conveying the information to Mark if I wish to disclose it. Unless you can find some way to take away any possibility of communication between Mark and me, you cannot control the disclosure. Short of sticking me in a locked cell or killing me (which sort of makes the disclosure pointless), there is no way to do this.
I have argued that in the real situations where a company is concerned to protect information, the real human beings involved can talk in a hallway or on the phone. Humans are very efficient compression engines; they can take a large blob of information, whittle it down to the three key statements that someone else didn't know, and give away the whole thing. Given this, I have argued that it is at best silly and at worst actively misleading to build computer systems in which you can state "this file can be read by Fred but not by Mary." Since this statement is not enforceable, it's actually dangerous to perpetuate the belief that protection exists.
The problem is that human beings aren't rational, and if I try to explain to, say, IBM that it is simply impossible to stop me from disclosing what I know, they get very uncomfortable. (I may pay in court later, but stopping the disclosure is impossible.)
Mark argues that there is a difference between a security policy and an admonition policy. A security policy is proscriptive. It incorporates statements of the form "User X cannot perform action Y on object Z." An admonition policy takes the form "User X *should not* be permitted to perform action Y on object Z." The difference lies in how "collusion" or "deception" is viewed. For purposes of security they are the same: one user either intentionally or unintentionally gives access to another when they shouldn't. The admonition approach argues that since such access cannot actually be prevented, it should be flagged rather than forbidden.
I'd add to MarkM's suggestion that admonition systems are relatively easy to build, since the policy "admonish when" is simple for a reference monitor to implement. This is appealing.
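To make the "admonish when" idea concrete, here is a minimal sketch of such a reference monitor. Everything here is hypothetical illustration: the (principal, action, object) policy triples and the class/method names are my own, not anyone's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AdmonitionMonitor:
    # Triples (principal, action, obj) that *should not* occur.
    policy: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def check(self, principal, action, obj):
        """Always grants access; records an admonition when the policy
        says the principal should not perform the action."""
        if (principal, action, obj) in self.policy:
            self.audit_log.append((principal, action, obj))
        return True  # access is never denied, only recorded

monitor = AdmonitionMonitor(policy={("Mary", "read", "design.doc")})
monitor.check("Fred", "read", "design.doc")  # permitted, unremarked
monitor.check("Mary", "read", "design.doc")  # permitted, but logged
```

The point of the sketch is that the monitor's decision logic is a single table lookup; all of the policy's force lives in what is done with the audit log afterward.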
I'd also note that the argument against programmatic proxies based on the fact that they are beyond the skills of most users to write is unmitigated crap, and hereby promise to write a trojan scripting language (Hey! A totally uncontested niche for E!)
I have come to wonder whether this argument remains correct when bandwidth is taken into account. It's true that users compress some things efficiently, but a lot of data simply doesn't compress very well. This limits the rate at which a colluding user can disclose. If we allow Jonathan to simply perform high-bandwidth IPCs to Mark, then data can be leaked as fast as the system will run. If we force the leakage to occur in a person-to-person form, then it leaks more slowly. This can, of course, be defeated by a grand conspiracy (hundreds of users conspiring), but such strategies have statistical access patterns that seem likely to get recognized quickly.
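A back-of-the-envelope calculation makes the bandwidth gap vivid. The figures below are illustrative assumptions, not measurements: spoken conversation carries on the order of tens of bits per second of information, while a local IPC path easily moves tens of megabytes per second.

```python
# Assumed, illustrative rates -- not measured values.
DOC_BYTES = 10 * 1024 * 1024      # a 10 MB sensitive data set
speech_Bps = 50 / 8               # ~50 bits/s of speech ≈ 6.25 bytes/s
ipc_Bps = 100 * 1024 * 1024       # 100 MB/s over a direct IPC channel

speech_days = DOC_BYTES / speech_Bps / 86400  # roughly 19 days of talking
ipc_seconds = DOC_BYTES / ipc_Bps             # a fraction of a second
```

Under these assumptions the same leak takes weeks of conversation versus a fraction of a second of IPC, which is the sense in which forcing leakage into person-to-person form slows it down.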
Mark has argued in the past that there isn't much bandwidth difference between a direct IPC and a CCD camera pointed at a display. With a modern CPU, I can capture the data very nearly as fast as it can be scrolled.
I initially thought that a form of bandwidth-limiting confinement could solve this, but in the face of parallel execution this is untrue.
I also want to note that the low-level protection mechanisms necessary to implement admonition systems are identical to the low-level mechanisms that implement principal-based access control. At the bottom, there must be a principal id and an access check (in our implementation we would probably rename it an admonition check).
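The mechanism identity can be shown in a few lines. In this hypothetical sketch (all names are mine), the lookup is the same in both cases; only the final branch distinguishes a security policy from an admonition policy.

```python
def guarded_access(principal, action, obj, restricted, admonish_only):
    """One low-level mechanism, two policies.

    restricted: set of (principal, action, obj) triples the policy covers.
    admonish_only: True for an admonition policy, False for a security policy.
    """
    hit = (principal, action, obj) in restricted  # the access check itself
    if not hit:
        return ("allow", None)
    if admonish_only:
        return ("allow", "admonished")  # admonition: permit and record
    return ("deny", None)               # security: refuse outright

restricted = {("Mary", "read", "payroll")}
guarded_access("Mary", "read", "payroll", restricted, admonish_only=True)
guarded_access("Mary", "read", "payroll", restricted, admonish_only=False)
```

Since everything up to the final branch is shared, any system that ships the admonition mechanism has also shipped the mechanism for the denial version.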
Given that the mechanisms are the same, it is not clear that it is possible to build a system providing admonition that prevents the construction of software purporting to provide principal-based controls.
So: can anyone propose a practical admonition design based purely on capabilities? Such a design must not rely on confinement, since disclosing the information to the *intended* user would itself violate the confinement boundary.
Given that we cannot prevent disclosure, can we recover auditability?
Jonathan S. Shapiro, Ph. D.
IBM T.J. Watson Research Center
Phone: +1 914 784 7085 (Tieline: 863)
Fax: +1 914 784 7595