[cap-talk] Forgeable capabilities
toby.murray at comlab.ox.ac.uk
Mon Jul 9 16:11:48 EDT 2007
On Mon, 2007-07-09 at 12:42 -0700, David Wagner wrote:
> Toby Murray writes:
> >S(t) = getAuthority?login?password ->
> > if login == X and password == Y then
> > return!t -> S(t)
> > else
> > return!null -> S(t)
> >Alice(login,password) = getAuthority.login.password -> return?cap ->
> >Now if we instantiate Alice as Alice(X,Y) and then check whether Alice
> >can ever acquire t (using a safety analysis check) then we'd find she can.
> >If we instantiate Alice otherwise (i.e. she doesn't know X and Y) she
> >won't be able to acquire such permissions.
> Well, that sounds unconvincing. My reaction would be to suspect that
> the model must be making unfounded assumptions if it comes to those
> conclusions starting only from that information. For instance, it
> sounds like the model must be assuming that if Alice isn't instantiated
> with the password then she cannot guess it and cannot learn it (e.g.,
> from someone else who knows it); but that's not necessarily a reasonable
> assumption. Reasoning about knowledge is easy if you don't insist on
> soundness. The hard part is reasoning about knowledge in a sound way,
> especially given the possibility of guessing, covert channels, timing
> side-channels, leaks due to garbage collection and other oddities, and
> so on. Real systems have many features that pose a tremendous challenge
> for sound reasoning about knowledge.
I take your point. I hadn't interpreted your original meaning correctly.
I had thought that you were referring to the inability of current access
control models to reason about the knowledge that a subject possesses as
part of the initial conditions of the system, rather than the genuinely
thorny issue of reasoning about acquirable knowledge.
Notwithstanding, I believe that the level of accuracy of the CSP model
is probably about what Pierre had in mind with his original question.
(Although Pierre, please correct me if I'm wrong.) The model accurately
distinguishes between the cases where Alice knows the password
initially, and when Alice doesn't, and cannot learn it -- which might
well be an unsafe assumption, indeed.
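That distinction can be made concrete with a minimal Python sketch of the guard process quoted above (not CSP, just an executable paraphrase; the names X, Y and t are illustrative placeholders, as in the original model): the guard releases the capability t only when the presented login and password match the secrets it was instantiated with.

```python
# Sketch of the guard process S(t): release the capability t only on
# a correct login/password pair. All names are illustrative.
X, Y = "alice", "s3cret"   # secrets the guard was instantiated with
t = object()               # the guarded capability

def get_authority(login, password):
    """Return t if the credentials match, else None (the null return)."""
    if login == X and password == Y:
        return t
    return None

# Alice instantiated with the secrets acquires t ...
assert get_authority(X, Y) is t
# ... and Alice instantiated otherwise acquires nothing --
# *assuming*, as the model does, that she cannot guess or learn X and Y.
assert get_authority("alice", "wrong") is None
```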
The most easily exploited vulnerabilities arise at the moment because of
excess permission. The bar might soon be raised (if we get some decent
least privilege adoption in the mainstream) so that the most easily
exploitable vulnerabilities arise as the result of excess authority
(rather than permission), cf. confused deputy vulnerabilities. I expect
(or at least, strongly hope) that it will be some time before the
easiest way to breach the majority of systems is to learn secret
knowledge via non-overt channels. I'd like to get good solutions for the
first two cases first.
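To illustrate the authority-versus-permission distinction, here is a hedged sketch of the classic confused deputy pattern (the filenames and the check are illustrative, not from any real system): the deputy holds a permission the client lacks, and exercises it on a client-supplied name without asking whether the client was entitled to designate that name.

```python
# Confused deputy sketch: the deputy's permissions are checked,
# but the client's entitlement to designate the target is not.
DEPUTY_WRITABLE = {"/tmp/output.log", "/audit/billing"}  # deputy's permissions

def compile_for_client(client_supplied_output):
    # Bug: the deputy checks only ITS OWN permission on the file,
    # never whether the client could name it legitimately.
    if client_supplied_output in DEPUTY_WRITABLE:
        return "wrote to " + client_supplied_output
    raise PermissionError(client_supplied_output)

# The client has no permission on /audit/billing, yet obtains the
# effect of writing it: excess *authority* without excess permission.
assert compile_for_client("/audit/billing") == "wrote to /audit/billing"
```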
The safety problem is well understood, i.e. we can reason effectively
about acquirable permissions. My own work is trying to reason about
acquirable authority, which until now has been done only in terms of
acquirable permissions. Acquirable knowledge is harder again (perhaps
orders of magnitude), but I don't see that our current lack of ability
here should detract from the more immediate concerns. (Although I don't
mean to imply that you were doing so; I just wanted to put that thought
out there.)
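For what "reasoning about acquirable permissions" looks like in miniature: safety analyses of this kind typically reduce to a fixed-point or reachability computation over a permission-transfer graph. The following toy sketch (subjects, permissions, and grant edges all illustrative) over-approximates what each subject can ever acquire via chains of grants.

```python
# Toy safety analysis: which permissions can a subject ever acquire,
# given "may grant to" edges between subjects? Names are illustrative.
perms = {"alice": {"read:t"}, "bob": set(), "carol": set()}
grants = {"alice": {"bob"}, "bob": {"carol"}}  # granter -> grantees

def acquirable(subject):
    """Fixed point: a grantee acquires everything its granters acquire."""
    acquired = {s: set(p) for s, p in perms.items()}
    changed = True
    while changed:
        changed = False
        for granter, grantees in grants.items():
            for grantee in grantees:
                new = acquired[granter] - acquired[grantee]
                if new:
                    acquired[grantee] |= new
                    changed = True
    return acquired[subject]

# carol can acquire read:t transitively, via bob.
assert "read:t" in acquirable("carol")
```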