[E-Lang] MintMaker with ACLs
Jonathan S. Shapiro
Sun, 04 Feb 2001 13:58:09 -0500
> When Jonathan Shapiro replied that this was essentially how EROS works,
> Tyler added,
> > Interesting, it didn't immediately occur to me that Hal had solved the
> > problem my making callback objects capabilities.
> I am surprised to hear that you consider the use of cooperative processes
> in different protection domains to be the introduction of capabilities.
> In other contexts you have also described various proposed ways of
> addressing ACL problems as being de facto capabilities. What is it
> about my proposal for addressing the callback problem that makes it
> look like a capability system?
In every real implementation I can think of, the following is true:
When the callback crosses the process boundary from caller to callee, it
does so not via an ACL-regulated mechanism, but by serializing a message
across a communications descriptor (often, but not necessarily, a
socket). That descriptor is a capability. While it is not passed between
the two processes, a channel of this form already exists by prior
arrangement in order for the two programs to be communicating in the
first place.
Note further that the I_SENDFD option of the STREAMS package exists
precisely for the purpose of transferring such descriptors.
As an aside, I'd add that the networking stack in UNIX is probably the
easiest piece of code to adapt into other operating systems. While there
has been a lot of header-file cross corruption, the net stack is almost
completely free of misguided access control policies. While I could
raise structural objections (too monolithic), I can also argue from data
that in this instance monolithic structure may be justified. The only
remaining issue, then, is the absence of meaningful authority checks on
the socket() call that creates new sockets.
> This is a good point, but after all you are exploiting a bug in the
> program which you are fooling, right? It was careless and didn't do
> things as carefully as it should. Such bugs are common and fill the CERT
> advisory lists....
This is one of those lines of reasoning that is factually accurate and
utterly misleading at the same time. The best programmers make one
security error per 1000 lines -- that's *after* the Q/A team is done.
Given this, a security design predicated on the correctness of
general-purpose code is simply irrational. For closely controlled,
small, highly restrictive code bases it is perhaps possible to purge all
bugs. In general, however, it is not possible. Therefore, any reliance
on bug-free code in general purpose systems belongs in the realm of
opium dens, not in the realm of serious security dialog.
Arguably, the most critical attribute of confined subsystems is not that
they ensure privacy, but that they provide a conveniently structured
scoping for the containment of and recovery from errors.
> Isn't a similar level of carelessness also possible with capabilities?
It certainly is, but for a variety of reasons they tend to get in the
way less often. First, the scope of an error's impact in a capability
system is more tightly constrained, because you do not tend to proceed
by linking 50 libraries together each of which is 95% tested. Second,
every act invoking an authority names the authority under which it acts.
Someone making modifications
therefore tends more often to see the authority error directly because it
is staring you in the face. This is certainly not true of set[e]uid()
and friends -- you may be deep in some function in some library and not
know that you are acting under a protection domain established far up
the stack. Think about how you could screw yourself with
(unwind-protect ...) in Lisp if you used it for authority management.
I grant that none of these claims for greater reliability are
mathematically demonstrable, but they do seem to be substantiated by
experience.
> I can't help wondering whether the supposed advantage of capability
> systems with regard to security flaws isn't due more to the unusually
> high caliber of the people developing this software than to the inherent
> superiority of the methodology.
Flattering, but I think it's unlikely.
> As the saying goes, you can't
> make any system foolproof because fools are so ingenious.
Ah, but with capabilities we can build padded rooms and safe places for
fools to play.