[E-Lang] MintMaker with ACLs

Marc Stiegler marcs@skyhunter.com
Fri, 9 Feb 2001 17:28:14 -0700

I wrote this several days ago in response to email by Hal. I saved it in my
draft folder for a quality check before sending it out, and over the course
of the next few hours my memories altered, and I thought I had actually sent
it out :-)

It has enough potentially controversial extrapolations that, upon
re-reading, I thought it was worth posting even though it is late :-)


> This attack does not exist in an ACL system.  If we had a world built
> on capability systems, wouldn't the pages of CERT advisories be full of
> cases where excessively powerful capabilities are being passed around?
> I can't help wondering whether the supposed advantage of capability
> systems with regard to security flaws isn't due more to the unusually
> high caliber of the people developing this software than to the inherent
> superiority of the methodology.  Once you have the same sort of people
> using capabilities who make the colossal blunders we read about, won't
> they find new ways of making mistakes?  As the saying goes, you can't
> make any system foolproof because fools are so ingenious.

Uh oh, we've been hoist with the petard of our own genius :-)

This is a seriously hard question. In a capability world the CERT advisories
would certainly be about excessively powerful capabilities being passed
around. The interesting questions would be these:

--has the number of advisories gone down?
--has the median alarmingness of the advisories gone down?
--how quickly do the advisories become obsolete, i.e., how quickly are
problems fixed?

These questions can, alas, only be assessed once we actually have a world of
capability systems to observe. So the claim that the situation would be just
the same, while serious and important, cannot be disproven: the
counterfactual world does not yet exist. Instead I am going to nibble at the
question from several different directions and see if I can convince you
that there are good reasons for thinking the situation would be better than
it is today.

First of all, let me point out an opportunity for asymmetry here. CERT
advisories are very real-world. For this purpose, then, I consider it fair
to compare current and near-future ACL systems to current and near-future
capability systems. If we postulate a concrete world of E and
EROS versus a concrete world of C++ and FreeBSD, we have a case for
believing it is a better world simply because E and EROS are "better": E and
EROS express capability security more correctly than FreeBSD and C++ express
ACL security. This is a reason for optimism regardless of the
theoretical-ACL versus theoretical-capability discussion. This asymmetry
could reasonably impact all three questions: quantity, quality, and speed of
obsolescence of advisories.

Second of all, POLA in the context of a capability-secure language like E is
not merely a matter of good security design, it is more fundamentally a
matter of good object design: part of modular architecture is a call for
handing out small objects with limited scope, both as a bug-prevention
strategy and a readability strategy. POLA and modular design, in an E
context, are mutually reinforcing. My own experiences clearly suggest this
nice-sounding idea is true in practice, true to the extent that in E in a
Walnut I state, "clean architecture in a capability secure infrastructure
makes its own luck". Security is still not free, it does not arise only
because you had a clean architecture. But it can be surprisingly
inexpensive. We could reasonably expect this factor to reduce the quantity
of advisories; even more strongly, the resulting ubiquity of POLA could
reasonably have a very strong effect on the quality (alarmingness) of the
advisories.

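The MintMaker of the subject line is the canonical illustration of this
marriage of POLA and modular design: holding a purse grants exactly the
authority to move that purse's money, and nothing more. Here is a rough
sketch of the idea in Python rather than E (a stand-in of my own devising;
note that Python's underscore-prefixed attributes are only a convention, not
the genuine confinement a capability-secure language provides):

```python
class Mint:
    """Issues purses and privately tracks their balances."""
    def __init__(self):
        self._balances = {}   # purse -> balance; reachable only via the mint

    def make_purse(self, initial=0):
        purse = Purse(self)
        self._balances[purse] = initial
        return purse

    def _transfer(self, source, dest, amount):
        if amount < 0 or self._balances[source] < amount:
            raise ValueError("insufficient funds")
        self._balances[source] -= amount
        self._balances[dest] += amount


class Purse:
    """Holding a purse conveys exactly the right to spend its funds."""
    def __init__(self, mint):
        self._mint = mint

    def balance(self):
        return self._mint._balances[self]

    def deposit(self, amount, source):
        # Money moves only between purses of the same mint.
        if source._mint is not self._mint:
            raise ValueError("purses belong to different mints")
        self._mint._transfer(source, self, amount)


mint = Mint()
alice = mint.make_purse(100)
bob = mint.make_purse()
bob.deposit(40, alice)     # alice: 60, bob: 40
```

Handing someone bob lets them accept deposits into, and spend from, that one
purse; it conveys no authority over alice's purse or the mint itself. The
small object with limited scope is simultaneously the clean design and the
security boundary.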
Third, if we buy the belief that capability systems are more extensible than
ACL systems (a belief we have not yet bought, but to the extent one buys it
later, one can come back and accept these points), this has ramifications at
every level of development. In a two-person team, where person A is using a
subsystem built by person B, we find that in a capability system both people
are in a position to enhance the security of the overall system by
refactoring the subsystem's visible interface into more disciplined
components. So the overall security of the result will reflect the quality
of the more competent person (A or B) rather than the quality of person B.
At the other extreme of development, if a deployed system is broken, you
have a better chance of being able to hire a third party, or buy a package
from a third party, that extends and wraps the base system in a more secure
fashion. This latter characteristic would lead to more rapid repair of
problems, obsoleting advisories more quickly, since more competing vendors
are in a position to fix them.
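As a sketch of the kind of refactoring person A can do (again a hypothetical
Python example of my own, not something from B's actual code): A narrows B's
subsystem by handing out a facet that carries only the authority A intends
to grant.

```python
class KeyValueStore:
    """Person B's subsystem: carries full read/write/delete authority."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value

    def delete(self, key):
        del self._data[key]


def read_only_facet(store):
    """Person A's refactoring: a facet exposing only 'get'.

    The store itself is captured in the closure, never exposed, so a
    holder of the facet can read but cannot reach 'put' or 'delete'.
    """
    class ReadOnlyFacet:
        def get(self, key):
            return store.get(key)
    return ReadOnlyFacet()
```

Passing read_only_facet(store) instead of store means the receiver's
competence no longer matters to the store's integrity: even a buggy or
hostile receiver holds only read authority. This is why the overall security
reflects the more competent of A and B.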

Anyway, I've thrown down a bunch of reasons for optimism, and I look forward
to some day having concrete worlds to compare :-)