[cap-talk] Examples where ACLs are a better solution than capabilities
David Chizmadia (JHU)
chiz at cs.jhu.edu
Mon Oct 8 14:39:41 EDT 2007
Mark, et al:
Mark Miller wrote:
> Say you have a large installed base of technology, infrastructure,
> thinking, habits, superstition, process, emergency response teams,
> etc, all built up around ACLs. Say you want to see this whole system
> of people and software eventually switch to capabilities. Compare two strategies:
> 1) You say: "To get the benefits of capabilities, you must stop using
> ACLs. Write off all those sunk costs. They're wasted anyway. Stop
> throwing good bits after bad ones."
> "But how do we know we'll be safe in this untried brave new world you offer?"
> "<complex answers rooted in theory and history, both of which need
> unusual interpretation>"
> "How do we maintain our safety during the switch from ACLs to capabilities?"
> 2) You say: "Your ACL systems aren't providing adequate safety. To
> gain the benefits of capabilities, phase them in as additional checks
> that must be passed. Over time, if you see that most of your safety
> comes only from the capability checks, then you can phase out the ACL
> restrictions. If you're not satisfied that capabilities by themselves
> are better, you can keep your ACL checks."
> The result of strategy #2, if successful, will be messier than the
> results of strategy #1, if successful. If we're confident that both
> strategies would succeed, we should choose #1. But...
> Remember that attempts to get C programmers to switch directly from C to
> objects repeatedly failed. What succeeded instead took two steps: a)
> Getting C programmers to use objects in addition to C, making C++. b)
> Relieving C++ programmers of the bone crushing complexity of C++, by
> removing from it the C crap, making Java.
> In other words:
> The other 90% of the work is always migration strategy
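(The phased-in dual check of strategy #2 can be sketched in a few lines.
This is a hypothetical illustration; the class and method names are mine,
not anything from the post or any real system.)

```python
# Sketch of strategy #2: every access must pass BOTH the legacy ACL
# check and a new capability check. If experience later shows the
# capability check is doing all the work, the ACL lines can simply
# be deleted without changing the rest of the code.

class Capability:
    """Unforgeable proof of authority over one resource.
    Unforgeability comes from Python object identity: you can only
    hold a Capability that the resource itself minted."""
    pass

class GuardedFile:
    def __init__(self, contents, acl):
        self._contents = contents
        self._acl = set(acl)       # legacy ACL check (phase-out candidate)
        self._minted = set()       # capabilities this file has issued

    def mint_capability(self, principal):
        # During the transition, even minting is ACL-gated.
        if principal not in self._acl:
            raise PermissionError(principal)
        cap = Capability()
        self._minted.add(cap)
        return cap

    def read(self, principal, cap):
        # Transitional rule: both checks must pass.
        if principal not in self._acl:      # old ACL check
            raise PermissionError(principal)
        if cap not in self._minted:         # new capability check
            raise PermissionError("no capability for this file")
        return self._contents

f = GuardedFile("payroll", acl={"alice"})
token = f.mint_capability("alice")
assert f.read("alice", token) == "payroll"
```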
I actually advocate a third approach, which capitalizes on the
same programmer laziness that your C++ example illustrates...
Start with the assumptions that there is a real emerging
economic incentive to build demonstrably "secure" systems (currently
unvalidated, but there are indications that it may be on the
way to becoming true) and that ocap architectures do in fact lead to
more easily demonstrated OS, network, and application "security". (I
am unaware of any documented controlled studies of the relative ease
of validating the "security" characteristics of ocap vs ambient
authority (amau) architectures that would let me treat the latter
point as more than an assumption.)
	I would then start the transition by developing robustly
designed and documented ocap middleware* that runs on current OSes
and includes emulation of existing amau APIs. Existing amau
applications and network services are then relinked to the emulation
layer of the middleware in order to capitalize on the more
demonstrable "security" that it provides. With the ocap middleware's
native APIs available to programmers - and presumably both better
performing and more aligned with the OO languages in common use - we
would start to see new services and applications being developed
against the native APIs.
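(A minimal sketch of what such an emulation layer might look like,
under my own assumptions - the class names and the per-process table
of granted capabilities are hypothetical, not taken from the post.)

```python
# The legacy ambient-authority call open(path) is re-implemented on
# top of a capability-native API: the path name is resolved only
# against capabilities this process actually holds, so legacy code
# keeps its old API while new code can use the capabilities directly.

class FileCap:
    """Capability-native file object: holding it IS the authority."""
    def __init__(self, contents):
        self._contents = contents

    def read(self):
        return self._contents

class AmauEmulation:
    """Per-process emulation layer mapping path names to the
    capabilities granted to this process at startup."""
    def __init__(self, granted):
        self._granted = dict(granted)   # path -> FileCap

    def open(self, path):
        # Same shape as the legacy API, but no ambient authority:
        # an unheld path is indistinguishable from a missing file.
        try:
            return self._granted[path]
        except KeyError:
            raise FileNotFoundError(path)

env = AmauEmulation({"/etc/motd": FileCap("hello")})
assert env.open("/etc/motd").read() == "hello"
```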
Simultaneously, the emerging robustly designed and documented
CapOSes would have to adopt the OCap middleware native APIs as their
own native APIs, which would provide them with immediate access to a
rich collection of the services and applications that truly cause an
OS to be adopted.
With a complete chain of demonstrable security running from
hardware all the way up to applications, both the "Build Security
In" and the OCap communities would be able to declare success and
get on with the business of figuring out how to mitigate the truly
frightening residual risks...
* In my utopian meditations, I call this ocap middleware POLEAXE,
for Principle Of LEast Authority eXecution Environment :-D