[cap-talk] Articulating Reliance Assumptions (was: Capability levels - transparent network extension, no encryption)

Mark S. Miller markm at cs.jhu.edu
Sun Aug 20 15:09:52 CDT 2006


Jed at Webstart wrote:
> I think I see where my confusion is.  I think I misunderstood what 
> you meant by "mutually non-reliant machines" plugged into the 
> Internet - though in retrospect I think your wording is clear.  I was 
> focusing on the network as the "non reliant" hardware that I was 
> trying to extend over, but assuming that the machines at both ends 
> (what I refer to as the CCSs) can be depended on.

Yes. In general, participants in security discussions make many unstated 
reliance assumptions -- assumptions about which entities may rely on which 
other entities. A distressing number of difficulties in communicating about 
security stem from such diverging unstated assumptions. These can be hard to 
uncover. But until these reliance relationships are made adequately explicit, 
it can be hard to figure out what we're actually disagreeing about.

I feel like our field has still only just scratched the surface on developing 
terminology and distinctions for being more explicit about reliance 
relationships. But we are making progress!


> However, tying this back to the source, this is the quote from your 
> thesis that I was concerned with:
> _______________________________________________________________
> "11.5 The Limits of Decentralized Access Control
> 
> This dissertation mentions three forms of capability system: 
> operating systems like DVH,
> programming languages like the non-distributed subset of E, and 
> cryptographic capability
> protocols like Pluribus. The object-capability model we have 
> presented applies only to the
> first two: operating systems and programming languages. Our model 
> implicitly assumes a
> mutually relied upon platform, which is therefore a central point of 
> failure. In the absence
> of this assumption, we are left only with cryptographic enforcement 
> mechanisms. Cryptog-
> raphy by itself can only enforce weaker access control properties. In 
> exchange, cryptography
> enables us to build systems with no central points of failure."
> _________________________________________________________________
> 
> Perhaps you can see where my confusion came from.  You say "Our model 
> implicitly assumes a mutually relied upon platform [e.g. as you seem 
> to suggest above the two CCS systems that mutually rely on each 
> other], which is therefore a central point of failure."  Why is it 
> that a "mutually relied upon platform" implies a single point of 
> failure?

It follows from the meanings of "relied upon", "central point of failure", and 
"platform".

First, I should double check that you find my usage of "central point of 
failure" acceptable:

From section 5.2:
# Given a set of objects, anything within the reliance sets of all of them is
# a central point of failure for that set. A platform is a central point of
# failure for the set of all possible programs running on that platform.

In other words, if Alice and Bob both rely on Carol, then Carol is a central 
point of failure for the set {Alice, Bob}. Everything in the set is vulnerable 
to Carol's misbehavior. Even if Alice and Bob are correct, if Carol 
misbehaves, then they may misbehave too.

Let's say that Alice and Albert are objects hosted on CCS platform A, and that 
Bob and Betty are objects hosted on CCS platform B. Alice and Albert 
necessarily rely on platform A, and Bob and Betty necessarily rely on platform 
B. If A relies on B, then B is a central point of failure for the set of 
everything we're talking about here.

OTOH, if A and B are mutually non-reliant, but Alice relies on Bob, then Alice 
also relies on B, but Albert does not. Mutual reliance among *platforms* 
creates central points of failure and should be avoided. However, among 
objects running on mutually defensive platforms, it is normal for some of 
these objects to make application-level decisions to rely on some of the 
other objects, and thereby to rely on their platforms. This is the necessary 
price for many patterns of cooperation. The distributed confinement examples 
of 11.5.1 and http://www.erights.org/elib/capability/dist-confine.html both 
involve application-level choices to rely on other platforms. These scenarios 
are non-transparently different from simple object-capability confinement, 
since the need for these choices must be visible.
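
To make the above concrete, here is a minimal sketch, in Python, of reliance 
treated as a transitive relation over a directed graph. The names and the 
graph are just the example from this message; none of this code is from the 
dissertation.

# Toy model: an entity relies on its platform and on anything it
# explicitly chooses to rely on; reliance is treated as transitive.
from typing import Dict, Set

relies_on: Dict[str, Set[str]] = {
    "Alice":  {"A", "Bob"},   # hosted on A; also chooses to rely on Bob
    "Albert": {"A"},          # hosted on A
    "Bob":    {"B"},          # hosted on B
    "Betty":  {"B"},          # hosted on B
    "A":      set(),          # A and B are mutually non-reliant
    "B":      set(),
}

def reliance_set(entity: str) -> Set[str]:
    """Everything this entity relies on, directly or transitively."""
    seen: Set[str] = set()
    stack = [entity]
    while stack:
        for dep in relies_on.get(stack.pop(), set()):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

def central_points_of_failure(entities: Set[str]) -> Set[str]:
    """Anything within the reliance sets of all the given entities."""
    sets = [reliance_set(e) for e in entities]
    common = sets[0].copy()
    for s in sets[1:]:
        common &= s
    return common

print(reliance_set("Alice"))   # contains A, Bob, and (via Bob) B
print(reliance_set("Albert"))  # contains only A
print(central_points_of_failure({"Alice", "Albert", "Bob", "Betty"}))
# empty: with mutually non-reliant platforms, no central point of failure

relies_on["A"] = {"B"}         # but if platform A relies on platform B...
print(central_points_of_failure({"Alice", "Albert", "Bob", "Betty"}))
# {'B'}: B becomes a central point of failure for the whole set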


> Remember our discussion about distributed hardware (no 
> single point of failure) that can reliably process network addresses 
> and how something like a DCCS can be built on such an infrastructure.
> 
> I wonder if I'm not somehow also misunderstanding the above.  The 
> specific issue that I don't understand is how cryptographic systems 
> can achieve any more fault tolerance than non-cryptographic systems - 
> or vice versa, how non-cryptographic systems are necessarily less 
> fault tolerant.

How does a non-crypto system assure the authenticity of network addresses? In 
a DCCS system, how do we know that other machines haven't been plugged in to 
the wires in question?

If the answer is "we guard physical entry into the building which houses all 
the wires and machines", then the building and its guards are a central point 
of failure. If the answer is instead "we assume that no one will build a NIC 
card that will lie about its MAC address," then each NIC card is a central 
point of failure for the entire system.

Decentralized crypto scenarios also make implicit assumptions regarding 
physical access. But these assumptions are free of central points of failure. 
Those relying on platform A rely on everyone with physical access to (and the 
ability to tamper with) platform A. And similarly with platform B. But there 
doesn't need to be anyone with physical access to both.


> From my perspective (e.g. consider a "cryptographic
> system" like Amoeba or NLTSS vs. a non-cryptographic system like the 
> DCCS) the fault tolerance issues are the same.  A failure in a 
> component CCS takes down that part of the system and anything that 
> depends on it, but not the whole distributed system.

That depends on whether the faulty component can allow an attacker to 
impersonate arbitrary other network addresses.


> Similarly a 
> failure in a component of a cryptographic distributed capability 
> system takes down anything that depends on that component and not the 
> whole distributed system.

Assuming the standard public key scenario for crypto caps, e.g., as used by 
your later work or by YURLs, then a single faulty component will still find it 
infeasible to impersonate arbitrary other platforms -- because it doesn't know 
their private keys.
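
As a rough illustration of that property (not the actual Pluribus or YURL 
protocol -- this sketch assumes the third-party Python "cryptography" package 
and made-up message contents):

# Sketch: each platform holds a private signing key; others hold the
# corresponding public key. A faulty platform cannot forge traffic
# "from" platform A without A's private key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

platform_a_key = Ed25519PrivateKey.generate()   # known only to A
platform_a_pub = platform_a_key.public_key()    # known to everyone

message = b"some capability invocation from A"
signature = platform_a_key.sign(message)        # only A can produce this

# Anyone holding A's public key can check the message really came from A:
platform_a_pub.verify(signature, message)       # succeeds silently

# A faulty platform lacking A's private key can only produce garbage:
forged = Ed25519PrivateKey.generate().sign(message)
try:
    platform_a_pub.verify(forged, message)
except InvalidSignature:
    print("forgery rejected")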

If a possible fault could give the faulty component arbitrarily strong 
guessing powers, then each component would indeed be a central point of 
failure for the system as a whole. The crypto algorithms and protocols 
themselves are often truly central points of failure for crypto systems. If a 
fault in one of these is discovered by future cryptanalysis, this does make 
everyone vulnerable. My statements about crypto caps lacking central points of 
failure depend on my assuming away these possibilities, as in section 7.1:

# Pluribus relies on the standard cryptographic assumptions that large random
# numbers are not feasibly guessable, and that well-accepted algorithms are
# immune to feasible cryptanalysis.
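
Concretely, the "large random numbers" here are the unguessable bit strings 
(Swiss numbers) that designate capabilities. A minimal sketch of the 
assumption, in Python (the 160-bit size and the names are just illustrative, 
not Pluribus's actual wire format):

import secrets

# A capability is designated by a large random Swiss number. An attacker
# who must guess it faces a search space of 2**160 possibilities, which
# we assume is infeasible to enumerate.
swiss_number = secrets.token_bytes(20)   # 160 bits from a CSPRNG

def grants_access(presented: bytes) -> bool:
    # Constant-time comparison, so timing doesn't leak the secret.
    return secrets.compare_digest(presented, swiss_number)

print(grants_access(swiss_number))            # True  -- the rightful holder
print(grants_access(secrets.token_bytes(20))) # False -- a blind guess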


> Can you clarify (perhaps only for me - 
> apologies if so) what you mean in the above?  Does this issue come 
> back to our discussion of an "open" (can't depend on network 
> addresses) network vs. a network where network addresses can be 
> trusted - but still with no central point of failure?

Yes. Specifically, the issue comes down to my assuming away the possibility of 
both "a network where network addresses can be trusted [by means other than 
crypto]" and simultaneously "but still with no central point of failure". What 
arrangement do you have in mind that could provide both together?

Here's one I can imagine: The room housing platform A is guarded by A's owner, 
and likewise with B and C. The wire between A and B runs through corridor AB 
and is guarded by both A's owner and B's owner. Likewise with AC and BC. The 
building as a whole has no guards -- we only have guards on these individual 
rooms and corridors. If A is bad, B and C can still authenticate and 
communicate securely. This scenario does not use crypto and has no central 
points of failure. People have talked about using quantum entanglement to make 
wires tamper-evident, much as with our guarded corridors above. Whether this 
is actually feasible is well beyond my expertise.

So I think I agree that you can have both in theory. However, I don't think 
such possibilities are very practical. If you want a decentralized system of 
virtual secure wires, just use crypto. Once you do, these can be multiplexed, 
tunneled, and made to span great distances. Without crypto, they can't.


> From my perspective a transparent network extension of a CCS such as 
> the DCCS could be implemented without cryptography (as was done in 
> the DCCS paper) or with cryptography (e.g. as described in the 
> Managing Domains paper - though in a strictly network context) and 
> the fault tolerance would be the same.  You seem to suggest that such 
> implementations must necessarily be different regarding single points 
> of failure.

For the above reason. But having walked through the above "guarded corridors" 
scenario, I drop the "necessarily" claim.

-- 
Text by me above is hereby placed in the public domain

     Cheers,
     --MarkM

