[cap-talk] RE: OS security discussion, restricted access processes, etc. - back to basics - military classifications

Karp, Alan alan.karp at hp.com
Tue May 4 13:57:07 EDT 2004


> -----Original Message-----
> From: Jed Donnelley [mailto:jed at nersc.gov] 
> Sent: Monday, May 03, 2004 8:01 PM
> To: Karp, Alan; Capability Talk
> Subject: RE: OS security discussion, restricted access 
> processes, etc. - back to basics - military classifications
> 
> 
				(snip)
> 
> At this point I think we have a common understanding of the 
> mechanisms but 
> are struggling to get common terminology.  Of course at some 
> level all that 
> goes down a network cable is bits.  Above that level at some 
> point (say at 
> the presentation level or application level) those bits may 
> be interpreted 
> as an index (call it a descriptor if you like).

Yes, we're violently agreeing about concepts and confusing ourselves with terminology.  To me "capability as bits" means that anyone who presents the bits is granted the corresponding right.  That's the way Swiss numbers work for sturdy refs in E.  (Did I get that right?)  If the bits themselves can't be used by another party, then I'd call it something different.  

The e-speak Beta 2.2, which I refer to as Client Utility (CU) to avoid confusion with the product version, used designators.  These are indices into a CL specific to each process, as done by DCCS.  The e-speak product used SPKI attribute certificates tied to a private key, and you've described something similar by combining the bits of a capability with a private key.  I think these represent a third category that I don't have a good name for.  This category has the desirable properties that capabilities can be delegated without involving a third party and that learning the bits does an adversary no good.  In "Managing Domains" you describe yet another system, in which there is an access list for each capability listing which processes may use it.
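To pin the terminology down, here is a toy sketch of the first two categories (my own Python, with invented names; not the actual E, DCCS, or CU code).  The third category, bits bound to a private key, would add a signature check to the first.

import secrets

# "Capability as bits": anyone who presents the bits is granted the right.
class BitsTable:
    def __init__(self):
        self._by_bits = {}                  # unguessable string -> resource

    def mint(self, resource):
        cap = secrets.token_hex(16)         # the capability *is* these bits
        self._by_bits[cap] = resource
        return cap

    def invoke(self, presented_bits):
        # No check of who is presenting; knowing the bits is the right.
        return self._by_bits[presented_bits]

# "Capability as designator": an index into a C-list kept per process.
class DesignatorTable:
    def __init__(self):
        self._clists = {}                   # process id -> list of resources

    def mint(self, pid, resource):
        clist = self._clists.setdefault(pid, [])
        clist.append(resource)
        return len(clist) - 1               # only meaningful for this pid

    def invoke(self, pid, designator):
        # The same integer presented by another process names something
        # else (or nothing), so learning it does an adversary no good.
        return self._clists[pid][designator]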

> 
> The idea of DCCS (as I'm sure we agree, just restating for 
> this terminology 
> discussion) was to make it appear that a descriptor based 
> capability was 
> available on one system on the network when the resource that 
> it pointed to 
> was serviced by another system on the network, with both 
> systems being 
> (identical) descriptor based capability systems.  The bits that were 
> communicated contained indexes into tables that you could refer to as 
> descriptors.  In that somewhat abstracted sense "descriptors" were 
> communicated.  At yet a higher level the intent is of course 
> to communicate 
> access rights (capabilities) in a generic sense - more the 
> subject of the 
> Managing Domains paper:
> 
> http://www.webstart.com/jed/papers/Managing-Domains/
> 

Yes, and that's what we did with CU.  A CU name is a capability designator that makes the right thing happen whether the target is local or remote.

				(snip)
> 
> The essence of an access list mechanism is that the resource server 
> remembers who (subject) has access to which (object) 
> resource.  When a 
> subject refers to a resource it can do so by any sort of 
> convenient name 
> (index, alphanumeric name, whatever) because the name isn't 
> what determines 
> whether access is granted but rather whether or not the 
> subject is listed 
> in the access list as having a right to the resource.  That's 
> exactly what 
> the DCCS mechanism is doing.  That's in fact what makes it so 
> the index 
> that's communicated across the network cannot be used as a right if 
> "discovered" (eves dropped, snopped, sniffed, whatever).

I see from Figure 4 of "Managing Domains" what you mean by "access list".  For each resource, you have a list of the processes that may exercise its capability.  That's different from a Capability List (CL) that prevents forgery by keeping a separate list for each process in the TCB.  CU used this approach.  Other versions keep the CL in the process address space and prevent forgery by cryptographic means.  You appear to have added another property to the classic definition by including a private key in the forgery prevention mechanism.

> 
> The only rights communicating mechanism that I know of that 
> is subject to 
> an eavesdropping (or memory/dump snooping, looking over 
> shoulder, etc.) 
> threat is the pure password mechanism.

Agreed.

> 
> The Managing Domains paper (above) tried to stand back a bit 
> (further than 
> we did in our implementations actually) and look at the 
> possible protocols 
> that can be used on a network to communicate access rights.  
> I referred to 
> such generic access rights as "capabilities" in that paper 
> even if they 
> were communicated via an access list mechanism (as DCCS did).

I'm confused.  Looking at the DCCS and Managing Domains (MD) papers seems to show different systems.  Figure 4 of DCCS shows the process indexing into a CL specific to the process.  Figure 3 of MD shows an access list that enumerates the processes allowed to use a specific capability.  (The list isn't labeled, but that's what I assume it is.)  These seem quite different to me.  One has a table for each process listing the capabilities it has.  The other has a table for each capability listing the processes that can use it.  It's almost like rows versus columns of the access matrix.
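To restate the rows-versus-columns point concretely (toy data and names of my own, not either paper's structures):

# One access matrix, two ways to store it.
matrix = {
    ("alice", "foo"): {"read", "write"},
    ("bob",   "foo"): {"read"},
    ("bob",   "bar"): {"read", "write"},
}

# Rows: a capability list per process (the DCCS Figure 4 flavor).
c_lists = {}
for (proc, res), rights in matrix.items():
    c_lists.setdefault(proc, {})[res] = rights

# Columns: an access list per capability/resource (the MD Figure 3 flavor).
access_lists = {}
for (proc, res), rights in matrix.items():
    access_lists.setdefault(res, {})[proc] = rights

print(c_lists["bob"])        # everything bob holds
print(access_lists["foo"])   # every process that may use foo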

> 
> In fact I think some of you might find it interesting to note 
> that the 
> Managing Domains paper points out a flaw in the DCCS 
> mechanism (without 
> pointing it out explicitly).  Namely when I discuss the "Reflection 
> Problem" there that the simple access list mechanism is 
> subject to, the 
> DCCS mechanism has that same flaw.  It is a bit more difficult to see 
> because it would require a system on the network that didn't play the 
> "game" according to the DCCS rules, but of course that's life on a 
> network.  It was certainly the intent of the DCCS mechanism 
> to protect 
> against such problems and it failed (though a relatively 
> minor fix would 
> correct the problem).  The problem arises when (in the DCCS 
> terminology) at 
> this point:
> 
> B must tell A, "I, who have access to your capability, want 
> to grant it to 
> host C." To do this, another message type, the "Give" 
> message, is used.
> 
> If the system B in fact doesn't have access (remotely) to the 
> capability 
> but C does, and B doesn't send such a "Give" message (which 
> in any case 
> would be rejected) but rather just sends a message to C 
> passing what it 
> claims is a right it owns (with a name that it would likely 
> have to snoop) 
> in, say, a store into a directory.  If the system B does have 
> the right to 
> the resource on A (even if not in the directory server), B 
> will be fooled 
> into putting that right where C asked.  C can then simply ask 
> for the right 
> back (e.g. fetch it from the directory) and it has illicitly 
> obtained the 
> right to the resource on A.

I'm sorry; I'm lost here.  Who is the attacker?  In the first sentence, you say that C has the right on A, but B does not.  In the last sentence, you say that C illicitly obtains the right on A.

Even though I didn't get it from the above, the section in MD does describe the problem.  As I understand it, Sneak discovers the bits in the capability by some underhanded means.  He (bad guys are always "he", aren't they?) then asks the Reflector to add him to the access list.  The Server honors this request, and Sneak gains access.  This attack seems to be related to Norm's confused deputy, but I'll let him comment on it.
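If I've read the attack right, the pattern is roughly this (a toy sketch with invented names; the real protocol details are in MD).  The server checks only which host is asking, not whether that host meant to wield this particular right on the requester's behalf.

class Server:
    def __init__(self):
        self.access = {"res-42": {"reflector"}}     # per-resource host list

    def give(self, from_host, resource, to_host):
        if from_host in self.access[resource]:      # identity check only
            self.access[resource].add(to_host)
            return "ok"
        return "denied"

class Reflector:
    """An honest host (e.g. a directory) that stores or forwards whatever
    'capability name' a client hands it."""
    def __init__(self, server):
        self.server = server

    def store_for(self, client, resource_name):
        # The Reflector thinks it is handling the client's own right, but
        # the Server only sees that the request came from the Reflector.
        return self.server.give("reflector", resource_name, client)

server = Server()
reflector = Reflector(server)
# Sneak snooped the name "res-42" and holds no right of his own...
print(reflector.store_for("sneak", "res-42"))       # "ok"
print("sneak" in server.access["res-42"])           # True: Sneak has access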

> 
> I seem to recall discussing this problem with Charlie Landau 
> and/or Norm 
> Hardy at the time I was working on the Managing domains 
> paper.  In fact I 
> forget if that problem was noted in the previous paper:
> 
> J. E. Donnelley and J. G. Fletcher, Resource Access Control 
> in a Network 
> Operating System, Proceedings of the ACM Pacific '80 Conference, San 
> Francisco, November 1980, pp. 115-125.
> 
> that focused on these topics and, in particular, first shared 
> the public 
> key mechanism for communicating resource access that John 
> Fletcher and I 
> worked on together.
> 
> I'll be interested to know if you avoid that reflection 
> problem in your 
> mechanisms Alan.

Such an attack wasn't even expressible in CU.  A capability was sent from one party to another by sending it as a parameter in a message, as done in an object capability system.  The recipient would get an entry in its namespace that it could use to access the capability.  All messages were mediated by the TCB.  If Sneak used a name that didn't appear in its namespace, the TCB would reject the message.  Hence, it did Sneak no good to discover how some other process referred to a capability.
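A minimal sketch of that mediation (my own invented API, not e-speak's): every message goes through the core, and a name is resolved only in the sender's own namespace.

class Core:
    def __init__(self):
        self.namespaces = {"alice": {}, "sneak": {}}   # process -> namespace

    def grant(self, proc, local_name, resource):
        self.namespaces[proc][local_name] = resource

    def invoke(self, sender, name):
        ns = self.namespaces[sender]
        if name not in ns:
            # Sneak presenting a name copied out of Alice's namespace lands
            # here: the characters of the name are worthless to him.
            raise PermissionError(f"{sender!r} holds no entry {name!r}")
        return ns[name]

core = Core()
core.grant("alice", "file7", "the payroll file capability")
print(core.invoke("alice", "file7"))                # works for Alice
try:
    core.invoke("sneak", "file7")                   # rejected for Sneak
except PermissionError as e:
    print(e)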

				(snip)
> 
> Exactly, an access list (though the "CU" reference escapes me).

Not an access list for each capability but a capability list for each connection.
 
				(snip)

> >CU did "proxying by default".  If Alice gave Bob a 
> capability, and Bob 
> >forwarded it to Carol, all of Carol's requests on that 
> capbility went 
> >through Bob.
> 
> Ugh.  That certainly doesn't scale.

Actually, it scaled quite nicely.  We did extensive measurements with 500-600 logical machines running on a cluster of 50 physical machines.  SpinCircuit was managing more than 5,000,000 resources with several hundred users at its peak.

In many ways brokering increased the scalability.  CU was connection based.  If each machine could handle a limited number of connections, say 100, then two levels of brokering allowed a single machine to provide access to users on 1,000,000 machines.  Latency, more than scalability, was the issue.  The Web E-speak interface used servlets running in Apache to handle tens of thousands of e-speak users with a single connection to the logical machine.
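For completeness, the fan-out arithmetic behind that claim (the 100-connection figure is just the illustrative limit from the paragraph above):

connections_per_machine = 100          # illustrative connection limit
levels_of_brokering = 2
# The service machine reaches 100 brokers, each of which reaches 100
# sub-brokers, each of which serves 100 users' machines.
reachable_machines = connections_per_machine ** (levels_of_brokering + 1)
print(f"{reachable_machines:,}")       # 1,000,000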

				(snip)
> 
> As you say Alice gains no security.  Why set up a mechanism 
> (protocol) that 
> even introduces such a notion, let alone makes it the default?

There are many reasons, most of them coming from our concerns with enterprise systems.  

1. Bob may be a certified business partner, and Carol doesn't meet Alice's corporate requirements.  
2. Carol may not protect her machines well enough for Alice to risk a direct connection to her.
3. Carol may not have a payment scheme or credit rating that Alice will accept.
4. Bob may be inside Alice's firewall while Carol is outside.
5. Alice may be controlling the number of requests per second from Bob. 
6. Alice may not want to expend the resources for a new connection.

None of these is compelling by itself.  In fact, proxying by default fell out of the way we extended the model across the network.  We would have had to work hard to shorten paths by default.  In retrospect, proxying by default worked well for our customers.

> 
> >There were many reasons she might say no.  For example, she 
> >might not have 
> >any ports available at the moment.
> 
> Ports?  Yet another place where capabilities as data 
> (remember, not just 
> passwords) eliminates a resource issue/complexity.

I mean TCP/IP ports.  We were connection based and set up a new process for each connection by default.  These heavyweight connections meant that controlling how many we'd accept was a consideration.

> 
> >DCCS appears to work like CU in that a capability is only 
> >meaningful on a 
> >particular connection.
> 
> In DCCS the "name" that's passed across the network is an 
> index into a 
> table of capabilities supported for remote access.  For each such 
> capability in its table the server had to remember which hosts on the 
> network had the right to access the capability.

That's an access list, but Figure 4 of DCCS shows a table for a specific process.  That's a capability list.  CU also used a capability list.  Each client (process) of the core (TCB) had a capability list.  The core only had to know which list to look in.

> 
				(snip)
> 
> As here I presume:
> 
> http://www.erights.org/elib/capability/duals/boebert.html
> 

Yes.  I did a search for Boebert on erights.org, but I must have misspelled it.

> >I have a hard copy that I can fax to you, but the argument 
> >is simple.  If 
> >capabilities are bits, a "secret" process can write-up to a 
> >"top secret" 
> >process the capability to write at the secret level.
> 
> As I read that paper (just now, it's quite short) it didn't 
> refer in any 
> way to the capability implementation.  It seems in fact to refer most 
> directly to traditional capabilities as descriptors 
> implementations.  It 
> does of course assume the ability to store a capability into 
> some other 
> "object" (e.g. a directory) which follows the military 
> classification rules.

I believe it is describing capability as bits, at least as I use the phrase.  (See earlier in this note.)

> 
> It's been a while since I've thought much about such 
> traditional military 
> protection level mechanisms (unclassified, secret, top 
> secret, etc.).  We 
> of course implemented such mechanisms in the capabilities as 
> data system 
> that we ran at LLNL for many years (NLTSS: 
> http://www.webstart.com/jed/papers/Components/ ).  The way 
> this was handled 
> in that system was that the military protection level was a 
> property of 
> data (bits).  If a process was running at a level (e.g. 
> secret) and it 
> tried to send a message to a process running at a lower level 
> it wasn't 
> allowed to do so by the network (message system).  If a 
> process running at, 
> say secret, tried to write data into a file labeled 
> unclassified, the file 
> server (which ran at a very high level so that it would 
> receive messages 
> from any process) would refuse the request.  So in the 
> scenario that Mr. 
> Boebert describes on our system, while a potentially 
> malicious program 
> acting on behalf of a user with low clearance could certainly:
> 
> 1.  store a RW capability to an unclassified file into an 
> unclassified 
> directory, and
> 
> 2.  the Trojan horse running in the high level process (e.g. 
> secret) could 
> read out the RW capability to the unclassified file from the 
> unclassified 
> directory, but
> 
> 3X. the high level process would be blocked by the file 
> server from writing 
> data into the file (even though the capability is RW) because 
> the data is 
> secret and the file is unclassified.
> 
> Here is another case where I think the model is the problem.

Yes, this works, but Boebert is talking about an "unmodified capability machine", one that just honors requests if the capability is presented.  A "capability as descriptor" system, or one like you describe that combines the bits of the capability with a private key, avoids this problem because the top secret process can't use the written-up bits as a capability.
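For contrast with the unmodified machine, here is a toy version of the label check Jed describes above (my own sketch, not NLTSS code): the file server honors the RW capability but still refuses to store higher-classified data into a lower-classified file.

LEVELS = {"unclassified": 0, "secret": 1, "top secret": 2}

class FileServer:
    def __init__(self):
        self.files = {"f1": {"level": "unclassified", "data": b""}}

    def write(self, has_rw_capability, filename, data, data_level):
        if not has_rw_capability:
            return "no capability"
        f = self.files[filename]
        if LEVELS[data_level] > LEVELS[f["level"]]:
            return "refused: would write classified data down"   # step 3X
        f["data"] = data
        return "written"

fs = FileServer()
# The Trojan horse in the secret process presents a perfectly valid RW
# capability fetched from the unclassified directory, yet the write fails:
print(fs.write(True, "f1", b"secret payload", "secret"))

An unmodified capability machine, in Boebert's sense, would skip the label check and simply honor the capability.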

				(snip)
> 
> Are you referring to one way communication?  That is the "I" 
> above can send 
> to the "you" above but "I" can't receive from "you"?  I admit 
> we considered 
> such scenarios so obscure (imagine efforts at one way 
> communication on a 
> network.  I can send the data out but never get an 
> acknowledgement back) 
> that we didn't take them seriously (though they did come up 
> in my early 
> arguments, very much like they are coming up here).  I admit 
> it's a bit 
> difficult to proxy a rights access over a one way channel.  
> However, it 
> certainly isn't difficult to violate some rights intent over such a 
> channel.  E.g. if Alice gave the "I" above a capability to a 
> file, "I" can 
> certainly read out all the bits and send them to "you" in 
> violation of 
> Alice's intent of not allowing "I" to share that read-only 
> file with "you".

Sorry for the funky "I" and "you".  It's a bad pattern I slip into.

You're right.  A one-way, outgoing channel can be used to send secrets, but it can't be used to proxy because it can't receive requests.  "Capabilities as bits," as I use the term, can still be leaked over such a channel.  If the covert channel is low bandwidth, the leaker may not be able to transmit very many secrets, but if capabilities are bits, the leaker can transfer them to someone else who can then use them over an overt channel to get lots of secrets.

> 
				(snip)
>
> I'm sorry, but I don't understand the above example.  While 
> with a bit of a 
> stretch I can imagine how a capability could represent an 
> access right to a 
> can of tuna without itself granting the right to open the 
> can.  I find it 
> difficult to imagine a capability representing access to a 
> can opener that 
> could be used to open the tuna can.  Perhaps my difficulty 
> arises from the 
> fact that a can of tuna and a can opener are both physical 
> objects.  E.g. 
> they could be located on opposite sides of the globe.  If my 
> can of tuna 
> capability gives me rights to do things to the can of tuna (e.g. 
> communicating with the tuna can server) and my right to the 
> can opener 
> gives me rights to the can opener (e.g. communicating with 
> the can opener 
> server), I still find it a stretch that I could make a 
> request of either 
> (e.g. passing the right to the other) to get the can of tuna opened.
> 
> In any case I don't see how proxying would make any 
> difference.  Perhaps 
> you could elaborate on this example so I can understand it better?

I guess I'm too cute for my own good.  I was just trying to give an example of rights amplification.  Let's take a more concrete example, "combine foo bar".  Alice has access to foo and qux; Bob has access to bar and qux.  Alice can combine foo and qux; Bob can combine bar and qux.  Neither can combine foo and bar.  If capabilities are specific to a process by some means, either because they are designators or because they are tied to a private key inaccessible to the process, then Carol can't combine foo and bar even if both Alice and Bob proxy for her.  If capabilities are just bits not tied to a process, then Alice can transfer the right to foo to Carol, and Bob can transfer the right to bar to Carol.  Carol can then combine foo and bar.
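Here is the same example in toy code (my own sketch; "combine" is the hypothetical amplifying operation):

# Per-process capabilities: the combiner checks what the requesting
# process itself holds, so proxying can't help Carol.
holdings = {"alice": {"foo", "qux"}, "bob": {"bar", "qux"}, "carol": set()}

def combine(requester, x, y):
    return {x, y} <= holdings[requester]

print(combine("alice", "foo", "qux"))   # True
print(combine("alice", "foo", "bar"))   # False: Alice doesn't hold bar
# Even if Alice and Bob both proxy for Carol, each proxy can present only
# what it holds itself, so foo and bar never meet in a single request.

# Capabilities as bits: Alice mails Carol the foo bits, Bob mails her the
# bar bits, and Carol presents both herself.
bits = {"foo": "cap-1f3a", "bar": "cap-9c4e"}
carols_bits = {bits["foo"], bits["bar"]}        # transferred out of band

def combine_bits(presented):
    return {bits["foo"], bits["bar"]} <= presented

print(combine_bits(carols_bits))        # True: the amplification succeeded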

> 
> --Jed http://www.nersc.gov/~jed/  
> 

I want to thank you for this discussion.  I haven't had to think this hard in quite a while.

________________________
Alan Karp
Principal Scientist
Technical Computing Research Group
Hewlett-Packard Laboratories
1501 Page Mill Road
Palo Alto, CA 94304
(650) 857-3967, fax (650) 857-7029
https://ecardfile.com/id/Alan_Karp
http://www.hpl.hp.com/personal/Alan_Karp

