[cap-talk] Re: "capabilities" as data vs. as descriptors - OS security discussion, restricted access processes, etc.

Jed Donnelley jed at nersc.gov
Thu Apr 29 17:55:26 EDT 2004


<this message is a continuation of a thread that was started off the cap-talk
list.  I hope that the earlier context will be posted to the archive soon>

At 03:54 AM 4/29/2004, Jonathan S. Shapiro wrote:
>On Wed, 2004-04-28 at 12:30, Jed Donnelley wrote:
>
> > However, if a subject is a process then I certainly think
> > the notion (as I understand it above) of "safety" can be enforced in an
> > access list based system.  Namely, the server of the resource (who
> > knows who has access) can simply deny any request to add a process
> > to an access list unless the requesting process is on the access list.
> >
> > Not so?  Am I missing something there?
>
>Yes. You are ignoring the 'own' right. One of the defining
>characteristics of an ACL system is that the holder of the own right can
>arbitrarily assign rights to others. Any server that doesn't honor this
>isn't implementing the ACL model.

And presumably a process without the "own" access right is not allowed
to assign the right to others - directly.  This property violates what
I and my colleagues termed the "inalienable right to communicate access":

http://www.webstart.com/jed/papers/Managing-Domains/#s6

In my opinion such an effort at a "right" is nonsense.  It simply
forces processes that wish to share rights to do so inefficiently by proxy.
I argue that any such effort is counterproductive.

If the only additional value in the capability model were something to
do with more effectively limiting the ability to communicate owned
rights, then I would see negligible additional value in the capability
model.  For example, I consider this thread:

http://www.eros-os.org/pipermail/cap-talk/2003-December/001522.html

nonsensical.  However, the capability model provides much more than
any putative cleanliness in giving a false sense of security by
trying to stop processes from doing something that they can do anyway.
I think some of the value in the capability model can be seen by comparing
the access list mechanisms as they can be implemented in a network
(e.g.: http://www.webstart.com/jed/papers/Managing-Domains/#s10 ),
and the awkward problems they run into, with the more purely
"capability" models - some form of which would have to be used to extend
a descriptor-based capability system to a network anyway.  Beyond that,
I admit that I prefer to have the rights that a process holds located, to
my mind more clearly, with the process itself rather than spread among all
the servers and tied to a "subject" notion - a notion that seems inevitably
to get confused between human rights and process rights.

>Yes, we might adapt the ACL model in any number of ways to repair this
>sort of problem, but the result is no longer the ACL model. I'm not
>saying that changing the model is bad. I'm only saying that we shouldn't
>be confused about what to call it.

I have no desire to try to adapt the ACL model.  My desire (as stated
again and again in this thread) is to address the user interface issue.
As noted previously I believe access rights models have been debated,
worked and reworked, mulled over, published, tweaked, etc., etc., etc.
nearly since the beginning of computing.  All that has done nearly nothing
to clean up the access rights model for people or even for processes in
mainstream systems.

What I would most like to see are two steps that I would view as positive:

1.  Definition of what you might call an Internet capability model.  This
could be something along the lines of:

http://www.webstart.com/jed/papers/Managing-Domains/#s13

though I think modern encryption technology would suggest a
rework.  The basic idea would be to define a protocol for sending
blocks of bits that:

   a.  Can securely represent the right to do anything that a service
        (server) process might choose to make available.

   b.  Can be communicated securely - hopefully without contacting
        the service process except, of course, when it is directly the source
        or destination of the rights communication.

   c.  Is safe from eavesdropping.  That is, the form that the capability takes
        when it's in, say, a process's memory space or in an email message,
        cannot be used by any entity other than the owner of the memory
        space (a process) or the email (presumably a person).

   d.  Extra points for including a rights reduction mechanism that doesn't
        require permission from the server.

Even such a mechanism, used just for limiting access to Web URLs, I think
would be wonderful!  Can you imagine the value of being able to (essentially)
just copy and paste a "capability" into, say, an email and communicate
not just information, but also a right?  Even that much (avoiding the
crusty-to-the-max passwords and other such access control mechanisms)
seems almost beyond my wildest dreams.
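
To make (a) through (d) above a bit more concrete, here is a minimal
sketch of one way such a "block of bits" capability could be built.  This
is purely my own illustrative assumption (an HMAC over a server-held
secret, with rights reduction done by hashing), not a scheme defined in
the Managing-Domains paper or in NLTSS, and the eavesdropping protection
of (c) would have to come from carrying the token only over encrypted
connections:

import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)  # held only by the service process


def mint_write_cap(resource_id: str) -> str:
    """Server mints a read/write capability for one resource (item a)."""
    return hmac.new(SERVER_SECRET, f"{resource_id}:rw".encode(),
                    hashlib.sha256).hexdigest()


def reduce_to_read(write_cap: str) -> str:
    """Holder-side rights reduction (item d): whoever holds the write
    capability can derive the read-only capability locally, without
    asking the server, by hashing it once more."""
    return hashlib.sha256(write_cap.encode()).hexdigest()


def server_check(resource_id: str, presented: str, right: str) -> bool:
    """Server-side check on each request: recompute and compare (item b:
    the server is only involved when it is actually asked to act)."""
    write_cap = mint_write_cap(resource_id)
    if right == "write":
        return hmac.compare_digest(presented, write_cap)
    if right == "read":
        return hmac.compare_digest(presented, reduce_to_read(write_cap))
    return False

A "Web URL capability" would then just be such a token pasted into a URL
(e.g. https://example.org/cap/<token>, a made-up address) so that copying
the URL into an email communicates the right itself, not merely a pointer
to it.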

I believe a mechanism like #1 would prove useful in its own right and might
help to encourage similar mechanisms down at the process level.
However, a mechanism like #1 would not - even if available at the process
level, as we had in our NLTSS system:

http://www.webstart.com/jed/papers/Components/

- solve the problem of getting principle of least privilege access control
into mainstream computing.  To do that there is also a requirement for:

2.  A mechanism to allow people (users) to manage which of their rights
they want (choose) to share with the processes that act on their behalf.

It is this second mechanism that was the initial motivation for starting this
thread.  I believe the user interface issue has been almost entirely neglected,
even in systems with the internal mechanisms available to support principle
of least privilege access control.  Such POLA control is needed to make any
progress at all in limiting the problem of Trojan horses (e.g. people executing
email attachments, things like macros in Word and similar files, or even
executables downloaded remotely by a local Web browser <though of course I
follow all the "sandbox" mechanisms, I largely discount them>).  Such control
requires some sort of internal mechanism in the system to limit
the rights of running processes to something other than just the full rights
of a "user".  I believe an internal capability mechanism is the appropriate
way to do such restrictions, but I also believe that the user interface
issue is largely independent of the mechanism to restrict the rights
that a process has - so I try to stay agnostic on the internal rights
restriction mechanisms in promoting #2.  I do believe that any effort
to implement #2 will naturally drive people to an internal capability
model.  For example, I don't think a mechanism like chroot is adequate
for any reasonable user interface implementation.

In some sense I see getting to a POLA environment in mainstream
systems as a chicken-and-egg problem.  Without a suitable
rights restriction mechanism the user interface isn't needed or of value.
However, without an effective user interface a rights restriction mechanism
is also of very limited (internal and therefore largely invisible to users) 
value.

I'm hoping that by getting some work done and made visible (e.g. published)
on a user interface (e.g. it could be for an existing capability system, but
it should be reasonably compatible with some mainstream user interface)
some impetus would be provided to fold suitable rights limitation
mechanisms into mainstream OSs.  Such a mechanism could even
be based on #1, though of course a fair amount of library work would
have to be done to make it work (especially efficiently with access to
local resources).

> > We have had systems that support seemingly effective restricted
> > access mechanisms (albeit not mainstream) for many years.
>
>I disagree. We had allegations about such systems, but we didn't
>actually know whether these allegations were true. In many cases the
>systems themselves had impractically bad performance.

I can't speak for too many systems, but the mechanisms in
NLTSS (http://www.webstart.com/jed/papers/Components/ )
were apparently effective as that system ran in production for
6-8 years at LLNL (though we did do a fair amount of performance
optimization to get it into production).  I don't seem to recall
hearing that performance was a limiting factor with KeyKOS -
though perhaps Norm or Charlie should speak for that system.
Even with the RATS system:

[Lan75]  C. R. Landau, The RATS Operating System, Lawrence Livermore 
Laboratory, Report UCRL-77378 (1975).

(the first capability system I worked on) I don't recall performance being 
a substantive issue.

> > However, I haven't yet seen even an effort to develop
> > a user interface suitable for such systems.  Do you feel that is
> > something that still needs to wait?  Wait for what?
>
>Jed, your comments on user interfaces mostly convince me that you
>haven't thought enough about user interfaces and/or you aren't up to
>speed on what has been done already.

Great!  If there is progress in this area already that I am unaware
of I'd be delighted to learn about it and perhaps support it for further
work.

>Let me suggest that you get a copy
>of the DarpaBrowser from MarkM and MarcS and play with it for a few
>minutes.

I haven't yet found the browser itself, but I did take time to read
this paper on the topic:

http://www.combex.com/papers/darpa-report/darpaBrowserFinalReport.pdf

There is a lot of good stuff in there, in my opinion.  In fact, from that
paper, this:

"Powerbox module that manages authority grant and revocation on behalf of a
confined application. The powerbox launches the app, conveys the authorities
endowed at installation, and negotiates with the user on the application's 
behalf
for additional authorities during execution."

sounds almost exactly like what I am looking for.
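
To make sure I'm reading that description correctly, here is a toy
sketch of what such a powerbox seems to amount to.  This is my own
reconstruction in Python (the real CapDesk/DarpaBrowser code is written
in E), and all the names are my assumptions:

class Powerbox:
    def __init__(self, endowed_authorities, ask_user):
        # Authorities granted at installation time, keyed by name.
        self._authorities = dict(endowed_authorities)
        # ask_user(description) returns an authority object, or None
        # if the user refuses the request.
        self._ask_user = ask_user

    def launch(self, app):
        """Start the confined application, passing it only this powerbox."""
        app.run(self)

    def endowed(self, name):
        """Hand the application an authority it was endowed with, if any."""
        return self._authorities.get(name)

    def request(self, description):
        """Negotiate an additional authority with the user at run time.
        The application never gets more than the user explicitly grants."""
        return self._ask_user(description)

The application itself starts with no authority at all; everything it can
do comes either from the endowments or from a request the user approved.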

Regarding the browser application, I think there may be aspects of
it that seem to me a bit overdone.  E.g.,

'One of the security goals was to prevent the renderer from displaying any
page other than the current page specified by the browser. While capability
confinement was trivially able to prevent the browser from going out and
getting URLs of its own choice, there was one avenue of page acquisition
that required slightly more sophistication to turn off: if the renderer were
allowed to have a memory, it could show information from a previously-displayed
page instead of the new one. Therefore, the renderer had to be
made "memoryless".'

seems a bit obscure to me and potentially distracting from the main prize.

I don't care if my browser accesses URLs that I don't specify to it.
I want it to be easily able to link through pages that it fetches to
other pages.  I even value its ability to cache old content.  I don't even
care if it sends old content to new URLs (though some might).  I
see effort spent in that area as obscuring what I feel should be the
main focus: namely, stopping the browser (or any other application
I run) from being able to access anything not needed for its work.

I feel a Web browser needs to be able to access the Internet
and it needs to be able to display what I ask of it.  It is at those
few times when it needs more access (e.g. to save a file to my
file system) that it seems to me the interesting issues come
in (the issues I've called rights extension earlier in this thread).

This text from the above document:

...The goal was to prevent the renderer from gaining authorities, such as
the power to reset the clock or delete files...

seems about right to me.  I don't understand how that goal was interpreted
to include preventing the browser from fetching URLs of its choice
or accessing previously rendered content - though I admit I'm getting
into the area of nits here.

>Set aside, for a moment, whether the browser is an interesting
>application (it is, but MarcS will respond on that point separately).
>Ask yourself only whether the UI they built is a sensible user UI. Then
>ask yourself how much retraining you think your neighbor at work would
>need to use it.
>
>I believe that it *is* a reasonable UI.
>
>The problem is not that we need to redesign UIs. There is actually a lot
>of directly applicable stuff in current UIs.

Current widely used UIs?  If the UIs that you refer to are not widely
used then it would seem that one problem would be selling them
and getting them widely used.

>Yes, we will tweak here and
>there. The real problem is simply to use what we have.

What you mean "we" pale face?  I see it as a quite significant challenge
"selling" any potentially needed UI change and getting it into widespread use.

> > Firstly, I would like to distinguish a class of programs - namely those
> > that initialize or otherwise communicate rights to other programs
> > at the level of process initiation.  I believe that today this is, 
> fortunately,
> > a rather limited set of programs/code.  This is the area where I
> > believe focus should be, at least initially, directed.
> >
> > For these programs we certainly do need some sorts of tools to allow
> > them to specify the rights granted to the programs they initialize and
> > to specify any rights that should be granted to programs that "request"
> > them.
>
>I used to believe this, and I no longer do. The problem is that it is
>too complicated to be manageable.
>
>In reality, the only practical way out of the box is to provide newly
>instantiated programs with a standard environment. This environment
>should consist of mediating agents. Thus, MS Word has access to my whole
>file system, but only by way of my agent that implements the Open dialog
>box. I have to agree to let it open a file.

I think that's fine.  From my perspective you have shifted all the
granting of access rights to what I referred to as "rights extension".
That is, I would say that MS Word doesn't actually have access
to any files (if I understand you above).  It only has the right to ask
(through the Open dialog box) for access to files.

Are we getting closer on terminology?  With the above exchange I
finally start to feel that this discussion is getting somewhere.
With a typically understood UI (e.g. Unix or Windows) an initialized
process can open a file without going through an open file dialog
box - much less one that is in some sense external to the process
(at least possessing additional rights).

If, for example, MS Word wanted to write some hundreds of
files as a result of one command (e.g. a command to generate
HTML for a Word document), how would you grant it that right?
I think I would have it ask for write (insert) access to a directory,
but I would be interested to hear your view.
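
To sketch the directory idea (reusing the toy powerbox above; the names
and the stubbed-out dialog are my own assumptions, not anything MS Word
or CapDesk actually does), the user approves one folder once and the
application gets a capability whose writes cannot escape that folder:

import os


class DirectoryWriteCap:
    """Write access confined to one granted directory tree."""

    def __init__(self, root):
        self._root = os.path.realpath(root)

    def write_file(self, relative_name, data):
        path = os.path.realpath(os.path.join(self._root, relative_name))
        # Refuse anything (e.g. "../elsewhere") that escapes the grant.
        if not path.startswith(self._root + os.sep):
            raise PermissionError("outside the granted directory")
        with open(path, "wb") as f:
            f.write(data)


def save_as_html(powerbox, pages):
    """The app asks once; every generated file goes through the one cap.
    `pages` maps output file names to their contents (bytes)."""
    cap = powerbox.request("Choose a folder for the exported HTML files")
    if cap is None:
        return  # the user declined, so the app simply has no write right
    for name, content in pages.items():
        cap.write_file(name, content)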

>Similar patterns apply in most other cases.
>
>These cases aside, the answer to all other requests for authority is
>"no".

It seems to me that our views on this are pretty close - except that
I think I see a lot (!) of selling as being required before such UI mechanisms
have any significant practical impact.

Also, I'm not ready to give up on command line interfaces (given the
continuing value they seem to provide for scripting, etc.).  With a command
line interface a "line" of text is interpreted, and that interpretation
may also be automated.  How do you propose to deal with that situation
and still restrict the initialized process?
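
One direction I have in mind (entirely hypothetical - no existing shell
works this way, and the spec format here is my own invention) is to attach
a small argument spec to each command so that the shell itself can parse
the line and grant only the named rights:

COMMAND_SPECS = {
    # command: (indices of read-only arguments, indices of writable arguments)
    "copy": ((0,), (1,)),     # copy <source> <destination>
    "wordcount": ((0,), ()),  # wordcount <file>
}


def launch_with_least_privilege(command, args, grant_read, grant_write):
    """Parse the command line and hand the new process only the named rights.
    grant_read/grant_write are functions that mint a capability for a path."""
    read_idx, write_idx = COMMAND_SPECS[command]
    capabilities = {}
    for i in read_idx:
        capabilities[args[i]] = grant_read(args[i])
    for i in write_idx:
        capabilities[args[i]] = grant_write(args[i])
    # The process would then be started with `capabilities` as its entire
    # authority - it never inherits the user's full rights.
    return capabilities

Since a script is just the same lines fed from a file, the same parse would
cover the automated interpretation case as well.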

> >...even in capability systems
> > I'm not aware of mechanisms that make it "natural" for people sitting
> > at workstations to initialize applications with just the rights they need
> > to perform the desired task.  Are you?
>
>Yes. Go try the DarpaBrowser.

If I'm understanding the terminology there perhaps I should focus more
(initially at least) on what they refer to as "CapDesk"?  Isn't that more
analogous to what I'm looking for?  Now that I understand what they are
trying to protect against in terms of a Web browser "renderer" I think I can
better appreciate the problem they are trying to solve.  One aspect of that
interface, however, is that it seems pretty much free of user interaction (I'm
starting to speculate quite a bit).  I'd like to focus on where users actually do
specify rights restrictions (either implicitly with command syntax or
explicitly - e.g. as with the file open box that you referred to above).

I find it appalling that Unix programmers still manage access rights by
setting bits governing file/directory access by other users/groups.  With
Windows it is at least as bad - but at least I'm not exposed to it...

>Security is as much about reinforcing and appropriately constraining
>programmer behavior as it is about protection systems. It's not enough
>to have a protection model that works. You have to get it used
>effectively.
>
>In answer to your question, my problem rests on the observation that if
>a programmer can violate an abstraction boundary they will do so in the
>interests of optimization or ease of development. For this reason, it is
>important to enforce type safety on capabilities. Capabilities as data
>do not enforce type safety.

Why do you say so?  I see the issue of "type safety" as a language
issue internal to a process.  It seems to me that capabilities as data
can be safely typed as easily (at least - I would say more easily in that
capabilities as data are more comparable to other types that languages
typically protect) as can capabilities as descriptors.  I'm not trying to
say that enforcing (in some sense anyway) type safety isn't of value
(though we might disagree about how much), but I see it as
qualitatively different from protection between domains.
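
As a toy illustration of that claim (my own example; in a statically
typed language the same check would happen at compile time rather than
at run time), a capability that is "just bits" can still be wrapped in a
language-level type so that ordinary code never handles the raw token:

class FileReadCap:
    """Typed wrapper around a raw capability token for reading one file."""

    def __init__(self, raw_token):
        self._raw = raw_token  # the bits; kept private by convention

    def __repr__(self):
        return "<FileReadCap (token hidden)>"


def read_via(cap, fetch):
    """Accepts only the typed capability, not an arbitrary string."""
    if not isinstance(cap, FileReadCap):
        raise TypeError("a FileReadCap is required")
    return fetch(cap._raw)  # the raw bits cross the wire only here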

> > From the viewpoint of the programmer they seem identical to me.
>
>You (and I) can probably enforce the necessary discipline on ourselves
>in both approaches. Your average Visual Basic programmer cannot. The
>problem is that things like email agents are (in the real world) written
>by average Visual Basic programmers. Simplification of kernel and
>network interfaces isn't even on the radar relative to this as an
>important issue.

You and others can spend your time working on Visual Basic programmers
of email agents if you wish.  I prefer to focus my attention on the exchange
of rights between separate process domains - whether processes within
a single computer system, between processes on separate computer
systems on a network, or even between people or people and processes.

> > >   2. Take steps that reduce the likelihood that your program will
> > >      get compromised, regardless of how much authority it holds.
> >
> > Again perhaps this is semantic, but when you say "that your program
> > will get compromised" it sounds to me like you are protecting it from
> > external attack.  Is that right?  Attack from who or what?  Do you mean
> > to protect it from other programs (processes) that it communicates
> > with?  In that sense I am with you.
>
>Certainly. Also attack from the network or from content that it
>interprets. If you look at real security vulnerabilities today, 85%+ are
>buffer overruns, and the majority of the rest are scripting errors.

Good point.  Isn't it sad that the state of programming practice
today is such that so many such vulnerabilities continue to show up?

Still, I wonder where you get your 85% number?  I was under the
impression that a significant fraction of system compromises were
from things like people executing email attachments (viruses).  I
certainly know anecdotally that among people I interact with (friends
and colleagues) such Trojan horse compromises are the most common
type of system vulnerability.  Do you believe that eliminating such
compromises would "only" reduce the number of systems compromises
(not system network vulnerabilities) by 15%?

> > >The key point on which we disagree is the practical feasibility of doing
> > >this in any current popular system API.
> >
> > You seem to be focusing on the programming interface while I am
> > focusing on the human interface...
>
>Yes. In part, I mostly think that the human interface problem is easier,
>because current interfaces are closer. But the bigger issue is that
>human interfaces only concern what a program *alleges* to do, where
>security vulnerability must be concerned with what a program *in
>principal* can do. I don't see human error as the driving source of
>security mistakes in the world today.

It isn't human error when somebody executes an email attachment.
That's exactly what they intended to do.  The problem is that they have
no way to effectively confine what the application can do!  Every time
I see such an attachment I face the dilemma of whether to execute it.
Of course I never do these days, but I would like to be able to
do so safely.  Wouldn't you?  Don't you see that as an important issue?
How is that supported in the safe UIs that you say currently exist?

>The driving source, in my view, is
>crap design facilitated by crap development facilitated by crap APIs.

I think we can agree on that much ;-)  I guess the question is where
to put our effort in trying to improve the situation.  In my view most
people (even many advanced computer people) are unable to even
conceive of mechanisms to limit the rights of processes that they
initialize beyond giving such processes all the rights that they as users
have.  Until we get past this mindset I don't believe any significant
progress will be made.  I believe that even just by showing people an
interface through which they could safely execute email attachments
some progress could be made.

> > It's true that I do believe that most
> > aspects of the programming interface (API) for today's most popular
> > systems (Windows, Unix) can be made to work (awkwardly I admit)
> > so as to provide principal of least privilege access control.
>
>Then I want to meet your drug dealer, and I want to know which rock
>you've been living under for the last decade. The *overwhelming*
>evidence based on real world events is that this just isn't true in any
>practical sense.

Hmmm.  After the discussion above I'm a bit surprised to hear you
think we are so far apart on this.  When you suggested above that
programs could just be initialized with essentially no access rights
and ask for such rights (e.g. through an open file dialog box), that
sounds to me like a UI-preserving approach.  I think the command
line interface may be substantially more difficult (where I suggested
parsers associated with application system specs), but still
doable.

I agree it's a difficult problem.  Still, I believe that if you give up on
efforts to preserve much of the existing user interfaces then you
limit yourself to relative obscurity indefinitely.

> > > > All include both information in the server of the resource as well
> > > > as information from the process requesting access.  One extreme
> > > > is essentially an access list mechanism where the server remembers
> > > > which processes (this assumes that it can know where requests
> > > > come from) have the rights to which resources.
> > >
> > >The fatal problem with this is that the responsibility for filtering
> > >must not live in the server. Placing it in the server supports denial of
> > >service.
> >
> > When you say "filtering" what are you referring to?  Are you referring
> > to the right to communicate?  Namely that no process should even
> > have the right to send a message to a server without some sort
> > of authorization (capability)?  In that case it would seem we are
> > back to discussing denial of service.
>
>This is precisely what I am talking about.

Then we can agree to disagree about the significance of that problem.

> > Even in a "system" that supports such restricted communication
> > I believe there is a need to provide essentially open services.  How
> > do you imagine Web services will be provided in such a system?
>
>I don't really care, because you are making a silly argument. Yes, there
>are open services in the world, and access to these services needs to be
>widely available. This, however, is the exceptional case. Not the one
>that wants to govern system design.

Here again I think perhaps we disagree.  I see essentially all
services in this category of being open to everybody.  Consider
a file server for example (a server that makes physical storage
space available in logical units that can be shared).  You can
certainly argue that the right to communicate to the file server
should be limited to processes that possess file capabilities.

I argue that there are so many such processes that such
communication may as well be open.  That is, the file server
must protect itself (as best it can) from resource-consuming
denial of service attacks anyway - even from the processes
that legitimately hold file capabilities.
If a person (or process) wants to launch a denial of service
attack, it isn't going to be prevented from doing so by its
inability to obtain a file (or other) capability.

Believing that communication limitations will protect
against such attacks is what I see as "silly."  I at least
see this denial of service protection issue as secondary
to the issue of directly protecting resources against unauthorized access.

Can we agree on that much?

--Jed http://www.nersc.gov/~jed/  


