[cap-talk] POLP v. POLA
jed at nersc.gov
Fri Nov 4 17:05:27 EST 2005
(bcc'ed to Jerry Saltzer, Mike Schroeder, and Frans Kaashoek to
protect their email addresses from Web archives - their choice for
future communication to the list)

Jerry Saltzer wrote:
>Thank you for the note and the pointers to the discussion thread. I
>have taken the liberty of forwarding your note to Mike Schroeder,
>the co-author of the 1976 paper, and also to Frans Kaashoek,
>coauthor of a textbook we are writing on design principles for
>computer systems. The textbook contains a chapter on security, in
>essence a modern version of the 1975 paper, so the topic of the
>discussion is of interest. I have also copied both of them on this note.
I very much hope that some integration can occur with the thinking
represented in the cap-talk community.
>I think that the concern that the discussion thread raises about the
>"principle of least privilege" comes from adopting a substantially
>narrower view of the meaning of "privilege" than the one that
>Schroeder and I had in mind in the 1975 paper. If you identify
>"privilege" as meaning just the current settings of some permission
>bits, then the principle points in a useful direction but it doesn't
>take you very far in that direction; you need to add something else.
>But if you take "privilege" to encompass permissions, authority, the
>ability to acquire authority, accountability, and anything else that
>a principal has and that an attacker could exploit, then the
>principle of least privilege offers significant useful guidance. It
>was in that spirit that the 1975 paper proposed the principle.
>As one example of this broader interpretation, I call your attention
>to the paragraph in the paper that reads:
>"For example, a user may be accountable for some very valuable
>information and authorized to use it. On the other hand, on some
>occasion he may wish to use the computer for some purpose unrelated
>to the valuable information. To prevent accidents, he may wish to
>identify himself with a different principal, one that does not have
>access to the valuable information--following the principle of least
>privilege."
This notion of "principal" (as defined in your paper):
"Since an association with some user is essential for establishing
accountability for the actions of a virtual processor, it is useful to
introduce an abstraction for that accountability--the principal. A
principal is, by definition, the entity accountable for the activities
of a virtual processor. In the situations discussed so far, the
principal corresponds to the user outside the system. However,
there are situations in which a one-to-one correspondence of
individuals with principals is not adequate. For example, a user
may be accountable for some very valuable information and
authorized to use it. On the other hand, on some occasion he may
wish to use the computer for some purpose unrelated to the
valuable information. To prevent accidents, he may wish to
identify himself with a different principal, one that does not have
access to the valuable information--following the principle of
least privilege. In this case there is a need for two different
principals corresponding to the same user."
seems to me to push one toward the general model that we see in
nearly all modern computer systems (specifically the Unix variants
and Windows), where programs run as "user"s. On the cap-talk list I
believe this model is referred to as the "ambient authority" model:
the authority that programs run with is defined by the "user"
environment they are identified with. I believe this model to be
essentially broken and to lead directly to the practical difficulties
that have become rampant with various forms of Trojan horse in
network-connected systems (e.g. computer viruses). Certainly it is
possible to run applications (e.g. a potential Trojan horse) as a
different "principal"/user. However, how is one then to identify, for
any given program, a principal/user with appropriate access rights
(which would seem to lead by recursive descent to the "authority" or
"privilege" for that principal/user)? And how does one set up such
dynamic principals when new applications are to be run?
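The contrast can be sketched in a few lines of Python. This is a
purely hypothetical illustration - the class and function names here
are my own, not from any real operating system:

```python
# Ambient authority: a program looks objects up by name, and the
# system checks against the invoking user's full set of rights.
# A Trojan horse run by the user can therefore name (and read)
# anything the user can.
class AmbientSystem:
    def __init__(self, user_files):
        self.user_files = user_files      # everything the user may access

    def open(self, name):
        if name not in self.user_files:
            raise PermissionError(name)
        return self.user_files[name]      # any user-reachable file


# Least authority: the caller passes the program only the specific
# objects it needs; the program has no way to name anything else.
def word_count(document):                 # receives exactly one "capability"
    return len(document.split())


system = AmbientSystem({"memo.txt": "four words right here",
                        "secrets.txt": "valuable information"})

# Under ambient authority a Trojan "word counter" could just as easily
# call system.open("secrets.txt"); under POLA it gets only the one
# document it was handed.
print(word_count(system.open("memo.txt")))  # -> 4
```

The point of the sketch is that under ambient authority the act of
naming is the act of acquiring authority, while under POLA authority
arrives only by being explicitly passed in.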
It seems to me that if one is to apply POLP/POLA to computing, then
each program (thread, active object - generally, any active subject)
must be run in its own domain with appropriately established access
rights. I further believe that the key enabling technology for such
appropriately restricted computing is the communication of access
rights (leading to privileges/authorities). For me the term
"capability" serves as a generic communicable access
right. Capabilities in this sense are the dynamic unit of
communicable authority. I believe an important contribution of the
Capability Myths Demolished paper
(http://srl.cs.jhu.edu/pubs/SRL2003-02.pdf) was to focus attention on
the dynamics of authority communication and away from a static model
of access control (e.g. rows for capabilities vs. columns for
ACLs). During my career there have been several times when I've
implemented what I refer to as "capability communication" using an
access-list implementation.
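As an illustration of the kind of scheme I mean, here is a hypothetical
Python sketch (names and classes are mine, purely for illustration) in
which communicating a capability is implemented underneath by adding a
matching entry to the object's access list:

```python
# Access-list substrate: object -> set of (subject, right) entries.
class ACLStore:
    def __init__(self):
        self.acl = {}

    def grant(self, subject, obj, right):
        self.acl.setdefault(obj, set()).add((subject, right))

    def check(self, subject, obj, right):
        return (subject, right) in self.acl.get(obj, set())


# Capability layer: an unforgeable, communicable right to one object.
class Capability:
    def __init__(self, store, obj, right):
        self.store, self.obj, self.right = store, obj, right

    def delegate(self, subject):
        # Communicating the capability updates the underlying ACL,
        # so later checks succeed for the new subject.
        self.store.grant(subject, self.obj, self.right)
        return self


store = ACLStore()
cap = Capability(store, "report.txt", "read")
cap.delegate("alice").delegate("backup-daemon")

print(store.check("alice", "report.txt", "read"))    # -> True
print(store.check("alice", "report.txt", "write"))   # -> False
```

The dynamics are capability-style - authority flows by passing the
capability object - even though the bookkeeping underneath happens to
be an access list.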
What's important from the viewpoint of the "principal"s involved is
that they be able to effectively establish appropriate authority for
any running program. In today's market-leading operating systems it
is simply not possible to do so. While there are some efforts to
patch POLP/POLA mechanisms onto existing systems - e.g. Plash for
Unix (http://plash.beasts.org/ ) or Polaris for Windows (
http://www.hpl.hp.com/personal/Alan_Karp/polaris.pdf ) - these cannot
really be effective until all programs run in an appropriately
restricted environment rather than with the ambient authority of a "user".
How do you see that being accomplished, Dr. Saltzer? I'm personally
embarrassed for our profession and disappointed at how little
we've been able to accomplish commercially in this area - despite
well-known working models (e.g. the many capability computing systems).
>Similarly, the later discussion in the paper of information flow
>control and non-discretionary access control identifies those
>concepts as an application of the principle of least privilege.
I'm sorry, but I don't see the "discretionary" or "non-discretionary"
nature of access control as relevant. To me what is important is the
privilege/authority that the finest-grained computing subjects run
with - and how that privilege/authority is established. Perhaps you
can explain to me (us?) how non-discretionary access controls can
help. To me such controls only seem relevant in a computing
environment where programs necessarily run with the shared
user/principal privilege/authority - which, as I have already argued,
is directly contrary to POLP/POLA.
>The discussion thread also seems to question whether or not the
>principle of least privilege is sufficient to assure security of a system.
Hmmm. I didn't pick up on any such question. Perhaps you could point
it out. To me POLP/POLA is as good as it gets. Of course, within
that principle one must balance practical concerns (efficiency,
etc.) against the fineness of access control, but it seems to me
that the base principle is as much as one can ask for - as we would
seem to agree from this:
>My take on design principles is that not only should they be
>construed broadly, they should also be viewed as guidance rather
>than absolutes. Most real system designs require trade-offs among
>design principles. The designer shouldn't assume that all he or she
>needs to do is follow the principles slavishly and everything will
>come out OK (in this case, with a secure system.)
>Please feel free to pass this note along to the discussion
>group. Mike and Frans may have different takes on this question, so
>don't be surprised if either or both of them also respond.
> Jerry Saltzer