[cap-talk] More Heresy: ACLs not inherently bad

Marcus Brinkmann marcus.brinkmann at ruhr-uni-bochum.de
Thu Sep 25 12:32:48 CDT 2008


At Thu, 25 Sep 2008 15:19:59 +0000,
"Karp, Alan H" <alan.karp at hp.com> wrote:
> Marcus Brinkmann wrote:
> 
> > > O The virus problem
> >
> > I know that the theory is that Unix is just as vulnerable to viruses
> > as Windows, and that attackers are just picking the lowest hanging
> > fruit.  However, although from an operating system design perspective
> > Windows and Unix are not very different, the socio-economic factors
> > are very different between Windows and GNU/Linux.  The virus problem
> > may be attributable to other causes than the failure to apply POLA at
> > a finer level than users.  I think we are lacking data for comparison.
> >
> A Linux version of the Love Letter virus would be as effective as it
> was on Windows.  Opening the attachment grants the programmer of the
> virus all the user's authority.  There is no fix because the system
> is running the way it was designed to run.  I believe the reason we
> haven't seen such attacks on Linux is the dearth of toolkits for
> generating them.

Doesn't the majority of people use webmail?  Running executable
attachments is no different from running executables from the
browser---we lose almost nothing by making it difficult or even
impossible.  There just is no need for it.  In a free software
environment, executables come from your distributor, not over email.

No system I know of provides resistance against "click here to see
pictures of cute animals" attacks.  This is also true of capability
systems.

> > > O The problems with managing ACLs are well known
> >
> > My main interest is desktop systems.  These systems must be
> > preconfigured, and their configurations do not need to be very
> > dynamic.
> >
> Which is why every process runs with far more authority than it
> needs for the task at hand, making users far more vulnerable to the
> variety of attacks that we see.  I tracked the Microsoft patches for
> 18 months.  Over 3/4 of the vulnerabilities they patched enabled the
> attacker to take control of the running process.  If that process
> had few privileges, the attack wouldn't have been worth doing.
> However, you can't do that if you assume the configuration of
> permissions is nearly static.

However, attackers have the privilege of choosing whatever attack
vector they want.  Hardening solitaire.exe gives you nothing.  I am not
familiar with Windows vulnerabilities, though, so I can't comment on
your analysis.  Were these local or remote attacks?
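
(As an aside, the kind of privilege reduction you describe can be
approximated even on today's POSIX systems when the program
cooperates.  A minimal sketch in Python; the uid/gid values are
hypothetical, and the process must start as root:

    import os

    def drop_privileges(uid=65534, gid=65534):
        # 65534 is commonly "nobody"; use a dedicated uid in practice.
        os.setgroups([])   # drop supplementary groups first
        os.setgid(gid)     # drop the group while still root
        os.setuid(uid)     # finally drop the user; irreversible

This is coarse compared to per-object authority, which I suppose is
exactly your point.)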

> > > O managing rights at fine granularity
> >
> > This presumes a need for managing access at that level, which I only
> > see for a couple of applications.  I am not worried about solitaire.exe.
> >
> It's an everyday problem.  I want to share with you access to a
> document we're working on together.  My only real option is to let
> you mount my drive, but that gives you access to far more than I
> would prefer.

Is this really an option?  The only reasonable option I see today for
sharing a file is to send it over email.  I probably wouldn't even get
through your firewall (at least not legally :).  Even if sharing
documents via capabilities on a local computer were safe, most
collaboration would not occur within a single organizational unit
(node, or cluster for distributed capabilities).  Do you think things
would change?

> > > O User-Group-World is too crude a sharing model.
> >
> > I am not sure which use cases require such fine-grained manipulation of ACLs.
> >
> Alice shares with Bob.  Bob wants to share with Carol, but he can't
> without being allowed to change the ACL.  If it's not convenient for
> Alice to change the ACL for Bob, then Bob will give Carol his
> credentials.  By making sharing at a fine grain impractical, you
> lose security by forcing people to give out all their rights in
> order to do their jobs.

Alice, Bob and Carol can just share copies of the documents they are
working on.  If there is a bigger project, they can create a group
using some collaborative platform, such as a version control system or
a CMS.  This may require the involvement of an administrator, but as
there are usually more requirements anyway, that does little harm.
For example, the administrator may also take care of software updates,
backups, etc.
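
For concreteness, the delegation difference Alan is pointing at looks
roughly like this (a minimal sketch in Python, with made-up names):

    class AclDocument:
        # ACL style: the resource keeps a list of principals.  Bob
        # cannot let Carol read without the right to edit that list.
        def __init__(self, text, readers):
            self.text = text
            self.readers = set(readers)

        def read(self, principal):
            if principal not in self.readers:
                raise PermissionError(principal)
            return self.text

    class CapDocument:
        # Capability style: holding a reference is the permission.
        def __init__(self, text):
            self.text = text

        def read(self):
            return self.text

    acl_doc = AclDocument("draft", readers={"alice", "bob"})
    acl_doc.read("bob")          # fine
    # acl_doc.read("carol")      # PermissionError until Alice edits the ACL

    cap_doc = CapDocument("draft")
    carols_ref = cap_doc         # Bob delegates by handing over the reference
    carols_ref.read()            # fine, no administrator involved

But in practice Bob's escape hatch is to mail Carol a copy anyway,
which is delegation by other means.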

It is true that they cannot easily share computational processes on a
single node on their own.  But how important is that today?  Even if
you could offer me a capability directly to your document, I would
prefer to make a copy of it and work on it offline, locally.  And you
would prefer that as well, as otherwise you risk losing the valued
document to me overwriting it---unless you have processes in place
that take care of versioning the document, backing it up, etc.  If
that is the case, managing the authority is just one piece in a larger
bundle, and the relative cost difference in how the authority is
managed (ACLs, capabilities) may be negligible compared to all the
other things that need to be done.

Of course, I am speculating here, as we don't have systems we can
compare directly.  But it is not true that people today need to give
away all their authority to get the job done.  I can give you an
account on my wiki without giving you an account on the ftp server or
access to my email, etc.

> > > O Object capabilities make it easier to write applications in which
> > > a single breach doesn't compromise the entire program.  Security
> > > reviews of such programs are considerably simpler.
> >
> > From my experience of working on the GNU/Hurd, debugging such
> > applications is a lot harder, because it is harder to identify objects
> > and the processes implementing them, and to follow execution flow
> > across process domains.
> >
> Limiting the permissions individual processes (or objects) have
> reduces the number of places that could have produced some error.
> Take a look at the DARPA Browser report or the recent security
> review of the Waterken server.

I agree that this is also true.  I just disagree with the simplified
view that one system is strictly easier to work with than the other.
That is not my experience.  Some things are easier in one type of
system, some in the other.  Of course, some of this is just a matter
of having the right tools: gdb can follow execution flow across
fork+exec, but not across a capability invocation.  That is a feature
that could be added, but it is more complicated nevertheless.
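
(For concreteness, the fork+exec case is handled by standard gdb
settings; a sketch of the usual commands, assuming a reasonably recent
gdb:

    (gdb) set follow-fork-mode child   # debug the child, not the parent
    (gdb) set detach-on-fork off       # keep both sides under gdb
    (gdb) catch exec                   # stop when the child calls exec

There is no analogous built-in way to say "follow this capability
invocation into the server process".)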

The benefit in security review is probably due to encapsulation and
explicit interfaces.  Comparable benefits can also be achieved by
other means, such as language design or coding conventions.  Explicit
interfaces can straighten one out considerably, but they can also
dampen productivity if the need for them arises too early in the
design process.  To me, it's a wash, and I usually choose strategies
according to requirements.
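
As an example of what I mean: the classic attenuation pattern, where
the explicit interface itself is the review surface.  A minimal sketch
in Python, with made-up names:

    class File:
        def __init__(self, data):
            self._data = data

        def read(self):
            return self._data

        def write(self, data):
            self._data = data

    def read_only(f):
        # Attenuating facet: exposes read() and nothing else.
        class ReadOnlyFacet:
            def read(self):
                return f.read()
        return ReadOnlyFacet()

    f = File("secret plans")
    ro = read_only(f)
    ro.read()          # works
    # ro.write("x")    # AttributeError: the facet has no write()

A reviewer only has to check the facet, not every caller.  But as I
said, a disciplined module boundary in most languages buys you much of
the same.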

I should also mention that different protection domains are
unfortunately not identical to different failure domains.  I had to
learn that the hard way, and it took me a long time to understand the
difference.  Unfortunately, it is much harder to create a new failure
domain than to create a new protection domain.
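
A toy illustration of the difference: two protection domains can still
form one failure domain through a shared protocol.  A sketch in
Python, with made-up names:

    class Server:
        def __init__(self):
            self.locked = False

        def begin(self):
            # The client promises to call end() later.
            self.locked = True

        def end(self):
            self.locked = False

If the client crashes between begin() and end(), the server's state
stays locked forever: isolating the crash did not isolate the failure.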

> More later.  I'm about to disappear into a 2-day meeting :(

Sounds awful.  Good luck!

Thanks,
Marcus


