[cap-talk] High level dissonance (was: Re: What sparked interest in capabilities)
lists at notatla.org.uk
Fri Feb 22 05:25:55 EST 2008
"Stiegler, Marc D" <marc.d.stiegler at hp.com> wrote:
> ... the human really does, from time to time,
> have to make a decision to say No!, or sometimes Yes!, to
> a dangerous authority request, even after we have made the
> interface reasonable. ...
> they have a CapDesk. I say, of course they are unable to protect
> themselves, you idiots, because you beat them into helpless
> submission. People know perfectly well in the physical world
> never to sign blank checks, but they have been bludgeoned into
> accepting that signing blank checks for everyone and everything
> is a requirement to get any work done in cyberspace.
You are going to run into Shaw's saying (which predates IT mis-training)
"Freedom means responsibility. That's why most men dread it."
> ... example problem with the request for C
> drive authority is not properly solved simply by making it
> impossible for users to grant C drive authority. Users have
> a legitimate need to be able to grant access to the whole C
> ... great place for user interface invention, to assist
> the user in making good decisions). Since the user must be
> able to grant very dangerous authorities, he must therefore
> be able to participate in his own trade offs of trust and
> authorization. Which means that a crucial part of making a
> transition to a CapDesk-style world is imbuing the user with
> the realization that he now has the tools so he can succeed,
> and he should no longer be subservient to the worst software
> he encounters.
I assume the context for the above text is a user who is the owner
(and sole user) of the desktop computer in question.
Different conditions (including making it impossible for users to
grant C drive authority) might apply to end users (as opposed to
desktop support staff) in a company where some of the trade-offs
get decided by someone else.
Central to computer security as I see it is the fact that the overall
behaviour of a system depends on parts contributed by many people
who have different aims. Who's in control? A bit of everybody;
from the hardware designers and manufacturers to the authors (whoever
they are) of all the software, and the people who installed what you
hope was what you wanted, plus everybody who since original installation
has ever had enough access to interfere with whatever properties you are
interested in and can't check by yourself.
With traditional ambient authority, the set of everybody who could
interfere comes close to everybody whose computer has ever sent us
traffic - and everybody who controls _those_ computers, and so on.
This is bad enough for a home user who might manage tripwire-style
validation while booted from a live CD. It gets truly impossible
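For the home-user case, that tripwire-style check can be sketched with
nothing more than sha256sum (a minimal illustration, not the real
Tripwire; the temp directory here stands in for the suspect disk
mounted from a live CD):

```shell
# Minimal sketch of tripwire-style validation: record checksums while
# the files are known-good, verify later from a clean environment such
# as a live CD.  A temp directory stands in for the mounted disk.
workdir=$(mktemp -d)
mkdir -p "$workdir/bin"
printf 'binary-1' > "$workdir/bin/a"
printf 'binary-2' > "$workdir/bin/b"

# 1. Build the baseline while the system is trusted.
( cd "$workdir" && find bin -type f -print0 | xargs -0 sha256sum ) \
    > "$workdir/baseline.sha256"

# 2. Re-check later; sha256sum exits non-zero if anything changed.
( cd "$workdir" && sha256sum --check --quiet baseline.sha256 ) \
    && echo "no changes detected"

# 3. Simulate tampering and check again.
printf 'trojaned' > "$workdir/bin/a"
( cd "$workdir" && sha256sum --check --quiet baseline.sha256 ) \
    || echo "tampering detected"
```

The point of the live CD is that step 2 runs on a kernel and toolchain
the attacker never had write access to; checksums verified by a
possibly-compromised running system prove nothing.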
in my work doing Unix security in a bank, where across 2,000 Unix servers
there are 40,000 root shells a day (that I know of) and many of the
hosts have some degree of trust in one another (via NFS, rsh/ssh,
"enterprise software", clustering etc).
Not to mention vast groups of developers and application
designers who store fixed passwords in applications and whose
favourite filemodes are 0777 and 04777. And that's not just
the internal developers either - IBM and BMC aren't blameless.
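For concreteness about those favourite filemodes (a throwaway
illustration, no vendor's product implied): 0777 makes a file
readable, writable and executable by everyone, and 04777 adds the
setuid bit on top, so the file also runs with its owner's identity.
Auditing for both is a one-liner with find:

```shell
# Illustrate the filemodes named above, in a throwaway directory.
d=$(mktemp -d)
touch "$d/world-writable" "$d/setuid-and-writable"
chmod 0777  "$d/world-writable"        # rwxrwxrwx: anyone may modify it
chmod 04777 "$d/setuid-and-writable"   # same, plus setuid: runs as its owner

# Audit: world-writable regular files, then setuid files.
find "$d" -type f -perm -0002
find "$d" -type f -perm -4000
```

Run against / instead of the demo directory, the second find is a
standard way to enumerate every setuid program on a host.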
(MUCH MUCH more depressing stuff could be written along these lines.)
To stop this madness we need a radical change of direction. Brian Snow
was right to say "we need assurance".
Revocation is related to assurance (at least in current systems) as
when we give someone "temporary" root access to install an application
or whatever, we like to think that he really did give up the root shell
by the end of the shift and that the things he did match what he was
supposed to be doing. Actual verification of this is lacking.
Attitudes to this include:
- A whole root shell is too much to give.
(But we have to start from where we are and some vendor's installation
docs tell you some daft things (like "first put + in /.rhosts") and
somebody is going to want to follow it literally.)
- Some intermediate enterprise s/w allows finer-grained control.
(True, at application level and at significant cost in permissioning
arrangements. Then the organisation comes to see the "enterprise software
interface" as reality and ignores underlying weaknesses.)
- Virtualisation with filters may allow you to throw away unplanned
changes of state.
(Is defining the state filter any easier than going to the goal directly?)
- How much can you rely on retrospective punishment of abusers? After
all, an account is supposedly "the unit of accountability".
(Not that much if it's a free-for-all and nothing can be attributed.)
- If that's the case for root how much worse are the application components
where the real work happens (they can't possibly be better)?
They say (Adams & Sasse quoted in "Security and Usability", ISBN 0596008279)
the users are not the enemy, but Yossarian said "the enemy is anyone
who's trying to get you killed; no matter which side he's on".
When I can't even get people to stop running root cron jobs in directories
of mode 0777, or designing applications where authentication and authority
are in different programs (nobody would run one without the other would they,
that would be cheating), it's enough to make you want MAC. But the MAC that
I want has to include the freedom for people to do their jobs that they know
and I don't. It may not be the C drive but they get to grant something.
And while all that is going on most of the "security work" being done is
- Displacement activity tweaking (or talking about tweaking) the settings
on some tool or other because it exists and it's something we can do
- Gathering more data that some undefined person might perhaps act on
before the next ice age
- Making excuses.
It's time to get back to Dilbertland for another day.