[cap-talk] Security choices - human vs. automated (was: Google Chrome - browser sandboxed)
capability at webstart.com
Sat Sep 6 19:24:16 CDT 2008
At 12:38 PM 9/6/2008, William Pearson wrote:
>2008/9/6 Raoul Duke <raould at gmail.com>:
> >>> That's because sockets are not reified in current user interfaces.
> >>> That's not to suggest that they could not be, however, although it would
> >> it's an easy problem, but isn't it our job to suck it up and find
> >> something that will be intuitive and understandable for users?
> > (going off into la la land for a bit: to my mind there seem to be 3
> > choices about what to do.
> > 1 is to leave things the way they are today,
> > where the degree of safety is related to the degree of experience and
> > knowledge of the user.
> > 2 is to therefore indoctrinate everybody in the
> > nuances of computer literacy so they may be better judges.
> > 3 is to
> > instead of educating try to map from the computer system to age-old
> > human social signifiers: if there were a way for ui to leverage
> > humans' natural social awareness, that might help; there are real
> > world situations where humans are equipped to have a gut feel for
> > risk. ok and nobody suspects the 4th which i guess could be some
> > incredible AI system that gets it all right w/out any human decision
> > making in the process.)
>Personally I'm going for 4a where the computer system manages its own
>security settings for resources, occasionally getting it wrong but
>able to recover from errors with a bit of feedback on how the system
>is performing from the user.
Hmmm. I wonder if it might help for us to consider the sorts of
choices that a person is qualified to make but that seem beyond
automation, and then to eliminate all other choices (i.e. find
ways for computers to make the choices they can make for themselves).
I'll start with an example. I'm running a tax computation
program. It throws up an open file window in a power box
asking me to give it my last year's tax return. This
running program may be able to do a better job of helping me
to complete my current tax return if it has access to my
previous tax return. I believe I'm the right entity to
make the decision about whether or not to give this running
tax program that access. If I don't give it access, some
of my private information from last year's taxes will be
better protected. On the other hand, much of the information
in last year's tax return may be relevant and save me time
by being available to this year's tax preparation program.
Also, this year's program likely will have access to information
as sensitive as last year's program - though possibly not.
If so, giving it access to last year's data will not result
in a significant additional risk.
I believe I can make a reasonable decision given this choice.
I don't see how such a choice can be automated.
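For concreteness, the tax-program interaction above might be sketched
something like the following. This is just an illustration of the idea,
not any real powerbox API - every name here is hypothetical, and in a
real system the powerbox would live in the trusted shell, not be a
function the application can call with a path of its own choosing:

```python
class FileCapability:
    """A read-only capability: the holder can read this one file and
    nothing else; it never sees the filesystem namespace."""
    def __init__(self, path):
        self._path = path  # held by the capability, not exposed as API

    def read(self):
        with open(self._path) as f:
            return f.read()


def powerbox_request(prompt, user_choice):
    """Trusted-shell code (hypothetical): in a real system this would
    pop up the open-file dialog described above and let the *user*
    pick a file or decline.  Here the user's decision is passed in
    as `user_choice` purely for demonstration."""
    if user_choice is None:
        return None  # the user declined; the program gets nothing
    return FileCapability(user_choice)


def tax_program(last_year_return):
    """Untrusted application code: it can only use whatever capability
    it was handed, so declining costs it nothing but convenience."""
    if last_year_return is None:
        return "starting a blank return"
    return "prefilling from: " + last_year_return.read()
```

The point of the sketch is that the human decision - grant or withhold -
sits entirely in `powerbox_request`; the tax program works either way,
which is what makes the choice one the user can reasonably be asked to make.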
I believe there are many more such choices that fall into
this category. Those that come most readily to mind are
choices where I know the semantics of some information and
its sensitivity, but my software doesn't because I haven't
yet informed it.
Are there choices that can't be automated but are also
unreasonable for humans to make? I can't think of any
off hand. If we can develop such a category, then
perhaps we can work on shrinking it.