[cap-talk] Google Chrome - web browser with sandboxed rendering
wil.pearson at gmail.com
Tue Sep 9 15:38:58 CDT 2008
2008/9/9 David-Sarah Hopwood <david.hopwood at industrial-designers.co.uk>:
> Raoul Duke wrote:
>>> Personally I'm going for 4a where the computer system manages its own
>>> security settings for resources, occasionally getting it wrong but
>>> able to recover from errors with a bit of feedback on how the system
>>> is performing from the user.
>> your point about "getting it wrong" is a great one to make; i would
>> assume there is no way any system will get "it" right all the time, so
>> if one is going to properly do one's job the question of: uhm, gosh,
>> what do we do when things fail or the wrong choice is made? is very
>> important.
>> (in fact, if one could answer it in such a way that there was no
>> danger when things do go wrong because recovery was always an option,
>> then that could free up the rest of the system to be more
>> loosey-goosey.) [...]
> That depends on whether the system "gets it wrong" in the direction
> of permissions that are too tight, or too loose. If too tight, then
> that isn't difficult to recover from. But if they are too loose, then
> even if revocation is possible,
> - confidential data may already have been compromised;
Confidential information is compromised every day in human
organisations, e.g. via social engineering; the question is whether we
can do a comparable job with automation. In the short term, probably
not. In the long term, it is an open question.
> - there is a problem in detecting whether the permissions are
> "wrong", since the system will appear to work fine. So
> there's no way for a human to detect this situation except
> by exhaustive inspection of permissions metadata (which is
> unlikely to happen), and analysis of the transitive authority
> implied by those permissions (which is even less likely to happen).
I'm envisaging a system where most permissions have to be maintained
using credit earned from user feedback in an economy; permissions that
aren't maintained are revoked. A malicious program would therefore
have to be productively enmeshed in this economy, or consistently con
another program within it, in order to breach security in the long
term.
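To make the idea concrete, here is a toy sketch in Python of the kind
of economy I mean. Everything here (class names, credit amounts, the
decay rule) is invented for illustration; it isn't taken from any real
system.

```python
# Hypothetical sketch of a "permission economy": each granted
# permission carries credit earned from user feedback, credit drains
# over time, and a permission whose credit runs out is revoked.

class Permission:
    def __init__(self, name, credit):
        self.name = name
        self.credit = credit
        self.revoked = False

class PermissionEconomy:
    def __init__(self, decay):
        self.decay = decay          # credit drained per maintenance tick
        self.permissions = {}

    def grant(self, name, credit):
        self.permissions[name] = Permission(name, credit)

    def feedback(self, name, amount):
        """Positive user feedback tops up a permission's credit."""
        p = self.permissions.get(name)
        if p and not p.revoked:
            p.credit += amount

    def tick(self):
        """Periodic maintenance: drain credit; revoke exhausted permissions."""
        for p in self.permissions.values():
            if not p.revoked:
                p.credit -= self.decay
                if p.credit <= 0:
                    p.revoked = True

econ = PermissionEconomy(decay=2.0)
econ.grant("net.read", credit=5.0)
econ.grant("disk.write", credit=5.0)
econ.feedback("disk.write", 10.0)   # user rewards behaviour they find useful
for _ in range(4):                  # time passes with no feedback for net.read
    econ.tick()
print(econ.permissions["net.read"].revoked)    # True: credit exhausted
print(econ.permissions["disk.write"].revoked)  # False: kept funded by feedback
```

The point of the sketch is just the shape of the incentive: a program
that stops being useful to the user (or to other programs) stops
receiving credit, and its permissions quietly lapse.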
> Having said all that, if the inferred/default permissions are
> systematically too tight, then there will be too many dynamic
> security dialogs (not all of which can be combined with designations)
> and the user will be effectively trained to always click 'Grant'.
> For that reason, I don't think that putting too much reliance on
> having the system "manage its own settings" is a good strategy.
> I would rather have a system that is highly predictable and
> entirely avoids any use of heuristics in setting permissions.
> I don't think that this necessarily places more work on end-users,
> although it might place more work on application designers and
> implementers.
The value of automating security for a user depends on whether the
computer can do a better job of managing it than the user can. People
comfortable with computer security would probably not benefit from
automation until it is very advanced. But other users, such as the
elderly, may find even primitive automation useful if they lack the
knowledge themselves and have no one who can manage security for them.
I'd agree that simple heuristics are not a good idea. But more
advanced techniques of automation are surely worth researching in the
long term.