[cap-talk] Cap vs. cap + password
smagi at naasking.homeip.net
Sun Nov 27 12:32:03 EST 2005
David Wagner wrote:
> There will be some intent behind it, but the intent may not match the
> effect. The user may intend to show a correspondent a copy of this web
> page. That much is intentional. The part where the correspondent also
> gains all the rights and capabilities of the user might not have been.
Obviously, but we come full circle to my point that the user must be
informed that their authority is bundled in the URL. As I said in a past
e-mail, "don't forward a link unless you want the recipient to see what
you see, and be able to do what you do".
There is always some form of access control. With typical webapps, would
the user forward the link and provide their login information to their
correspondent? I'm not sure how the cap-URL approach is any worse than
typical webapps for intentional forwarding. I think the only real concern
is unintentional leaking, like your example below and the others I cite.
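To make that concrete, here's a minimal sketch of how a server might mint
such URLs (the names and the /cap/ path are mine, not the waterken API):

  import java.security.SecureRandom;
  import java.util.HashMap;
  import java.util.Map;

  /** Maps unguessable tokens to the objects they designate. */
  final class CapMinter {
      private final SecureRandom rng = new SecureRandom();
      private final Map<String, Object> table = new HashMap<String, Object>();

      /** Bind target to a fresh 128-bit token and return its cap-URL. */
      String mint(Object target) {
          byte[] bits = new byte[16];
          rng.nextBytes(bits);
          StringBuilder token = new StringBuilder();
          for (byte b : bits) {
              // two lowercase hex digits per byte
              token.append(Integer.toHexString((b & 0xFF) | 0x100).substring(1));
          }
          table.put(token.toString(), target);
          // Whoever holds this URL wields the authority it designates;
          // forwarding the URL forwards the authority.
          return "https://example.net/cap/" + token;
      }

      Object lookup(String token) {
          return table.get(token);
      }
  }

The point is that possession of the string *is* the access decision; there
is no separate login step at which the recipient could be stopped.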
> Also, it's not always possible to line up incentives the way you suggest.
> Frequently we use access control to control access to a resource that is
> shared among several users. If any one of those users misuses the
> resource, that could harm all the other users. Now all of the users are
> dependent on each other, and unintentional leakage of a YURL hurts not only
> the leaker but also all the leaker's collaborators.
In a proper design, each user would actually have a revocable forwarder,
so at most the forwarder itself would be compromised. But again, this is
not a problem with cap-URLs per se, but with any shared resource. I'm not
sure how user authentication-style apps are any better here.
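Something like the classic caretaker pattern; a minimal sketch, with a
made-up interface:

  /** Some shared resource, reduced to one method for illustration. */
  interface Resource {
      String read();
  }

  /**
   * A revocable forwarder (caretaker): each user is handed their own
   * forwarder to the shared resource, and the forwarder, not the
   * resource, is what gets minted into that user's cap-URL.
   */
  final class RevocableForwarder implements Resource {
      private Resource target; // null once revoked

      RevocableForwarder(Resource target) {
          this.target = target;
      }

      public String read() {
          if (target == null) {
              throw new IllegalStateException("access revoked");
          }
          return target.read();
      }

      /** Severs this one user's access; other holders are unaffected. */
      void revoke() {
          target = null;
      }
  }

If a user's cap-URL leaks, revoke that user's forwarder and mint them a
new one; the underlying resource and the other users' forwarders are
untouched.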
> I'll give you another example where unintentional leakage could occur.
> Much of the current web infrastructure doesn't treat URLs as very secret.
> URLs are stored in log files, in caches, and manipulated by NAT boxes.
> They're leaked by adware and by services for finding related content.
> They're revealed by Referer: headers. If you mistype a URL in the browser's
> "Location:" bar, that URL might get sent to the browser's default web
> search engine. The latter could lead to unintentional leakage of secrets.
This is a good example of unintentional leakage too. It always seems to
come back to the browser being the source of leakage. There is no
webapp/browser equivalent of a "powerbox".
Do browsers perform this automatic search-engine lookup even for SSL URLs?
Perhaps browsers are less cavalier with user info for URLs that are
supposed to be "secure" (since a mistyped URL submitted to a search
engine could be used to hijack a session in current systems).
>>A policy stating "don't ever do this" seems pretty clear-cut.
> If it were this easy, "secret URLs" would be an effective way of managing
> access to web pages. Yet that doesn't seem to have worked in practice.
Perhaps an example would illustrate this better; can you provide one?
Assuming the URLs are indeed "secret" (unguessable, not linked from any
publicly accessible pages, etc.), I suspect it's because the people who
created these "secret URLs" didn't understand the web, browsers, etc.
browsers, etc. See the section titled "Surely listing sensitive files is
asking for trouble?":
"Someone may publish a link to your files on their site. Or it may turn
up in a publicly accessible log file, say of your user's proxy server, or
maybe it will show up in someone's web server log as a Referer. Or
someone may misconfigure your server at some future date, "fixing" it to
show a directory listing."
>>>"Safe link" sounds like a great idea. What exactly is the definition
>>That's a good question. I imagine it largely depends on context.
> Ok. If it depends on context, how does the browser compute a "safe
> link" from a YURL? Does there need to be a special protocol, implemented
> by every web server, by which the browser can ask the server for a safe
> version of this YURL? I hope the servers get this right.
The browsers wouldn't have anything to do with this. The application
would have to delineate "safe" and "unsafe" content.
> Transitively read-only isn't necessarily enough. If I'm viewing an Amazon
> product page for a left-handed smoke shifter, and I want to send a copy
> to a friend, I might not want them to be able to view the contents of
> my shopping cart, even though there is a link on the Amazon product page
> to my shopping cart.
The programmer is best informed as to which content belongs to the user
and which content is generic. A "safe link" in this context constitutes
the URL of the global shared state (i.e., the product information). The
application itself would have to provide "safe linking" features, not the
waterken server (though the server could perhaps provide a default
mechanism, such as read-only, copy-on-write, etc.).
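A sketch of what I have in mind, with hypothetical names (nothing here is
actual waterken API):

  /** A page mixing global shared state with user-private state. */
  final class ProductPage {
      private final String productUrl; // global: the product information
      private final String cartUrl;    // private: this user's shopping cart

      ProductPage(String productUrl, String cartUrl) {
          this.productUrl = productUrl;
          this.cartUrl = cartUrl;
      }

      /** What the logged-in user sees: product info plus their cart link. */
      String render() {
          return productUrl + " (your cart: " + cartUrl + ")";
      }

      /** The "safe link": only the URL of the global shared state. */
      String safeLink() {
          return productUrl;
      }
  }

Only the application knows that productUrl is generic and cartUrl is
user-private, which is why the delineation can't be pushed down into the
server.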