[cap-talk] bundling designation and authority
smagi at naasking.homeip.net
Thu Oct 13 18:50:58 EDT 2005
David Hopwood wrote:
> URLs are not authorized or unauthorized. The relevant test is whether
> "the *access* he intends to secure is unauthorised" [emphasis added].
> Accesses are associated with a particular context and intent; URLs are not.
> And I maintain that the web was designed to allow URL guessing; there's
> nothing suspicious or "dodgy" about URLs that are guessed or constructed
> from other URLs, just because they are so guessed/constructed.
I agree. The idea that there are no such things as "dodgy" URLs is not
in contention *if* they are divorced from context. Clearly a URL meant to
overflow a buffer is "dodgy" in the context of knowledge that sending it
to a particular server will cause a security breach. I expand further on
this below.
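To make the context-dependence concrete: here is a toy sketch (Python chosen arbitrarily; the 64-byte limit and the URL are invented for illustration) showing that the same URL string is harmless against one server and an overflow attempt against another. The "dodginess" lives in the pairing of URL and target, not in the bytes of the URL.

```python
# Hypothetical scenario: one server copies the request path into a
# 64-byte fixed buffer; another has no such limit. The URL itself is
# syntactically legal either way -- whether it is "dodgy" depends
# entirely on which server it is aimed at.
FIXED_BUFFER = 64  # assumed limit of the vulnerable server (invented)

url_path = "/search?q=" + "A" * 200  # long, but perfectly legal syntax

def overflows(path, buffer_size):
    """True if this path would overrun the server's fixed buffer."""
    return len(path.encode("ascii")) > buffer_size

print(overflows(url_path, FIXED_BUFFER))  # True against the 64-byte server
print(overflows(url_path, 4096))          # False against a roomier one
```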
>>We cannot say what the URL he did or could
>>have fabricated might have granted him.
> It very clearly wasn't going to grant him any access that could reasonably
> be viewed as an attack.
Again, you are assuming a particular site implementation. While it seems
pretty silly to design a site this way, it's certainly not impossible.
>>But he did intend to obtain additional information above and beyond that
>>which was explicitly authorized.
> There's no requirement for the authorization to be "explicit" in the Act.
> I could just as well argue that there is implicit authorization to access
> *any possible* URL on a public website.
> (I'm not arguing that, but I don't see why it is any less reasonable a position
> than the one you're putting forward.)
I think it is reasonable, *from our perspective*. It is not necessarily
reasonable from a layman's, or typical web developer's perspective.
>>>The specific access that Cuthbert made didn't cause, and *could not* have
>>>caused, any harm to the server or any leak of secret information. That
>>>point was not even disputed in the court case.
>>Again, we cannot claim that absolutely.
> If there had been a claim of harm to the server, he would presumably have
been prosecuted under section 2 or 3 of the Act. The Register article also
reported:
> # A witness for BT confirmed that the attack would have had no effect on its
> # server, running Unix Solaris, even if it had not been detected by the IDS.
> # The Crown also accepted that there was no malicious motive in Cuthbert's
> # actions.
In the above, I was not disagreeing with "harm to the server", I was
disagreeing with "leak of secret information". We cannot make
assumptions about how the information on the server is stored.
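To illustrate why one cannot rule out a leak from the URL alone, here is a hedged sketch (not the site in the case; the web root and handler are invented) of a naive server that maps the URL path straight onto the filesystem. Against such an implementation, a fabricated "../" URL really would disclose data.

```python
import posixpath

# Hypothetical naive handler: joins the URL path onto the web root with
# no sanitization. The root path is invented for the sketch.
WEBROOT = "/var/www/site"

def resolve(url_path):
    """Unsafe join: '../' segments can escape the web root entirely."""
    return posixpath.normpath(WEBROOT + "/" + url_path)

print(resolve("index.html"))           # /var/www/site/index.html
print(resolve("../../../etc/passwd"))  # /etc/passwd -- escapes the root
```

Whether the real server was vulnerable this way is beside the point; the point is that nothing in the URL itself tells you it was not.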
>>I do this as well.
> Right. So you're just arguing a Devil's Advocate position?
Only to a certain extent. I can understand the position taken by the
court, or at least see a number of ways they could have reached such a
decision. I'm trying to work out how to reach a more sensible position,
either by discovering some unfounded assumption or a non sequitur in
that reasoning.
I've read some good arguments to this end.
>>I clarified precisely what I meant by "unauthorized access" in a
>>follow-up e-mail. Essentially, the transitive closure of all links
>>served by the web server starting from the homepage are the authorized
>>URLs. The link fabricated was not in this set, thus it is unauthorized.
>>I'm not sure if this logic was used in the case, but it follows directly
>>from the articles I've read and fits with the law in question.
> I don't see how it does fit with the law in question. It seems to be your
I think I summarized it clearly here:
The charges had to satisfy all of the following criteria:
1. "he causes a computer to perform any function with intent to secure
access to any program or data held in any computer;" (he was browsing)
2. "the access he intends to secure is unauthorised;" (?)
3. "he knows at the time when he causes the computer to perform the
function that that is the case." (dependent on #2)
How authorization was determined is currently unspecified. I provided
the following argument as a possibility:
To quote you, "Accesses are associated with a particular context and
intent." Suspecting a phisher, the defendant openly admitted that he
attempted to obtain additional information (intent) not present on the
given web pages (the authorized context).
While individual URLs divorced from context carry no authorization
content, once bound into a web program these URLs do take on meaning.
After all, URIs/URLs are "resource identifiers/locators"; given the
context of a particular web server, each legitimate URL has significant
meaning compared to URLs that would return a 404.
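The "transitive closure" notion I proposed can be sketched as a toy crawl: the authorized set is every URL reachable by following links from the homepage, and anything outside that set fails the membership test. This is only an illustration of the argument; the link graph and URLs below are entirely invented, and Python is used just for the sketch.

```python
from collections import deque

# Toy link graph standing in for a web site: page -> links it serves.
# All page names here are hypothetical.
SITE = {
    "/": ["/donate", "/about"],
    "/donate": ["/donate/thanks"],
    "/about": ["/"],
    "/donate/thanks": [],
}

def authorized_urls(start="/"):
    """Transitive closure of links reachable from the homepage (BFS)."""
    seen = set()
    queue = deque([start])
    while queue:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        queue.extend(SITE.get(url, []))
    return seen

AUTHORIZED = authorized_urls()

print("/donate/thanks" in AUTHORIZED)  # served via links -> True
print("/../../../etc" in AUTHORIZED)   # fabricated, never linked -> False
```

Under this reading, a fabricated "../" URL is unauthorized precisely because no sequence of served links ever reaches it.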
Also keep in mind that a large percentage of web developers out there
probably have little to no knowledge of the RFCs in question; they work
within a framework, like ASP.NET, which hides the details of the
underlying protocols. To them, the argument I have laid out potentially
makes a lot of sense.