[cap-talk] Capabilities and Freedom vs. Safety
marcus.brinkmann at ruhr-uni-bochum.de
Mon Jul 23 05:35:10 EDT 2007
At Fri, 20 Jul 2007 11:27:29 -0400,
"Jonathan S. Shapiro" <shap at eros-os.com> wrote:
> Yes. And at this point we should note a philosophical issue. I ran into
> it while talking with the Hurd folks. The following is *my*
> characterization of the discussion. I am sure that I will not represent
> their views quite right, but I shall try. Definitions taken from
It's close, but I want to refine one statement:
> They argue that it is stupid to hide things from someone who
> can scan the drive.
That's not quite my opinion. Obviously, if the drive is encrypted,
scanning it doesn't do much good. What I do claim is that DRM is
harmful, and futile in the limit.
DRM is, in the limit, entirely futile. DRM intends to give only
controlled access to information. But information ultimately cannot
be controlled (as Thomas Jefferson correctly observed in his
letters). The qualifiers "in the limit" and "ultimately" are
important. As the "trusted computing" machinery shows, you can
control a single machine pretty extensively. But DRM-restricted
media is ultimately supposed to end up in human eyes and ears. At
that point, it must leave the technically restricted system and become
free. At that point, it can be captured and distributed without
restriction. This is the analog gap.
There are several reasons why this is not as bad as it sounds:
1. Confined programs can't prevent replay and statistical sampling
from being used to improve quality. Information retrieval technology
has advanced tremendously, to the point where it is often
harder to get rid of information than to recover it.
2. The effort of redigitization only needs to be done once for all
copies, so its marginal cost is zero. Also, the effort required is
proportional to the quality level desired, which is very low in most
cases (it's unbelievable what crap people put up with on P2P networks;
on the other hand, film masters are sometimes leaked by insiders
before the official release).
3. The redigitization is theoretically complete: all information that
leaves the DRM box can be captured, and fundamentally, from a purely
information-theoretic point of view, the box does not contain more
than what can be extracted.
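Point 1 can be sketched numerically. The following is a minimal
illustration, not anyone's actual capture pipeline: the signal values
are made up, and it assumes each analog re-capture adds independent
Gaussian noise. Averaging many captures then recovers the original far
more accurately than any single capture can.

```python
import random

random.seed(0)

# Hypothetical "true" analog signal samples (illustrative values only).
signal = [0.5, -0.2, 0.8, 0.1]

def capture(sig, noise_std=0.2):
    """One analog re-capture: the original samples plus independent noise."""
    return [s + random.gauss(0, noise_std) for s in sig]

def combine(captures):
    """Statistical sampling: average many independent captures per sample."""
    n = len(captures)
    return [sum(vals) / n for vals in zip(*captures)]

one = capture(signal)
many = combine([capture(signal) for _ in range(100)])

err_one = max(abs(a - b) for a, b in zip(one, signal))
err_many = max(abs(a - b) for a, b in zip(many, signal))
# Averaging 100 independent captures shrinks the noise standard deviation
# by a factor of sqrt(100) = 10, so err_many is almost always << err_one.
```

The design point is the same one the argument makes: the DRM box cannot
stop the attacker from sampling its output as often as they like, and
independent noise averages away while the signal does not.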
The second-to-last part of the Harry Potter series was turned into
an e-book within hours of its official release. The final part was
leaked to the public several days before the official release,
despite a 20-million-dollar effort by the publisher to suppress it.
While DRM is futile in the limit, it nevertheless causes real harm in
several ways. It's a wasteful use of our resources (as if backups
weren't hard enough to get right already!). As a perversion of
copyright law, it blocks cultural progress (see Lessig, Free Culture).
As an access restriction system, it will be a tomb for data that will
be lost to future generations (in particular for media that are not
popular these days; see above). It also invades the private space of
people's *personal* computers.
Recently, the lost tapes of the NASA moon landing were rediscovered in
the basement of a physics lecture hall in Perth. They can now be
recovered. If they had been subject to DRM and encrypted at the time,
would that still be possible?
> Unfortunately, the desire for freedom is not perfectly aligned with the
> need for safety. By safety, I mean "the need to preserve and maintain an
> environment that preserves the ability to *exercise* freedom
> consistently and effectively in practice".
"Security" is meaningless unless one specifies subject and object,
i.e., who is being protected against what. One important lesson of the
whole DRM story is that the world has to be protected *from* the
security community. The Sony rootkit has made it patently clear that
we need to consider ourselves as a possible source of security
threats. Under attack are access to and availability of data, the
integrity of the personal computer and its data, and the balance of
power in the information society.
You raise an interesting point about networked systems:
> In the real world, systems are networked. One consequence is that your
> mistakes have a negative impact on me, in the sense that your
> compromised system becomes a basis for attacking mine. The interaction
> of your freedoms and my freedoms must therefore be considered. In
> consequence, some of the actions you might take on your machine cease to
> be exercises of freedom and begin to be exercises of license: actions
> that undermine the freedom of others or the liberties of society.
The problem you describe is real enough. The solution you offer is
tempting, but potentially dangerous. History seems to favor loosely
coupled networks of peers over restricted networks. Wikipedia is
successful, while Nupedia faltered. The Internet is successful, while
BTX faltered. This may mean that the (perceived) benefit of open
networks exceeds their costs even at a global scale.
I am sure you can give me estimates of the harm done by security
defects in the current internet, or at least give examples and quickly
point me to research about it. I realize that this harm is very real
and can also be very personal. I don't want to diminish it. But what
about the harm done by access restrictions? Who is going to quantify
that harm? Naturally, people working in the security industry have
little incentive to quantify the costs of their technology beyond
initial deployment. But that doesn't mean it is insignificant. Ross
Anderson has done some work in this area, but much more is needed.
In the absence of an extensive cost/benefit analysis, on what basis
should such decisions be made? Clearly we all have ideological biases
and preferences, but if we ask what is best for society, we need
something more substantial than that.
> valid reasons for their choices. Ultimately, my disconnect with their
> architectural philosophy is that they do not appear to accept the goal
> of supporting the oblivious safety of others. Perhaps that is not fair;
> it is possible that either they haven't gotten this far in their
> thinking yet, or that they have addressed the issue in some way that I
> have not yet discerned.
I support the goal of oblivious safety of others. I do not
necessarily support the means you propose (as far as I understood
them). In particular, I do not artificially restrict myself to
technological protection measures and free-market principles to
achieve this goal. In fact, even if we can solve the problem on the
Internet by technology, we face the same issues in many other domains
of human life, such as environmental pollution and nuclear
proliferation. These are extremely pressing large-scale issues which
require very different strategies than the ones we have discussed:
strategies of equitable living conditions, education, and cultural
change, to give just a few examples. And with apologies to Marc, I
don't think that defensive living and US hit-teams will help. No man
is an island.