[cap-talk] definition of the term "safe language"
Jonathan S. Shapiro
shap at eros-os.org
Sat Apr 10 20:29:49 PDT 2010
On Sat, Apr 10, 2010 at 2:36 PM, Sandro Magi <naasking at higherlogics.com> wrote:
> On 10/04/2010 3:22 PM, Jonathan S. Shapiro wrote:
>> Three "litmus test" questions:
>> 1. Suppose we have an existing, safe language. We now add to it a new
>> abstraction, but we implement the protection of that abstraction
>> imperfectly. This could be either a deficiency of comprehension, of
>> specification, or of implementation. It doesn't matter for purposes of
>> my question. The key thing is that the new, incomplete abstraction
>> does not break any pre-existing contract. Should we declare that the
>> new version of the language is "unsafe"?
> 1. Unsafe if you're talking about the specification, since you can now
> express programs that violate an abstraction; doesn't matter that
> existing programs just happen not to break anything. If the
> specification is safe but the implementation contains a bug, the
> implementation is unsafe and is not faithful to the spec.
I was careful in the way I framed this. It is not merely that existing
programs do not "happen" to break anything. I framed the problem in
such a way that no *future* program -- including those that make use
of the partially complete abstraction -- can break any property that
was previously upheld by the language. The enhanced language is "safe"
in every respect that the original language was safe. Since the
language was previously safe, and no safety has been lost, it is
difficult to see why the enhanced language should somehow be unsafe.
One problem with your position is that it will simply lead to
arguments about what abstractions a given language does or does not
support. This will rapidly devolve into fights about (a) what is an
abstraction and what isn't, and (b) which abstraction claims the
language purports to make. In this case, I was careful to state that
the new abstraction's protection was known and acknowledged to be
imperfect. It is much more sensible to say that the particular
abstraction isn't fully present than to declare the language as a
whole unsafe.
The other problem is that your approach invites invidious marketing.
Under your definition, we can have a language A that makes fewer
abstraction claims but is safe, and a language B that protects a
superset of the abstractions of A even though it fails to enforce some
particular new abstraction. Under your definition, we would declare
that B is unsafe, and we would thereby mis-represent A as "better"
than B, even though this is untrue.
What I firmly dislike about your definition is that it invites people
to "game" the discussion. I would very much prefer either of the
following:
1. That there is some list of well-established properties
(abstractions) of interest, that we declare this list to define what
we mean by safety in our conversation, and that we then rule languages
in or out according to an agreed set of criteria.
2. That we allow languages to state which properties they do and do
not enforce/defend (that is: to specify safe with respect to *what*),
and then examine whether those properties are sufficient to meet our
needs.
>> 2. What about for an abstraction that the language doesn't actually
>> *claim* to support, but does actually support in an imperfect way?
> 2. Can you give me an example? I can interpret this question a number of
> ways.
Sure. People here have been bitching about Java floating point. Java
makes no claims to the computational sensibility of non-strict FP
(aside: the entire notion of computational sensibility for IEEE
floating point is a proposition yet to be demonstrated anywhere). Use
of non-strict FP does not violate any other safety property of Java,
and no claims are made about FP in non-strict mode. Does the presence
of non-strict FP render Java unsafe merely because it isn't what you
wanted? I do not believe so.
>> 3. A language might provide "divide by zero" safety by re-defining the
>> operation to return zero in this case (this is exactly the kind of
>> thing that ACL2 does to make LISP functions complete over their
>> domains). The result is safe, but nonsense. Should such a language be
>> considered safe? What abstraction has been violated that justifies
>> withholding the label "safe"?
> 3. Safe. No abstraction has been violated, even though the semantics may
> be unfamiliar or useless.
I do not agree. The language achieves its alleged safety by violating
a very fundamental abstraction: arithmetic!
If a language chooses to implement consistent and well-founded but
unusual semantics, and thereby achieve safety, then it seems to me
that we are debating whether one choice of semantics is better than
another.
But if a language chooses to implement non-sensical or ill-founded
semantics to support the claim of safety, then I think that's
"cheating". I would argue that the entire rationale of safety is to
ensure the consistency and well-founded behavior of the programming
semantics, and that achieving one at the cost of the other constitutes
a failed (i.e. unsafe) result.
>> It isn't pointers that make C# unsafe. It's pointer arithmetic. A
>> distinction, regrettably, that seems to have been neglected in the
>> specification of C#.
> To be more specific, it's the combination of pointer arithmetic and
> dereferencing that's unsafe.
I concur, modulo dependent types. So to be even more precise: it's the
inability of current type systems to sufficiently capture and
constrain the combination of pointer arithmetic and dereferencing.