Mark S. Miller
markm at cs.jhu.edu
Fri Feb 2 10:02:08 CST 2007
Dr. Pierre Vignéras wrote:
>> Note that a buffer does not necessarily hold just bits: for example,
>> if a program starts threads which end up waiting for some condition
>> before proceeding, the collection of waiting threads constitutes a
> I don't see where this is different to a deadlock. What are the basic
> conditions for a gridlock (what is actually locking)? In your example, it
> seems that we have a sort of producer/consumer algorithm.
By deadlock (or, more precisely, control-lock), we mean a control-flow loss of
progress based on locking used for the scarcity-free logic of the program. By
gridlock, we mean a similar condition, but where the (sometimes implicit)
locking was needed to deal with scarce resources, rather than logical issues.
The test to distinguish these:
Given that the program got stuck at point A, if you had had more resources,
would that by itself have prevented it from getting stuck at point A? (Yes, a
gridlocking program may then proceed to exhaust your greater store of
resources.) In the limit, as the resources available approach infinite
resources, does the length of the program run before it gets stuck also
approach infinity? If so, then it's gridlock, not (in our terminology) deadlock.
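The limit test can be exercised with a toy example (Python, my illustration): a producer fills a bounded buffer while a consumer refuses to drain it until a whole batch is present. Whether the run gets stuck depends only on the buffer size, so by the test above this is gridlock, not deadlock.

```python
import queue
import threading

def run(buffer_size, batch=4, timeout=1.0):
    """Return True if the run completes within `timeout` seconds."""
    buf = queue.Queue(maxsize=buffer_size)
    done = threading.Event()

    def producer():
        for i in range(batch):
            buf.put(i)                 # blocks while the buffer is full

    def consumer():
        # Refuses to drain until the whole batch is buffered.
        while buf.qsize() < batch:
            if done.wait(0.01):        # just a polling sleep here
                return
        for _ in range(batch):
            buf.get()
        done.set()

    for target in (producer, consumer):
        threading.Thread(target=target, daemon=True).start()
    return done.wait(timeout)
```

With `buffer_size=2` the producer blocks on a full buffer and the consumer waits forever for a batch that cannot fit; with `buffer_size=4`, i.e. more resources and nothing else changed, the same program completes.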
Since computational models often assume infinite resources (e.g., Turing,
Lambda-calculus, Actors), such a program, when imagined to run in terms of its
computational model, does not get stuck. A deadlocking program still gets
stuck. Put another way, we can explain the loss of progress of a deadlocking
program within its scarcity-free computational model. We cannot so explain the
loss of progress of a gridlocking program.
> By the way, you use the term gridlock. Why Grid?
It's a term used to describe a traffic situation in which a set of cars jointly cannot proceed because other cars are blocking the intersections. If you took the same situation and made the streets longer (or the cars shorter), it wouldn't gridlock.
>>> In the case of a gridlock, can we recover from it at runtime (For
>>> example, by increasing the buffer space and restarting something
>>> (request or thread, ...))?
Often yes. For E, regarding inter-vat gridlock, definitely yes, because the E
computational model admits partition. This allows the implementation to
destroy an inter-vat connection at will, reporting a partition to the vats on
either side. We have not yet made use of this technique.
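The recovery the question asks about can be sketched as an outer loop (Python; the names and the stand-in task are hypothetical, and this is not E's actual mechanism): detect the stall, grow the buffer, restart.

```python
def recover_by_growing(task, size=1, max_size=64):
    """Retry `task` with a doubled buffer each time it stalls.
    If it is gridlocked, some size eventually succeeds; if it is
    deadlocked, no amount of resources will, so we give up at max_size."""
    while size <= max_size:
        if task(size):
            return size        # enough resources: the gridlock cleared
        size *= 2              # more resources, then restart
    raise RuntimeError("still stuck at max_size: likely deadlock")

# Stand-in for a restartable computation that gridlocks unless the
# buffer can hold its whole batch of 4 pending items (hypothetical).
def needs_four(buffer_size):
    return buffer_size >= 4
```

`recover_by_growing(needs_four)` fails at sizes 1 and 2, then succeeds at 4, which is exactly the gridlock signature; a task that never succeeds at any size would instead hit the deadlock branch.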
Text by me above is hereby placed in the public domain