[cap-talk] Reference count based garbage collection seen as flawed
dmbarbour at gmail.com
Wed Jan 4 11:03:55 PST 2012
On Wed, Jan 4, 2012 at 12:51 AM, Jed Donnelley <capability at webstart.com> wrote:
>> I don't agree that there is any problem here in distributed systems.
>> Remote capabilities don't name objects; they name sessions.
> Hmmm. About that we seem to disagree (?). The above seems to imply
> different semantics for "remote capabilities" than for local capabilities.
> Why that? There are often times when I find it difficult to distinguish
> between "remote" and "local" capabilities (tightly coupled systems, shared
> memory systems, etc.). I don't believe there should be such different
> semantics. I believe the semantics should be driven by the most general
> case - namely the distributed case (hence, "Ain't it the truth!").
Treating all objects as remote is very onerous and inefficient. Treating
all objects as local causes problems due to conditions of disruption and
indeterministic latency. Even if your semantics are distribution friendly,
it can be useful to have abstractions just for remote elements. (I use the
notion of a proxy object, which can return disruption conditions.)
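The proxy-object idea above can be sketched roughly as follows. This is a hedged illustration, not the author's actual design: the `RemoteProxy` class, the `Disruption` value, and the transport callable are all hypothetical names invented here to show how a remote call can return a disruption condition instead of raising an arbitrary failure.

```python
from dataclasses import dataclass
from typing import Any, Callable, Union

@dataclass
class Disruption:
    """Explicit first-class value representing a failed or
    disconnected remote interaction."""
    reason: str

class RemoteProxy:
    """Hypothetical wrapper for a remote capability. Every call may
    yield a Disruption instead of a normal result, so callers handle
    disconnection explicitly rather than being surprised by it."""

    def __init__(self, send: Callable[..., Any]):
        # `send` is an assumed transport function: (method, args) -> result
        self._send = send

    def call(self, method: str, *args) -> Union[Any, Disruption]:
        try:
            return self._send(method, args)
        except (TimeoutError, ConnectionError) as exc:
            # Network trouble becomes an ordinary return value,
            # confined to the proxy boundary.
            return Disruption(reason=str(exc))
```

The point is that only code touching a `RemoteProxy` ever sees a `Disruption`; purely local objects keep local semantics.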
> There is a cost to each object and a value to each object. When the cost
> exceeds the value it's time to destroy the object. I realize that sounds
> somewhat flippant, but I believe that is the essence of how decisions have
> to be made about object destruction.
There is also a cost to decide the cost and value of each object, assuming
they are even decidable. If you use heuristic estimates of cost and value,
you will inevitably experience errors.
That may be acceptable, provided the programming model also has
effective resilience (self-healing) mechanisms, libraries or
abstractions at the application layer are even more resilient, and
developers can reason about which objects might be selected for
destruction.
> I don't agree (again from the distributed perspective). I believe that
> codes have to be able to handle the destruction of any subset of their
> objects. Sometimes "bad" things happen. Deal with it.
That would be very onerous and painful to ensure and reason about. If you
want a language that will work at scale, you cannot impose such
requirements on developers. I recommend isolating where disruption,
indeterminism, or inconsistency can be `experienced`. Code should not have
to deal with destruction of arbitrary objects at arbitrary times.
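One way to read "isolating where disruption can be experienced" is that a single boundary function absorbs failure, so inner code never reasons about objects vanishing mid-computation. A minimal hypothetical sketch, with invented names:

```python
def at_boundary(fetch_remote, fallback):
    """Confine disruption handling to one boundary function.
    Inner code receives either the fresh result or the fallback;
    it never has to handle a ConnectionError itself."""
    try:
        return fetch_remote()
    except (TimeoutError, ConnectionError):
        return fallback
```

Inside the boundary, every value is simply present; the distributed uncertainty is dealt with exactly once, at the edge.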
> At any point if there is an issue with resource limitations some resources
> need to be reclaimed. Destroy those with lowest value. That's not to say
> that mistakes were never made, but if made then more care was taken next
I fear your haphazard notion of security.