Sun Aug 16 16:37:50 CEST 2009


It's generally agreed that automatic, non-synchronous GC and
finalizers interact badly.
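A minimal Python sketch of the bad interaction (the Handle class and
its limit of 3 are made up for illustration): handles that a deferred,
non-synchronous collector never gets around to finalizing exhaust the
resource long before any memory pressure would trigger a collection.

```python
import gc

class Handle:
    """Stands in for a scarce OS resource (a file descriptor, say).
    Hypothetical: the class and LIMIT exist only for this sketch."""
    open_count = 0
    LIMIT = 3

    def __init__(self):
        self.acquired = False
        if Handle.open_count >= Handle.LIMIT:
            raise RuntimeError("out of handles")
        Handle.open_count += 1
        self.acquired = True
        # A reference cycle, so plain refcounting never frees us and
        # only the (deferred, non-synchronous) tracing GC can.
        self.cycle = self

    def __del__(self):
        # Finalizer: releases the underlying resource.
        if self.acquired:
            Handle.open_count -= 1

gc.disable()  # simulate a collector that simply has not run yet
try:
    for _ in range(Handle.LIMIT + 1):
        Handle()  # every Handle is garbage immediately...
except RuntimeError as err:
    # ...yet the fourth allocation fails: no finalizer has run.
    print(err)  # out of handles
```

Nothing here is live, but the resource is gone until a collection
happens to occur.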

Where does the problem lie?  That depends on how you look at it.

      - GC should be synchronous, so finalizers run the moment an
        object becomes unreachable.

      - Resources should be represented as ``pooled-with-spare'', so
        that they behave more like memory.

Which one is best?  Ultimately I believe that representing all
resources as pooled-with-spare is not realistic: essentially we are
bounded by _physical_ resources, and if there is only one instance,
there is no spare to pool and you had better have synchronous GC.

But, if these resources _are_ modeled as memory, it might work: any
pool that runs out of candidates to hand over issues a _global_ GC to
determine which of its outstanding items are still reachable; the
unreachable ones get finalized and returned to the pool.
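A Python sketch of that idea (Pool, Borrowed and the single-item setup
are all hypothetical): when the pool has nothing left to hand out, it
falls back on a global gc.collect(), counting on the finalizers of
unreachable borrowers to return their items first.

```python
import gc

class Borrowed:
    """A handed-out pool item; its finalizer returns the item."""
    def __init__(self, pool, item):
        self.pool, self.item = pool, item

    def __del__(self):
        self.pool.free.append(self.item)

class Pool:
    """Hypothetical ``pooled-with-spare'' manager: on exhaustion it
    issues a global collection before giving up."""
    def __init__(self, items):
        self.free = list(items)

    def acquire(self):
        if not self.free:
            gc.collect()  # global GC: may run Borrowed.__del__ above
        if not self.free:
            raise RuntimeError("resource genuinely exhausted")
        return Borrowed(self, self.free.pop())

pool = Pool(["the-only-one"])
b = pool.acquire()
b.cycle = b          # cycle: only the tracing GC can reclaim it
del b                # unreachable now, but not yet finalized
b = pool.acquire()   # pool empty -> gc.collect() -> item comes back
print(b.item)        # the-only-one
```

Note that this only works because acquire() can force a truly global
collection, which is exactly the difficulty raised below.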

The global nature of GC makes it rather hard to manage.  This makes me
think that RC-based management really isn't going to go anywhere,
unless you can propagate the ``out of'' event all the way up from the
device drivers to the top-level memory GC.

It is probably possible to remove some of the arbitrary RC-managed
resources (e.g. Unix file descriptors) by ``peeling them open'' to
reveal the real resources underneath (the hardware signalling the
device driver), and have them propagate those signals all the way up
to the top-level GC whenever they occur.