Wiki page [cwal_gc] by stephan 2014-08-04 20:44:38.
See also: [DataTypes], [MemoryModel]

Notes about how garbage collection works in cwal...

A core requirement for me in a "personal scripting engine" is deterministic destruction (though the ordering need not always be well-defined), leaning strongly towards C++-style scoping rules. Ideally, this would also guarantee that values get finalized in the opposite order of their creation, but it turns out that doing so would require more infrastructure than i'd like to pay for.

<strong>First, an overview of the data model:</strong> "Values" are opaque handles which refer to various concrete types like numbers, objects, arrays, strings, etc. The type system is strongly modeled after JSON/ECMAScript types (and in fact originated as a fork of the [http://fossil.wanderinghorse.net/wikis/cson/|cson JSON library]). Clients can ask a value what type it is and perform other operations on it. Some types have a higher-level representation (strings, objects, and arrays), and such a value can be converted freely between its opaque Value handle and its concrete handle type. As far as the engine is concerned, however, they are all generic Values. Internally the engine tends to separate values into "atoms" and "containers": the former is any type which cannot contain child values, and the latter includes arrays and objects. More details about the Value system can be found in the [MemoryModel] and [DataTypes] pages.
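The atom-vs-container split can be sketched in a few lines of C. This is purely illustrative — the type names and struct layout here are invented for the sketch and are not cwal's actual structures:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative sketch of the atom-vs-container distinction.
   These names are hypothetical, not cwal's real structs. */
typedef enum { TYPE_INTEGER, TYPE_STRING, TYPE_ARRAY, TYPE_OBJECT } val_type;

typedef struct value value;
struct value {
  val_type type;     /* concrete type behind this opaque handle */
  int refcount;
  value **children;  /* non-NULL only for container types */
  size_t childCount;
};

/* Containers are the only types which can hold child values,
   and thus the only ones which can participate in cycles. */
static int value_is_container(value const *v){
  return v->type == TYPE_ARRAY || v->type == TYPE_OBJECT;
}
```

The engine-level distinction matters because only containers ever need cycle handling; atoms can always be destroyed by plain reference counting.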

When a value is created (via concrete-type-specific factory functions) it initially belongs to the scope which is active when it is instantiated. "Belongs to" in this sense is a bit misleading, but is also not entirely incorrect, so we'll simplify here for clarity.

The first tool cwal uses for figuring out when to destroy a value is a conventional reference count. Reference counting is easy to implement, works well, and allows many operations to perform more efficiently by requiring fewer allocations. As anyone familiar with the problem domain probably knows, reference counting breaks down somewhat when it comes to <em>graphs</em> - data structures which can form endless loops via references which lead back to the same container.
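The classic failure case can be demonstrated in a few lines. The types and helpers below are hypothetical stand-ins, not cwal's API; they exist only to show why a naive unref never frees a two-node cycle:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical refcounted node, for demonstration only. */
typedef struct node node;
struct node {
  int refcount;
  node *child; /* one owned reference to another node */
};

static node *node_new(void){
  node *n = calloc(1, sizeof *n);
  n->refcount = 1;
  return n;
}

/* Naive unref: frees only when the count hits zero. */
static void node_unref(node *n){
  if(n && --n->refcount == 0) free(n);
}

/* Build a two-node cycle, drop the external references, and return
   the sum of the surviving refcounts. A result of 2 means both nodes
   leaked: each still holds the other alive with refcount 1. */
static int cycle_leak_demo(void){
  node *a = node_new();
  node *b = node_new();
  a->child = b; ++b->refcount; /* a references b */
  b->child = a; ++a->refcount; /* b references a: a cycle */
  node_unref(a);               /* drop external refs: counts fall */
  node_unref(b);               /* to 1, so neither is ever freed  */
  int sum = a->refcount + b->refcount;
  free(a); free(b);            /* manual cleanup, for the demo only */
  return sum;
}
```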

cwal has experimented with two solutions for catching/breaking cycles during destruction. The first used some reference-counting trickery so that one instance would fool consecutive destructions (via cycles) into doing nothing, all the while fudging its reference count to make sure that it was the one in control of the final destruction. This was relatively cheap to implement and performed reasonably well, but it only worked in limited circumstances, and only when the client participated by asking a value to check itself for cycles before destroying it.

So on to plan B...

i've had this idea for a couple of years that we can bypass a lot of this mess using a simple mechanism: when a value is created, it belongs to the currently active scope. If the value is later referenced from a higher (older) scope, it is moved into that scope for parenting purposes. A value is never "down-scoped" (moved to a newer/lower-level scope), only up-scoped (to an older/higher-level scope). As variables are assigned, values are inserted into containers, etc., a value's ownership always migrates to the highest-level scope from which it is ever referenced ("referenced" = "increment the reference count of it or one of its container parents"). When it comes time to clean up, if no container ends up cleaning the value, its owning scope will clean it up.
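The up-scoping rule reduces to a one-line comparison. The following is a minimal sketch under assumed types (a `level` counter where 1 is the oldest scope), not cwal's implementation:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the up-scoping rule: a value may migrate only to older
   (higher-level) scopes, never to newer ones. Hypothetical types. */
typedef struct scope scope;
struct scope {
  int level;    /* 1 = oldest/top-most; increases with nesting depth */
  scope *parent;
};

typedef struct {
  scope *owner; /* the scope which will eventually clean this value up */
} value;

/* Re-parent v into s only if s is older (lower level) than v's
   current owner. Down-scoping is never performed. */
static void value_rescope(value *v, scope *s){
  if(s->level < v->owner->level) v->owner = s;
}
```

The invariant this buys is the one the cleanup algorithm below relies on: by the time a scope is popped, no older scope can still be referencing any value that scope owns.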

Plan B costs me a scope pointer per value, which i really hate (and is the reason i didn't try it first). But... having the scope in the base value interface allowed us to remove/consolidate some code which otherwise has to distinguish between containers and atoms. It also (it turns out) costs less memory than the arrays we had been using to track ownership, and inherently removed several out-of-memory corners from which we had no recovery strategy. So it was a win overall.

When a scope cleans up, it works like this:

Traverse its list of owned values and reduce each refcount by 1. If the refcount drops to 0 (or is already 0, as is the case for new, as-yet-unreferenced values, known as "probationary" values), doing so indirectly removes the value from the scope's list (an O(1) operation). If that value is a container, it might clean up other values, a side effect of which is that those values are removed from their parent scope (most likely the one being cleaned up now). If, at the end of the loop, the list still has entries, repeat this process again and again, until no entries remain.

What this does is incrementally free up entries and any cycles they participate in. The scope-following mechanism ensures that if a value is being cleaned up by a scope then there can be no other reference to it in a higher-level scope (and all lower-level scopes have been destroyed by this point). By traversing the list repeatedly, we end up weeding out cycles a layer at a time. The core-most value-finalizer function catches nested finalizations caused by cycles and Does the Right Thing(s) to avoid freeing a value more than once. We also use a delayed-destruction mechanism here to prevent such a cycle from stepping on a just-freed cycle gang member (which does happen, but is harmless because of the delayed-gc queue).

Trivia: because cleaning up a value removes it from its scope's list, the iteration described above is not literally possible (we could end up traversing the list into the recycle bin or gc queue). The algorithm <em>actually</em> just unrefs the <em>head</em> of the list: since unreferencing can modify or remove the head, we simply keep unref'ing the current head until the list has no more head.
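The unref-the-head loop can be sketched as follows. This is a simplified model with hypothetical types: it omits container traversal and the gc queue, and only shows why the loop terminates (every pass decrements the head's count, so the head is eventually unlinked):

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch of the "keep unref'ing the head" cleanup loop. The list is
   intrusive: unref'ing a value unlinks it as a side effect, so plain
   iteration over the list would be unsafe. Hypothetical types. */
typedef struct value value;
struct value {
  int refcount;
  value *next;
};

typedef struct { value *head; } scope_list;

/* Unref the head; on reaching zero (or below, for probationary
   values which start at 0), unlink and free it. */
static void list_unref_head(scope_list *li){
  value *v = li->head;
  if(--v->refcount <= 0){
    li->head = v->next;
    free(v);
  }
}

/* Pop-time sweep: unref the current head until there is no head. */
static void scope_sweep(scope_list *li){
  while(li->head) list_unref_head(li);
}
```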

Assuming i can get the value-scope parent/child relationships correct, i believe this approach can handle fairly insane graphs for the low, low cost of a slow-but-sure algorithm. (And it's not <em>that</em> slow since its list manipulation handling changed from arrays to linked lists. The algos are now O(1) and have no extra memory costs.) Testing has proven it to work so far :).

There are cases where a value might get "stranded" for unduly long in a high-level scope, but that case can certainly be handled once the library is far enough along to test this more. The important thing for me, however, is that everything gets cleaned when the appropriate scope is popped. i.e. deterministic finalization (though the exact order of finalization cannot be strictly defined for some cases). i would also like the option of manual management, where there is no scope (or values are handled outside of any scope), but that is of secondary concern (and not yet handled by the API).

How well does this actually work? So far, so good, but much more experimentation is needed before i can claim any sort of victory over cycles. It has been shown to handle several levels of indirection between objects, with references going in both directions and entries criss-crossing along the way. For example, consider the case of an Object which contains a single key/value pair (property): (itself, itself). How to clean that up properly is an interesting exercise, and cwal handles it correctly.

One of the tricks we use to avoid stomping a destroyed/under-destruction object when we encounter cycles is to temporarily "fake" the destruction of containers if they are destroyed during a scope-pop. A freed container will, instead of going into the recycle bin or being immediately freed, go into a delayed-destruction queue. This allows it to be safely referenced (despite its refcount of 0) throughout the destruction run. When the scope has finished popping, the queue's entries are pushed into the recycle bin or (if that's full or disabled) freed. Non-containers do not need to handle the post-death-touching case, since they cannot participate in cycles. This queue is managed using the linked list which Values provide, so it costs us no additional memory and insertion of new entries is O(1). i.e. it costs us no extra memory, very little time, and scales to an arbitrary number of values with no performance hit on the core list operations.
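A minimal sketch of the delayed-destruction queue, again with invented types (the `popping` flag and the reuse of the value's own link pointer are assumptions made for the illustration, not cwal's actual fields):

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch: while a scope is popping, finalized containers are parked
   in a queue instead of being freed, so cycle members can still be
   touched safely. Hypothetical types, not cwal's implementation. */
typedef struct value value;
struct value {
  int isContainer;
  value *next;   /* reused as the gc-queue link: no extra memory */
};

typedef struct {
  value *gcHead;
  int popping;   /* non-zero while a scope-pop is in progress */
} engine;

/* "Free" a value: containers are queued during a pop (an O(1)
   list push); everything else is freed immediately. */
static void value_finalize(engine *e, value *v){
  if(e->popping && v->isContainer){
    v->next = e->gcHead;
    e->gcHead = v;
  }else{
    free(v);
  }
}

/* After the pop finishes, dispose of the queued values (a real
   engine would push them into a recycle bin first, when possible). */
static void gc_flush(engine *e){
  while(e->gcHead){
    value *v = e->gcHead;
    e->gcHead = v->next;
    free(v);
  }
}
```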

Regarding reference counting:

When a new Value is created, its initial reference count is 0, not 1. A value with a refcount of 0 is called "probationary" and may be cleaned up at nearly any time by the cwal engine. The current rule is: any call into the API which takes a <tt>cwal_engine</tt> pointer argument has the <em>potential/permission</em> to clean up probationary values. Once a value is referenced, either via a call to <tt>cwal_value_ref()</tt> or by inserting it into a container (which itself may be probationary!), it is moved out of probationary status and is no longer subject to arbitrary sweeping. The "sweep" operation can be triggered by the client and will immediately clean up any probationary values in the current scope. Probationary values are kept in a separate list from "normal" values, so cleanup is a simple O(N) operation, where N is the number of probationary values. When a value's reference count reaches 1 for the first time, it is moved out of the probationary state and into a "longer-lived" status (and another list).
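The probationary rule boils down to two tiny state transitions. The helpers below are hypothetical illustrations of the rule described above, not cwal's API:

```c
#include <assert.h>

/* Sketch of the probationary-value rule: new values start with
   refcount 0 and are sweepable; the first reference promotes them.
   Hypothetical helpers, not cwal's real functions. */
typedef struct {
  int refcount;
  int probationary; /* 1 while in the sweepable list */
} value;

static void value_init(value *v){
  v->refcount = 0;     /* note: 0, not 1 */
  v->probationary = 1; /* subject to sweep-style cleanup */
}

/* Taking the first reference moves the value into the
   longer-lived list, out of the sweeper's reach. */
static void value_ref(value *v){
  if(++v->refcount == 1) v->probationary = 0;
}
```

The practical consequence for clients is the rule stated above: between creating a value and taking a reference to it (directly or by container insertion), any API call which accepts the engine may legally sweep it away.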
