cwal_weakref
The cwal_weak_ref class and all related APIs were renamed to cwal_weakref on 2022-04-03, for API consistency with the new cwal_propref and for ease of typing.

The cwal_weakref type (hereafter called CWR or "weak ref") allows a client to take a "weak" reference to a value. This does not change the reference count of the value, but when the value is destroyed (as it eventually will be) the reference is invalidated, so that the user can determine that the value it "weakly" points to has been destroyed. A simple example:

cwal_engine * e = ...;
cwal_value * v = cwal_new_integer( e, 42 );
cwal_value_ref(v); /* take a normal (strong) reference */
cwal_weakref * r = cwal_weakref_new(v);
/* While v is alive, the weak ref resolves to it: */
assert(v == cwal_weakref_value(r));
cwal_value_unref(v); /* drops the final reference: v is destroyed */
/* ...and the weak ref now resolves to NULL: */
assert(!cwal_weakref_value(r));
...
cwal_weakref_free(e, r);

The implementation invalidates all weak references to a value at the moment the value is cleaned up, as opposed to checking "is this address valid?" when cwal_weakref_value() is called. The latter would have the subtle problem that re-allocating values at the same addresses would eventually lead it to Do The Wrong Thing. The overhead is minimal: for non-weak-referenced values there is one additional O(1) lookup at cleanup, and for weak-referenced ones there is an additional O(N) cost (N = the number of weak references to values of the same type), but that is a fast loop.
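To illustrate the address-reuse hazard, here is a sketch using only the API shown above. Whether the second value actually lands at the first one's old address depends on the allocator and the recycler, so the comments describe the hazard rather than a guaranteed outcome:

cwal_value * v1 = cwal_new_integer( e, 42 );
cwal_value_ref(v1);
cwal_weakref * r = cwal_weakref_new(v1);
cwal_value_unref(v1); /* v1 is destroyed and r is invalidated now... */
cwal_value * v2 = cwal_new_integer( e, 43 );
cwal_value_ref(v2);
/* ...so even if v2 is allocated at v1's old address (recycling makes
   that likely), r does not falsely resolve to it: */
assert(!cwal_weakref_value(r));
cwal_weakref_free(e, r);
cwal_value_unref(v2);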

The main use case for weak references is when two (or more) script-bound native values have a mutual relationship and need to know whether their partner is still valid. This is particularly important when GC destroys them, as GC may destroy either side of the partnership first. That can be problematic for parent/child or database/statement relationships, and weak references provide a way for one endpoint to safely determine whether the other end is still alive.
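As a concrete illustration, here is a hedged sketch of the database/statement case. The my_stmt type and its finalizer are hypothetical; only cwal_weakref_value() and cwal_weakref_free() come from the API above:

/* Hypothetical native statement type which weakly references the
   cwal_value wrapping its database, so that its finalizer can tell
   whether GC has already destroyed the db side. */
struct my_stmt {
  cwal_weakref * dbRef; /* weak ref to the db's cwal_value */
};

static void my_stmt_finalize( cwal_engine * e, struct my_stmt * st ){
  cwal_value * dbV = cwal_weakref_value( st->dbRef );
  if( dbV ){
    /* The db is still alive: it is safe to detach this statement. */
  }else{
    /* GC destroyed the db first: skip the detach and avoid touching
       freed memory. */
  }
  cwal_weakref_free( e, st->dbRef );
}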

Unlike values, weak references are not owned by a scope: they are effectively global for a given cwal_engine instance. They have no effect on the lifetimes of the values they point to; they simply act as a flag which gets cleared when the pointed-to value is destroyed. If a client fails to free a weak reference, the engine will free it when the engine itself is destroyed.

Like values, weak references are recycled by default (this is configurable). This makes freeing and reallocating them cheap and fast.

The API supports weak-referencing arbitrary void pointers this way, but for non-Value types it requires that clients register the weak-referenced memory when it becomes available and deregister it when it becomes unavailable.
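A sketch of that register/deregister dance follows. The cwal_weakref_custom_new() and cwal_weakref_custom_invalidate() names are assumptions extrapolated from the 2022 renaming of the older custom-pointer APIs; consult cwal.h for the canonical spellings and signatures:

#include <stdlib.h>

struct my_native { int x; };
struct my_native * n = (struct my_native *)malloc(sizeof(*n));
/* ASSUMED API: register n's address and take a weak ref to it. */
cwal_weakref * r = cwal_weakref_custom_new( e, n );
/* ... use n ... */
/* ASSUMED API: deregister the memory before freeing it, which
   invalidates all weak refs pointing at it: */
cwal_weakref_custom_invalidate( e, n );
free( n );
cwal_weakref_free( e, r ); /* r no longer resolves to n */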

TODOs:

How Weak Refs are Implemented

cwal's engine has a data structure (cwal_ptr_table) which acts somewhat like a set from the C++ STL. It is a hash table which contains only keys (which are their own values) in the form of void pointers. The primary use of the structure is to allow us to create arbitrary "memory X belongs to category Y" mappings. String internalization is implemented this way: when it is enabled, new strings are added to this table. When we create a string, we check this table to see if we have a matching string (the hashing works differently for this particular table, but those are boring details). The string itself does not "know" that it is interned, i.e. it has no flag which indicates that. Instead we add the string (without it "knowing") into the pointer table. When a string is cleaned up, it removes its entry (if any) from the interning table.
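For illustration, the interning flow described above looks roughly like the following. All helper names here are hypothetical; the real logic lives in cwal's internals:

static cwal_string * intern_or_create( cwal_engine * e,
                                       char const * z, cwal_size_t n ){
  cwal_string * s = find_interned_string( e, z, n ); /* hypothetical */
  if( s ) return s;                /* reuse the interned instance */
  s = make_new_string( e, z, n );  /* hypothetical */
  if( s ) ptr_table_add( e, s );   /* the string itself never "knows" */
  return s;
}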

Weak references are done the same way. We first add a memory address (an arbitrary void pointer) to the table, then create a weak reference which wraps that address. When the engine cleans up that memory, it removes the entry from the weak reference table. That process then walks through the active weak references and clears out any which point to that memory. Live weak references are grouped by memory type (integer, object, etc.), so the list of weak refs which needs to be checked/invalidated for any given removal is limited to the weak references to values of the same base type. It is not expected that applications will need more than a few weak references (if any; most apps won't need them at all).
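A minimal sketch of that cleanup pass, under assumptions (cwal's real bookkeeping differs in detail):

/* Weak refs are kept in per-type lists, so only same-typed refs need
   to be scanned when a value dies. The WeakRef type is illustrative. */
struct WeakRef { void const * ptr; struct WeakRef * next; };

static void weakrefs_invalidate( struct WeakRef * sameTypeList,
                                 void const * dyingMem ){
  struct WeakRef * r = sameTypeList;
  for( ; r; r = r->next ){
    if( r->ptr == dyingMem ) r->ptr = NULL; /* clear the "alive" flag */
  }
}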

The pointer table itself is structured quite differently from a normal hash table. Instead of allocating one hash entry per key/value, it creates pages of pointers (the page size equals the conventional hash table size). So if the table has a hash size of 27, it creates pages big enough for 27 void pointers. Each time there is a collision which spans all current pages, it adds a new page. The hashing algorithm, based on the address of the entries, performs quite well in terms of the number of entries it will fit on a page before it has a collision and has to add a new page. i've seen, in some tests, hit rates of 90% in the first page and 99% in the second for certain combinations of table sizes and numbers of entries, but i suspect those were fairly ideal conditions.

In any case, the approach of allocating whole pages ends up costing less memory than allocating individual hash table entries once a table is about 1/3rd full, because the per-allocation memory overhead of individual hash table entries is so high. Allocating whole pages also leads to far fewer total allocations than conventional hash tables require. In testing using lots of pseudo-random strings, i have never seen more than 7 or 8 pages at a time, but in the "mid-sized scripts" i've checked, 2-3 interning pages seems to be the norm. For small scripts, 1-2 pages is the norm.
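To make the paging concrete, here is a minimal sketch, under assumptions, of the layout described above (all types and names are illustrative, not cwal's own): each key hashes to the same slot index on every page, and insertion walks the pages until it finds that slot free, appending a new page only when the slot is occupied on every existing page.

#include <stdint.h>
#include <stdlib.h>

struct PtrPage { void ** slots; struct PtrPage * next; };
struct PtrTable { struct PtrPage * pg; unsigned short hashSize; };

/* Illustrative hash: shift out alignment bits, then take the modulo. */
static unsigned ptr_hash( void const * key, unsigned short hashSize ){
  return (unsigned)(((uintptr_t)key / sizeof(void*)) % hashSize);
}

static int ptr_table_insert( struct PtrTable * t, void * key ){
  unsigned const ndx = ptr_hash( key, t->hashSize );
  struct PtrPage * p = t->pg, * prev = NULL;
  for( ; p; prev = p, p = p->next ){
    if( !p->slots[ndx] ){ p->slots[ndx] = key; return 0; }
    else if( p->slots[ndx] == key ) return 0; /* already in the table */
  }
  /* The slot is occupied on every page: append a fresh page. */
  p = (struct PtrPage *)calloc( 1, sizeof(*p) );
  if( !p ) return -1;
  p->slots = (void **)calloc( t->hashSize, sizeof(void*) );
  if( !p->slots ){ free(p); return -1; }
  p->slots[ndx] = key;
  if( prev ) prev->next = p;
  else t->pg = p;
  return 0;
}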