cson  cson_sessmgr_whio_epfs

ACHTUNG: THIS PAGE IS NOW MAINTAINED IN THE NEW WIKI: http://whiki.wanderinghorse.net/wikis/cson/?page=cson_sessmgr_whio_epfs

See also: cson_session, cson_sessmgr_cpdo, cson_sessmgr_whio_ht, cson_sessmgr_file

Embedded Filesystem-based cson_session Storage

(Added 20110421.)

The "whio_epfs" session manager is not compiled in by default, but if the library is built with it then this session manager provides session persistence via an "embedded filesystem container file" (EFS, for short). Multiple processes may use the EFS - it uses fcntl()-style locking if that is enabled when the library is built and the underlying storage appears to support it. The session IDs are the internal filenames and the JSON session data are the file contents. It is inherently much slower than cson_sessmgr_whio_ht, but "manual" management of individual sessions is much simpler compared to that session manager.

The configuration JSON object to be passed to cson_sessmgr_load("whio_epfs",...) looks like:

   {
     "file": "/path/to/file.whio_epfs"
   }

The EFS file must already exist and be writable by the session-using process. The EFS files can be created programmatically using the whio_epfs API or using the command-line tool whio-epfs-mkfs.

For example:

~> whio-epfs-mkfs sessions.whio_epfs \
    --inode-count=4096 \
    --block-size=16384 \
    --namer=ht \
    --label='cson_session storage'

That will create sessions.whio_epfs with a capacity for (--inode-count - 1) sessions (1 inode is used internally). The sessions may grow arbitrarily large, and the EFS will expand as necessary. We can set a maximum number of data blocks for the EFS by specifying the --block-count=# parameter. With that parameter, we can optionally force the EFS to zero-fill all available space in advance with the --fill-blocks flag. Normally the EFS grows automatically up to its limit (default is no inherent limit other than the allowable numeric ranges), but this approach ensures that the EFS file size never changes.

The sessions are stored as pseudofiles within the EFS and can be manipulated using the whio_epfs C API or using the various whio_epfs tools.

For example:

~> whio-epfs-ls sessions.whio_epfs 
whio_epfs container file [sessions.whio_epfs]:
Label:	[cson_session storage]
Inode #	   Size	  Mod. Time (Local)	Name
     1    35219   2011-04-21 15:04:02	<([whio_epfs_namer_ht.whio_ht])>
     2       59   2011-04-21 15:04:04	abacab

Totals: 2 of 4096 inodes take up 35278 bytes.

That oddly-named inode #1 holds the internal list of inode names. Clients MUST NOT remove or directly modify the contents of that inode, or the EFS will effectively be corrupted. The name for inode #2 is the session ID used in the test application for the session manager.

The modification time of each entry will always be the last time it was saved, and this can be used for determining when to remove a given entry. Removing entries frees up their data blocks for use by other sessions.

We can export, peruse, or modify the EFS-held data (except for the contents of inode #1, as mentioned above) using the whio_epfs API or the various whio_epfs command-line tools, like so:

~> whio-epfs-cp sessions.whio_epfs -x 2=-
{"timestamp":1303394644, "hits":3, "sessionManager":"epfs"}

Pedantic note: session data are normally small, which implies that we should build the EFS containers with a small block size (say 4-8kb). However, the inode-to-name mappings are (in this use case) stored as a pseudofile within the EFS itself. That file (inode #1 in the above examples) has a size proportional to the maximum inode count, the number of named inodes (inode entries do not require names in this API), and the lengths of those names. That is, it can grow relatively large and should, for performance reasons, span as few storage blocks as possible. It can live with a 2kb block size, but performance will suffer in that case.