ACHTUNG: THIS SITE'S WIKI IS NOW MAINTAINED AT: http://fossil.wanderinghorse.net/wikis/cson/. The wiki documentation on this site is therefore unmaintained. However, this site is still cson's home for all purposes other than the wiki.
Welcome to cson, a C API for working with JSON data
cson (pronounced "season") is a C library providing an object-oriented API for working with JSON data. cson is the genetic step-brother (if that makes any sense) of libnosjob, and its overall model is similar to that library's.
Code State: seems to work as advertised/documented.
License: the core library is released under a dual Public Domain/MIT license. The underlying parser code has a BSD-ish "keep this copyright in place" license with a "do no evil" subclause. (So you evildoers out there must replace the underlying parser before using this code. Optionally, you evildoers may use the output-only parts of this library as you wish, as those parts are not hindered by the "do no evil" clause.)
Author: Stephan Beal
Downloading: See the download page.
This web site is a Fossil source code repository, containing the source code, a wiki, bug tracker, etc., for this project. To be able to download the code or use most of the hyperlinks on this site, you must click the /login link and log in as "anonymous" with the password shown on the screen. This is to prevent bots from downloading every version of every file in the repository or traversing the whole history of every source file.
Features include:
- High-level, object-oriented C API for generating and consuming JSON data.
- Compiles cleanly in C89 and C99 on 32- and 64-bit platforms.
- Gets its input from, or sends its output to, arbitrary sources/destinations using callbacks. It includes implementations for handling FILE and string input/output, but can easily be extended to support custom sources/destinations. 3rd-party i/o APIs can be wrapped this way by implementing two small functions (one for input, one for output). I/O is streamed, as opposed to being fully buffered.
- Can report the nature and exact position of parse errors when reading JSON.
- Accepts input in ASCII, UTF8, and (theoretically, anyway) UTF16. It internally uses, and outputs, UTF8.
- Fairly well documented: more than 30% of the code is API documentation, and this site's wiki contains a good deal of information about how to use it.
- The optional cson_session API facilitates the storage/loading of persistent application session data in JSON format. Out of the box it supports file-based and database-based persistence (sqlite3 and MySQL).
- cson_cgi is an optional API extension which simplifies the creation of JSON-centric CGI applications.
- The optional cson_sqlite3 and cson_cpdo extensions allow clients to easily export JSON data from sqlite3 and MySQL databases.
Caveats and known limitations:
- Objects/Arrays must not form cycles (circular references) at any depth. Doing so will lead to endless loops, memory corruption, crashes, and/or leaks.
- When using high-precision floating point values, some precision might get lost along the way. We use the libc defaults, which generally offer 6 decimal places of precision. We currently have no special support for the "infinity" and "NaN" (not-a-number) values.
- In order to keep ownership of underlying memory reasonably manageable, it has to malloc() quite often (compared to my other C libraries), almost always in small amounts (typically under 32 bytes on 64-bit builds, less on 32-bit builds). It doesn't allocate any more (and possibly much less) than a typical C++ JSON library using the STL, though.
- Because it represents JSON as object trees, and is designed to input/output such trees, it cannot be used to stream arbitrarily large JSON data. In other words, its memory costs are proportional to the size of the input/output JSON trees. That said, reading and writing JSON is stream-based, and the input/output is not buffered in memory (per se, though the object trees could be considered to be a form of buffering). In the general case JSON is not, due to its tree structure, suitable for streaming arbitrarily large amounts of data.
- Parsing JSON requires a good deal of memory, and i'm not certain which parts can be further optimized. In my mid-sized test data sets (with a mixture of data types) i'm averaging about 76 bytes per key/value pair taken from parsed input (on 64-bit builds), including the infrastructure (e.g. lists) for managing that data, parser-side buffering, etc. That isn't all that bad, considering all that's going on, but it seems a bit high to me. JSON consisting mostly of integers takes, on average, less than half that when the "large void pointer" optimization is enabled (64-bit only). On 32-bit builds we get a significant memory saving out of the box because pointers cost only half as many bytes.
See the News page.