The main reason to use Cassandra is speed. However, Cassandra lacks the usual SQL join capabilities (i.e. the reason we call SQL databases relational.) In other words, the ability of an SQL statement to join many tables together, without the need for the data to be otherwise connected, is powerful but not available in Cassandra.
In an SQL database, it is customary to extend a table by creating a new table with a one to one connection. For example, a User table is likely to offer a user identifier (user_id) column. You can now create an extension to add an address for the user. That extension creates a new table, User Address, which includes a column named user_id acting as a one to one link to the main User table.
In this case the two tables have a one to one relationship: if there is one user in the User table, there is one corresponding entry in the User Address table. (If your application allows it, a user who did not specify an address has no User Address entry, in which case the address is considered NULL. It is still a one to one connection.)
This case is the easiest to handle in Cassandra: you simply add the address columns directly to the User table. This has no side effects (although by default, assuming you don't force the creation of the address columns, the address will be considered NULL.)
One potential problem is ending up with two plug-ins creating columns with the same name. In Snap! Websites we ask developers to prepend their plug-in name to their column names. For example, if the "page" plug-in wants a layout definition, it could use a column named "page_layout"1.
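This naming convention is easy to enforce with a tiny helper; a minimal sketch (the function name plugin_column is hypothetical, not part of Snap! Websites):

```cpp
#include <string>

// Build a column name namespaced by its plug-in, following the
// "<plugin>_<name>" convention described above, so two plug-ins
// cannot accidentally create columns with the same name.
std::string plugin_column(const std::string& plugin, const std::string& name)
{
    return plugin + "_" + name;
}
```

For example, plugin_column("page", "layout") yields "page_layout".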
This case has no built-in solution; you have to handle it manually.
Note that Cassandra does not offer a serial type2; however, the name (key) of a row already has to be unique. Thus, the row name serves as your serial value.
A one to many relationship can therefore be created using the row name: the other table includes one column which is set to the name of a row in the first table.
In our User table example, we now want to offer an Address book. This address book is a one (user) to many (addresses) relationship. The Address table includes one column holding the user row name.
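Under this scheme, retrieving a user's addresses amounts to scanning the Address table for rows whose user column matches the user's row name. A minimal in-memory sketch in C++ (the struct and function names are illustrative, not libQtCassandra code):

```cpp
#include <map>
#include <string>
#include <vector>

// Each address row stores the row name of the user it belongs to,
// mimicking the "user name" column described above.
struct AddressRow
{
    std::string user_row;  // row name of the owning user
    std::string street;
};

// The Address "table": row name -> row content.
using AddressTable = std::map<std::string, AddressRow>;

// One user, many addresses: collect the row names of all addresses
// whose user column matches the given user row name.
std::vector<std::string> addresses_of(const AddressTable& table,
                                      const std::string& user_row)
{
    std::vector<std::string> result;
    for (const auto& [row_name, row] : table)
    {
        if (row.user_row == user_row)
        {
            result.push_back(row_name);
        }
    }
    return result;
}
```

In a real deployment this scan would be replaced by a query against the Address table (or an index on the user column, see below).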
You can also define this the other way around, with a column in the User table that you manage as an array of address references. The column could hold text listing the row names of the addresses, and each entry could also include a type (i.e. preferred address, bank address, preferred theater, etc.)
Here is an example of how such a column could look:

user_addresses
-----------------
*,home_address
B,bank_address
T,theater_address
...
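Assuming the column holds one type,row_name entry per line as in the example above, parsing it back into pairs is only a few lines of C++ (a sketch; the one-letter type codes and the function name are illustrative):

```cpp
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Parse a user_addresses column of the form "<type>,<row name>" per
// line (e.g. "*,home_address") into (type, address row name) pairs.
std::vector<std::pair<char, std::string>> parse_addresses(const std::string& column)
{
    std::vector<std::pair<char, std::string>> result;
    std::istringstream in(column);
    std::string line;
    while (std::getline(in, line))
    {
        // expect a one-character type, a comma, then the row name
        if (line.size() >= 3 && line[1] == ',')
        {
            result.emplace_back(line[0], line.substr(2));
        }
    }
    return result;
}
```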
This technique should only be used for smaller one to many cases. If the number of items can grow very large, using a separate table is wiser.
At times, it is useful to have a many to many connection. In SQL, this is generally achieved using one extra table which lists pairs of identifiers, one identifier for table A and one for table B. With that you can link multiple rows of table A with multiple rows of table B.
Coming back to our User and Address example, we could create an Address table shared between users. For example, a user has a home address that multiple people (a family) may live at, a business address that every employee works at, and a bank address that different people bank at.
In this case, Cassandra can manage the tables the same way as SQL: we want a User table and an Address table that are independent. To link a user to an address, we use an intermediate table which has two columns holding row names: the user row name and the address row name. It may include a third column with the reason for the link (i.e. home, bank, favorite cinema, etc.)
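The intermediate table boils down to a set of (user row, address row, reason) triples that can be traversed in either direction. A minimal in-memory sketch (names are illustrative):

```cpp
#include <string>
#include <tuple>
#include <vector>

// One entry of the intermediate (link) table: a user row name, an
// address row name, and the reason for the link (home, bank, etc.).
using Link = std::tuple<std::string, std::string, std::string>;
using LinkTable = std::vector<Link>;

// All address row names linked to a given user.
std::vector<std::string> user_addresses(const LinkTable& links, const std::string& user)
{
    std::vector<std::string> result;
    for (const auto& [u, a, reason] : links)
    {
        if (u == user)
        {
            result.push_back(a);
        }
    }
    return result;
}

// All user row names linked to a given address; this is the "many"
// in the other direction, e.g. a family sharing one home address.
std::vector<std::string> address_users(const LinkTable& links, const std::string& address)
{
    std::vector<std::string> result;
    for (const auto& [u, a, reason] : links)
    {
        if (a == address)
        {
            result.push_back(u);
        }
    }
    return result;
}
```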
The other way to handle this case in Cassandra is to create a new column in the User table and a new column in the Address table. These two new columns are vectors of Address row names and User row names respectively. Assuming the row names all have the same length, each of these columns is a simple C array. Something like this:
typedef char row_name[64];            /* fixed-length row name (size is illustrative) */
typedef row_name row_names_ary[100];  /* vector of row names (size is illustrative) */
This is obviously not very dynamic, but it should give you an idea. Assuming the name of a row has to be ASCII, you could use the \n character as a separator and your column would then become text with one row name per line.
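With ASCII row names and \n as the separator, serializing the vector and splitting it back are both trivial; a sketch (not libQtCassandra code):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Join row names into one column value, one row name per line.
std::string join_row_names(const std::vector<std::string>& names)
{
    std::string result;
    for (const auto& n : names)
    {
        if (!result.empty())
        {
            result += '\n';
        }
        result += n;
    }
    return result;
}

// Split the column value back into individual row names.
std::vector<std::string> split_row_names(const std::string& column)
{
    std::vector<std::string> result;
    std::istringstream in(column);
    std::string line;
    while (std::getline(in, line))
    {
        result.push_back(line);
    }
    return result;
}
```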
As mentioned in the one to many case, the array technique should only be used when you have a known, relatively small number of references appearing in such columns. I would also add that, for search purposes, the additional table is generally a much better bet than having to load and parse a complex column.
Newer versions of Cassandra support a simple indexing mechanism. It works the same way you would create an index by hand: an index table is a set of rows and columns used only for the purpose of indexing another table's content.
By handling your own indexing, you can get fancy (i.e. index on multiple columns, tweak the data before inserting it in the index, etc.) However, you have to maintain the index yourself. This means making sure the index is updated each time you insert a new column and, when the indexed data changes, updating the corresponding index entry.
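Maintaining an index by hand means two writes per change: update the data, then update the index, removing the stale index entry when the indexed value changes. A sketch with plain maps standing in for the data and index tables (all names are illustrative):

```cpp
#include <map>
#include <set>
#include <string>

// Data "table": row name -> indexed value (e.g. a page title).
// Index "table": value -> set of row names having that value.
struct IndexedTable
{
    std::map<std::string, std::string> data;
    std::map<std::string, std::set<std::string>> index;

    // Insert or update a row, keeping the index in sync: when the
    // indexed value changes, drop the old index entry first.
    void set(const std::string& row, const std::string& value)
    {
        auto it = data.find(row);
        if (it != data.end())
        {
            index[it->second].erase(row);  // remove the stale entry
        }
        data[row] = value;
        index[value].insert(row);
    }

    // Look up all rows with a given value through the index only.
    std::set<std::string> find(const std::string& value) const
    {
        auto it = index.find(value);
        return it == index.end() ? std::set<std::string>{} : it->second;
    }
};
```

The same two-step discipline applies whether the "tables" are maps or Cassandra column families; forgetting the erase step leaves dangling index entries.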
For the efficiency of our Snap! lists, we most certainly want to handle our own indexes, and since we'd do that, we should certainly look into doing so for all the data we handle, using the same mechanism.
Counters have been available since Cassandra 0.8.0. There are several reasons why we'd want to use counters; however, remember that these are not atomic (they are consistent, just not atomic.)
Counters are useful to count the number of times a user accessed a page, the number of times a page was edited, the number of times you deleted content, etc. All sorts of statistics can be managed with those.
As an analogy, you can think of all the objects saved inside Cassandra as similar to C++ objects. A C++ object is not connected with any other C++ object unless you add a pointer in said C++ object. That pointer represents a connection. If you need a vector of pointers, you get a Many to One or Many to Many type of connection.
The Cassandra system allows for any number of tables, rows, and cells (a cell has a column name, but when you save a value in a column of a row, it really is a cell.) These are accessed using a very simple syntax such as:
cluster["context"]["table"][QString("row")][QString("column")] = value;
value = cluster["context"]["table"][QString("row")][QString("column")];
Table names are limited to letters, digits, and underscores. Row and column names are not limited; they can be anything, up to 64Kb. The cell value can be binary, up to 2Gb, although to handle a 2Gb cell with Cassandra you would probably need 8Gb of RAM just to read that ONE cell. We want to limit our data to about 64Mb to avoid a huge burden on the RAM, especially if we want multiple such processes to run simultaneously.
Using the libQtCassandra library to access the Cassandra environment, we create QCassandraValue objects that hold the data we want to save and the data we read from Cassandra. The value object can very easily be used with all basic C types, QString and QByteArray:
In other words we can create columns that are based on all of these types of values without having to handle our own serialization. The data is saved in big endian which allows for binary comparison and thus sorting.
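Big endian encoding is what makes byte-wise comparison agree with numeric order. A quick illustration for unsigned 32 bit values (a standalone sketch, not the libQtCassandra serializer itself):

```cpp
#include <array>
#include <cstdint>
#include <cstring>

// Encode an unsigned 32 bit integer in big endian order,
// most significant byte first.
std::array<unsigned char, 4> to_big_endian(uint32_t v)
{
    return { static_cast<unsigned char>(v >> 24),
             static_cast<unsigned char>(v >> 16),
             static_cast<unsigned char>(v >> 8),
             static_cast<unsigned char>(v) };
}

// Byte-wise comparison (as memcmp does) of big endian encodings
// matches numeric order; on a little endian encoding it would not
// (e.g. 256 would compare below 1).
bool bytes_less(const std::array<unsigned char, 4>& a,
                const std::array<unsigned char, 4>& b)
{
    return std::memcmp(a.data(), b.data(), 4) < 0;
}
```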
The Null type is actually equivalent to the empty string or byte array (no data at all.)
QStrings are handled as UTF-8 values for internationalization.
The QByteArray type is used whenever a binary block of data needs to be saved.
For example, when saving an image in a page, we use a QByteArray for the image itself.
Snap! Websites save all their data in the Cassandra database. This is very important since the Cassandra system is used to replicate the data3 on all the nodes so any one node can return the data and regenerate the entire page.
Although such files should be saved in static data folders on each Apache server for extremely fast retrieval, this is only done when the data is required on that computer and only when the data is public.
More information about caching is defined in the page.