QtCassandra::QCassandraRowPredicate row_predicate;
row_predicate.setCount(100);
...
QtCassandra::QCassandraColumnRangePredicate column_predicate;
column_predicate.setCount(100);
...
The Cassandra system allows you to read an array of rows or columns. This is done by a special query command sent to the database system.
The libQtCassandra library offers predicate classes that let you read a set of rows or columns all at once (see the example above). In general, reading more entries at once is better: transferring one large block over the network gives you a higher transfer rate than transferring many small blocks.
So... why are we reading just 100 rows or columns at a time?
The main answer is: because of memory.
It may not look like much, but row and column keys can be really large. Reading many more rows or columns at once could require really large buffers. Now you may think that with computers having 1 TB of memory we should be just fine... Well! The fact is that our primary goal for our backends is to be able to run them on tiny computers: VPSes with as little as 512 MB of RAM and one CPU.
There are several other reasons for keeping batches small. If the computer has to be restarted (e.g., after a kernel upgrade), you want to stop all the currently running processes as quickly as possible. Some processes check for a STOP event on every iteration. In other cases, we run quickly over a set of rows or columns, then check for the STOP event once per batch. A smaller batch therefore keeps the response to STOP fast (another way would be to add a sub-counter inside the loop... making the code more complicated).
Another reason to only deal with 100 rows or columns at a time is to pace our accesses to the database. Yes, Cassandra is really fast, and you can always add more nodes. The truth, however, is that really large requests will eventually block other accesses. 100 rows or columns is already a pretty large amount of data for the database cluster to deal with: it takes some time to gather all that information and return it to you.