Cassandra seems to start but it's not running

When you start Cassandra, it runs in the background. If it crashes or otherwise fails, the error may or may not appear on your screen; if you have set up logging properly, the details will be in the logs. Until all of that is configured correctly, though, you may have trouble figuring out why Cassandra won't start.

First of all, you can use the -f flag to start Cassandra in the foreground, like this:

.../bin/cassandra -f

That way the error information appears directly on your screen, and from there you should be able to understand what's wrong.

In my case, the first problem I ran into was a crash caused by a stack size that was too small.
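If you hit a similar crash, one way to address a too-small stack, assuming the crash comes from the JVM thread stack, is to raise the -Xss setting. This fragment is only a sketch; the 256k value and its placement in conf/cassandra-env.sh are assumptions to adjust for your version:

```shell
# Hypothetical fragment for conf/cassandra-env.sh (assumption: the crash
# is a JVM thread stack overflow). Raise the per-thread stack size:
JVM_OPTS="$JVM_OPTS -Xss256k"
```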

Today I experienced another problem while setting up a cluster, and I wasn't sure what was wrong. Running with -f gave me the answer: I had an invalid IP address. Copying configuration files between nodes easily produces that kind of mistake.

The listen_address and rpc_address parameters probably always use the same IP address; at least they do in my case. Both are defined in conf/cassandra.yaml (where you configure other addresses too), so remember to fix them on each node.
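As a sketch, the two parameters in conf/cassandra.yaml would look like this on one node; the address 10.0.0.1 is a placeholder, and each node needs its own IP here:

```
listen_address: 10.0.0.1
rpc_address: 10.0.0.1
```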

You may also want to use nodetool to check your cluster ring. If a node is down or unreachable, it will be missing from that list. The connection may be broken because a node's firewall is blocking Cassandra: with the defaults, you want to open ports 9160 and 7000. Port 9160 is used by clients to access the data, and port 7000 is used for communication between nodes.

./nodetool -h <host> ring
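To check whether a node's firewall actually lets those ports through, here is a quick sketch using bash's /dev/tcp redirection; the 10.0.0.1 address is a placeholder for one of your nodes:

```shell
# Sketch: probe a TCP port on a node using bash's /dev/tcp redirection.
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "</dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "port ${port}: open"
  else
    echo "port ${port}: closed or filtered"
  fi
}

# 10.0.0.1 is a placeholder; replace with the node you want to test.
# 9160 is the client (Thrift) port, 7000 the inter-node port.
for port in 9160 7000; do
  check_port 10.0.0.1 "$port"
done
```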

Once all the nodes are visible, the cluster is fully functional. However, keep in mind that keyspaces replicate only if you ask the cluster to do so. This is done by setting the replication factor to an appropriate value (which very much depends on your cluster setup). To change the setting, use the following command:

UPDATE KEYSPACE my_cluster WITH strategy_options = {cluster_name:3};

The name "my_cluster" is a keyspace. Each keyspace can have its own strategy, but if you want to replicate them all, make sure to run that command on each of them. The "cluster_name" comes from the topology file in my case. I'm not sure what the default would be if you did not edit the topology by hand.

Once you have run that command, you must make sure all the nodes get updated. This is done by running the repair command:

./nodetool -h <host> repair my_cluster
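To run the repair on every node in one go, something like the loop below works; the node list and the keyspace name are assumptions, and the echo makes it a dry run so you can review the commands first:

```shell
# Sketch: run the repair on each node of the cluster in turn.
# The node addresses and keyspace name are assumptions; remove the
# echo to actually execute nodetool against a real cluster.
for host in 10.0.0.1 10.0.0.2 10.0.0.3; do
  echo ./nodetool -h "${host}" repair my_cluster
done
```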

The replication factor may be set to 3 if you have 4 or 5 nodes, for example (it should be around (n / 2) + 1). If you have 3 or fewer nodes, use the number of nodes for better backup capabilities.
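That rule of thumb can be sketched as a small helper; this is just my own formulation of the formula above, not anything official:

```shell
# Sketch of the rule of thumb above: use n itself for 3 nodes or fewer,
# otherwise roughly (n / 2) + 1 (integer division).
recommended_rf() {
  local n=$1
  if [ "$n" -le 3 ]; then
    echo "$n"
  else
    echo $(( n / 2 + 1 ))
  fi
}

recommended_rf 5   # → 3
```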

Snap! Websites
An Open Source CMS System in C++
