
Saturday, April 12, 2014

Learn and play with cassandra 2.0.6 snapshot and restore

Snapshots have existed in cassandra since as early as version 0.4.0 beta. Today, we are going to learn about cassandra snapshots. Note that if you run snapshot on one node in a cluster, it only snapshots that node. If you want to snapshot all nodes in the cluster, it is much more efficient to use a parallel ssh tool such as clusterssh or pssh.
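For example, with pssh, something like the following would trigger the snapshot on every node in one go (hosts.txt is a hypothetical file listing one node per line):

pssh -h hosts.txt -i "nodetool snapshot jw_schema1 -cf users"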

Fundamentally, when a snapshot is executed, cassandra flushes memtables and hard links the sstables into a snapshot directory. Be aware that although the hard links are initially almost free, a snapshot pins the sstables it references: on a node with a large load, the retained files can eventually require up to twice the disk space, and snapshotting a large number of sstables may spike the I/O activity on that node too.
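When a snapshot is no longer needed, its disk space can be reclaimed with nodetool clearsnapshot; a minimal example that drops all snapshots for this keyspace:

nodetool -h localhost clearsnapshot jw_schema1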

Let's get down to work.

First, ensure the table (column family) has at least some data.
cqlsh:jw_schema1> select * from users;

 user_id | age | first | last  | middle
---------+-----+-------+-------+--------
       3 |  34 |  john | smith |      a
       2 |  35 |  olee | smith |      b
       1 |  33 |   dan |   bar |      c

(3 rows)
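The table definition is not shown in this post, but a sketch like the following (column types are assumptions inferred from the output above) would produce the sstables, including the idxAge and idxLast secondary index files, listed later:

CREATE TABLE users (
  user_id int PRIMARY KEY,
  age int,
  first text,
  last text,
  middle text
);
CREATE INDEX idxAge ON users (age);
CREATE INDEX idxLast ON users (last);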

Then take a snapshot. For instance, here I only take a snapshot of the keyspace jw_schema1 and its table users. When this runs, cassandra will flush the data to sstables before the snapshot is taken. For options such as giving the snapshot a meaningful name, check out nodetool help.
jason@localhost:~$ nodetool -h localhost snapshot jw_schema1 -cf users
Requested creating snapshot for: jw_schema1 and table: users
Snapshot directory: 1397292720524
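If you prefer a meaningful name over the timestamp directory above, nodetool snapshot accepts a tag via -t; for instance (before_upgrade is just an example tag):

nodetool -h localhost snapshot jw_schema1 -cf users -t before_upgrade

The snapshot would then be written to a snapshots/before_upgrade directory instead.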

The snapshot made will be stored under the <data_file_directories> that you set in the cassandra.yaml file, in the table's snapshots subdirectory. So for instance,
jason@localhost:/var/lib/cassandra/data/jw_schema1/users$ ls -l snapshots/1397292720524/
total 96K
-rw-r--r-- 2 cassandra cassandra 16 Apr 12 16:52 jw_schema1-users.idxAge-jb-1-Filter.db
-rw-r--r-- 2 cassandra cassandra 54 Apr 12 16:52 jw_schema1-users.idxAge-jb-1-Index.db
-rw-r--r-- 2 cassandra cassandra 76 Apr 12 16:52 jw_schema1-users.idxAge-jb-1-Data.db
-rw-r--r-- 2 cassandra cassandra 43 Apr 12 16:52 jw_schema1-users.idxAge-jb-1-CompressionInfo.db
-rw-r--r-- 2 cassandra cassandra 4.3K Apr 12 16:52 jw_schema1-users.idxAge-jb-1-Statistics.db
-rw-r--r-- 2 cassandra cassandra 79 Apr 12 16:52 jw_schema1-users.idxAge-jb-1-TOC.txt
-rw-r--r-- 2 cassandra cassandra 68 Apr 12 16:52 jw_schema1-users.idxAge-jb-1-Summary.db
-rw-r--r-- 2 cassandra cassandra 16 Apr 12 16:52 jw_schema1-users.idxLast-jb-1-Filter.db
-rw-r--r-- 2 cassandra cassandra 58 Apr 12 16:52 jw_schema1-users.idxLast-jb-1-Index.db
-rw-r--r-- 2 cassandra cassandra 87 Apr 12 16:52 jw_schema1-users.idxLast-jb-1-Data.db
-rw-r--r-- 2 cassandra cassandra 43 Apr 12 16:52 jw_schema1-users.idxLast-jb-1-CompressionInfo.db
-rw-r--r-- 2 cassandra cassandra 4.3K Apr 12 16:52 jw_schema1-users.idxLast-jb-1-Statistics.db
-rw-r--r-- 2 cassandra cassandra 79 Apr 12 16:52 jw_schema1-users.idxLast-jb-1-TOC.txt
-rw-r--r-- 2 cassandra cassandra 75 Apr 12 16:52 jw_schema1-users.idxLast-jb-1-Summary.db
-rw-r--r-- 2 cassandra cassandra 16 Apr 12 16:52 jw_schema1-users-jb-1-Filter.db
-rw-r--r-- 2 cassandra cassandra 45 Apr 12 16:52 jw_schema1-users-jb-1-Index.db
-rw-r--r-- 2 cassandra cassandra 206 Apr 12 16:52 jw_schema1-users-jb-1-Data.db
-rw-r--r-- 2 cassandra cassandra 43 Apr 12 16:52 jw_schema1-users-jb-1-CompressionInfo.db
-rw-r--r-- 2 cassandra cassandra 4.3K Apr 12 16:52 jw_schema1-users-jb-1-Statistics.db
-rw-r--r-- 2 cassandra cassandra 79 Apr 12 16:52 jw_schema1-users-jb-1-TOC.txt
-rw-r--r-- 2 cassandra cassandra 59 Apr 12 16:52 jw_schema1-users-jb-1-Summary.db

If you run md5sum on the data files of the snapshot and of the live data, they match identically.
jason@localhost:/var/lib/cassandra/data/jw_schema1/users$ md5sum snapshots/1397292720524/*Data*
3d4351d714500417c74de6811b1eae3b snapshots/1397292720524/jw_schema1-users.idxAge-jb-1-Data.db
a430a2d65c0a504fe3ab06344654a89a snapshots/1397292720524/jw_schema1-users.idxLast-jb-1-Data.db
13798e1ffb5ed6a871d768399f54b125 snapshots/1397292720524/jw_schema1-users-jb-1-Data.db
jason@localhost:/var/lib/cassandra/data/jw_schema1/users$ md5sum *Data*
3d4351d714500417c74de6811b1eae3b jw_schema1-users.idxAge-jb-1-Data.db
a430a2d65c0a504fe3ab06344654a89a jw_schema1-users.idxLast-jb-1-Data.db
13798e1ffb5ed6a871d768399f54b125 jw_schema1-users-jb-1-Data.db
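In fact, the snapshot files are not copies at all but hard links to the live sstables, which is also why the link count in the earlier listing is 2. You can confirm this by comparing inode numbers:

ls -li jw_schema1-users-jb-1-Data.db snapshots/1397292720524/jw_schema1-users-jb-1-Data.db

Both lines should report the same inode number.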

A snapshot is not meaningful if you cannot restore it back to the node. So from this point onward, we will take a look at how to restore the snapshot back into the node.

Surprisingly, there is no dedicated restore command; you might expect something like nodetool restore backup, but it does not exist. Rather, there are a few ways to restore the given snapshot sstables:

  1. you can use sstableloader (see the sketch after this list),

  2. copy the sstables into <data_file_directories>/jw_schema1/users/ and refresh them, either by calling loadNewSSTables via jconsole or by using nodetool refresh (also sketched below),

  3. use a node restart method.
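For reference, a minimal sketch of the first two methods (the address and backup path are made up for illustration; note that sstableloader expects the directory path to end in keyspace/table):

# stream the sstables into the cluster
sstableloader -d 192.168.0.2 /path/to/backup/jw_schema1/users

# or, after copying the sstables into the table's data directory:
nodetool -h localhost refresh jw_schema1 users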


It sounds like a lot of work to use either of the first two methods, so I'm just gonna try the last method of restoring the snapshot sstables.

In order to verify that our backup can actually be restored, we are going to simulate a few failures (disk failure, accidental deletion) here:

  1. copy the snapshot backup somewhere else.

  2. shut down cassandra, then delete the commitlogs and the table's data files.


Okay, let's continue setting up the simulation environment.
jason@localhost:/var/lib/cassandra/data/jw_schema1/users$ cp -r snapshots/ ~/cassandra/
jason@localhost:/var/lib/cassandra/data/jw_schema1/users$

jason@localhost:/var/lib/cassandra/data/jw_schema1/users$ sudo /etc/init.d/cassandra stop
jason@localhost:/var/lib/cassandra/data/jw_schema1/users$

jason@localhost:/var/lib/cassandra/data/jw_schema1/users$ cd /var/lib/cassandra/commitlog/
jason@localhost:/var/lib/cassandra/commitlog$ ls -lh
total 2.6M
-rw-r--r-- 1 cassandra cassandra 32M Apr 11 18:52 CommitLog-3-1397213531634.log
-rw-r--r-- 1 cassandra cassandra 32M Apr 12 18:38 CommitLog-3-1397213531633.log
jason@localhost:/var/lib/cassandra/commitlog$ sudo rm -rf *
jason@localhost:/var/lib/cassandra/commitlog$ cd ../data/jw_schema1/users/
jason@localhost:/var/lib/cassandra/data/jw_schema1/users$ sudo rm -rf *
jason@localhost:/var/lib/cassandra/data/jw_schema1/users$

So we have copied the snapshot to a cassandra directory under the home directory, stopped cassandra, and removed all commitlogs as well as the table users in keyspace jw_schema1. Note that in this case the schema for table users still exists, as schemas are stored in the system keyspace.

And now we will copy the snapshot from the home directory back into the cassandra data directory.
jason@localhost:/var/lib/cassandra/data/jw_schema1/users$ sudo cp -r ~/cassandra/snapshots/1397292720524/jw_schema1-users* .
jason@localhost:/var/lib/cassandra/data/jw_schema1/users$ ls -lh
total 96K
-rw-r--r-- 1 root root 76 Apr 12 18:50 jw_schema1-users.idxAge-jb-1-Data.db
-rw-r--r-- 1 root root 43 Apr 12 18:50 jw_schema1-users.idxAge-jb-1-CompressionInfo.db
-rw-r--r-- 1 root root 16 Apr 12 18:50 jw_schema1-users.idxAge-jb-1-Filter.db
-rw-r--r-- 1 root root 54 Apr 12 18:50 jw_schema1-users.idxAge-jb-1-Index.db
-rw-r--r-- 1 root root 4.3K Apr 12 18:50 jw_schema1-users.idxAge-jb-1-Statistics.db
-rw-r--r-- 1 root root 68 Apr 12 18:50 jw_schema1-users.idxAge-jb-1-Summary.db
-rw-r--r-- 1 root root 79 Apr 12 18:50 jw_schema1-users.idxAge-jb-1-TOC.txt
-rw-r--r-- 1 root root 43 Apr 12 18:50 jw_schema1-users.idxLast-jb-1-CompressionInfo.db
-rw-r--r-- 1 root root 87 Apr 12 18:50 jw_schema1-users.idxLast-jb-1-Data.db
-rw-r--r-- 1 root root 16 Apr 12 18:50 jw_schema1-users.idxLast-jb-1-Filter.db
-rw-r--r-- 1 root root 58 Apr 12 18:50 jw_schema1-users.idxLast-jb-1-Index.db
-rw-r--r-- 1 root root 4.3K Apr 12 18:50 jw_schema1-users.idxLast-jb-1-Statistics.db
-rw-r--r-- 1 root root 79 Apr 12 18:50 jw_schema1-users.idxLast-jb-1-TOC.txt
-rw-r--r-- 1 root root 75 Apr 12 18:50 jw_schema1-users.idxLast-jb-1-Summary.db
-rw-r--r-- 1 root root 43 Apr 12 18:50 jw_schema1-users-jb-1-CompressionInfo.db
-rw-r--r-- 1 root root 206 Apr 12 18:50 jw_schema1-users-jb-1-Data.db
-rw-r--r-- 1 root root 4.3K Apr 12 18:50 jw_schema1-users-jb-1-Statistics.db
-rw-r--r-- 1 root root 45 Apr 12 18:50 jw_schema1-users-jb-1-Index.db
-rw-r--r-- 1 root root 16 Apr 12 18:50 jw_schema1-users-jb-1-Filter.db
-rw-r--r-- 1 root root 79 Apr 12 18:50 jw_schema1-users-jb-1-TOC.txt
-rw-r--r-- 1 root root 59 Apr 12 18:50 jw_schema1-users-jb-1-Summary.db
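One thing to watch out for: because the files were copied with sudo, they are now owned by root, as the listing above shows. Since the earlier snapshot listing suggests the daemon runs as the cassandra user, it is worth restoring ownership so cassandra can compact and delete these sstables later:

sudo chown cassandra:cassandra jw_schema1-users*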

So far it looks good. Now, if you tail the cassandra system.log and start cassandra, notice that the sstables are being read. If, during this downtime, writes owned by this node were missed, you should now run nodetool repair to make sure the data is in sync.
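Starting the service the same way it was stopped (the log path below is the Debian package default and may differ on your installation):

sudo /etc/init.d/cassandra start
tail -f /var/log/cassandra/system.log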
INFO [main] 2014-04-12 18:52:32,555 ColumnFamilyStore.java (line 254) Initializing jw_schema1.users
INFO [SSTableBatchOpen:1] 2014-04-12 18:52:32,568 SSTableReader.java (line 223) Opening /var/lib/cassandra/data/jw_schema1/users/jw_schema1-users-jb-1 (206 bytes)
INFO [main] 2014-04-12 18:52:32,701 ColumnFamilyStore.java (line 254) Initializing jw_schema1.users.idxLast
INFO [SSTableBatchOpen:1] 2014-04-12 18:52:32,719 SSTableReader.java (line 223) Opening /var/lib/cassandra/data/jw_schema1/users/jw_schema1-users.idxLast-jb-1 (87 bytes)
INFO [main] 2014-04-12 18:52:32,802 ColumnFamilyStore.java (line 254) Initializing jw_schema1.users.idxAge
INFO [SSTableBatchOpen:1] 2014-04-12 18:52:32,810 SSTableReader.java (line 223) Opening /var/lib/cassandra/data/jw_schema1/users/jw_schema1-users.idxAge-jb-1 (76 bytes)

jason@localhost:~/$ nodetool -h localhost repair jw_schema1 users
[2014-04-12 18:59:57,477] Starting repair command #1, repairing 1280 ranges for keyspace jw_schema1
..
[2014-04-12 19:00:50,800] Repair command #1 finished

Now we will check whether our data is still there.
jason@localhost:~/$ cqlsh 192.168.0.2 9160 -k jw_schema1
Connected to just4fun at 192.168.0.2:9160.
[cqlsh 4.1.1 | Cassandra 2.0.6 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
Use HELP for help.
cqlsh:jw_schema1> select * from users;

 user_id | age | first | last  | middle
---------+-----+-------+-------+--------
       3 |  34 |  john | smith |      a
       2 |  35 |  olee | smith |      b
       1 |  33 |   dan |   bar |      c
(3 rows)

cqlsh:jw_schema1>

All good. :)

In my humble opinion, because cassandra is built with durability and fault tolerance in mind, snapshots are arguably not strictly needed. Sure, it is fair to argue that someone could delete data accidentally, but if you can prevent that by blocking it from the front end, you can save a lot in terms of cluster backup and restore maintenance cost. If you really want to ensure the data is safe, spin up another cluster in another data centre; then the data is guaranteed safe from disaster. But hey, there is no harm learning a new tool in case you might need it later down the road.