Saturday, May 3, 2014

what and why: the all time blocked count for cassandra pool FlushWriter keeps increasing

Pool Name                    Active   Pending      Completed   Blocked  All time blocked
FlushWriter                       0         0            941         0                53

In a cassandra cluster, I often noticed that the FlushWriter pool's all time blocked count keeps increasing while the other pools remain at 0. So is this something we should be concerned about?

Snippet from class ColumnFamilyStore:
/*
* maybeSwitchMemtable puts Memtable.getSortedContents on the writer executor. When the write is complete,
* we turn the writer into an SSTableReader and add it to ssTables_ where it is available for reads.
*
* There are two other things that maybeSwitchMemtable does.
* First, it puts the Memtable into memtablesPendingFlush, where it stays until the flush is complete
* and it's been added as an SSTableReader to ssTables_. Second, it adds an entry to commitLogUpdater
* that waits for the flush to complete, then calls onMemtableFlush. This allows multiple flushes
* to happen simultaneously on multicore systems, while still calling onMF in the correct order,
* which is necessary for replay in case of a restart since CommitLog assumes that when onMF is
* called, all data up to the given context has been persisted to SSTables.
*/
private static final ExecutorService flushWriter
        = new JMXEnabledThreadPoolExecutor(DatabaseDescriptor.getFlushWriters(),
                                           StageManager.KEEPALIVE,
                                           TimeUnit.SECONDS,
                                           new LinkedBlockingQueue<Runnable>(DatabaseDescriptor.getFlushQueueSize()),
                                           new NamedThreadFactory("FlushWriter"),
                                           "internal");

Just like the other stages, for example Stage.REPLICATE_ON_WRITE, FlushWriter is an instance of JMXEnabledThreadPoolExecutor, governed by two configuration parameters which you can alter in cassandra.yaml (see the snippet after the list below).

  • memtable_flush_writers : defaults based on the number of data_file_directories specified.

  • memtable_flush_queue_size : defaults to 4.
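
For reference, this is roughly how the two parameters appear in cassandra.yaml; the values below are only example values, not recommendations, so adjust them to your own hardware:

# number of memtable flush writer threads; defaults based on the number of
# data_file_directories configured
memtable_flush_writers: 1

# number of full memtables allowed to queue up for a flush writer thread
# before further writes block (this is where the all time blocked count comes from)
memtable_flush_queue_size: 4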


Whenever maybeSwitchMemtable is called, memtable.flushAndSignal() is called within it.

Notice that in Memtable.flushAndSignal(), the ExecutorService passed in is, a few classes up the constructor chain, the JMXEnabledThreadPoolExecutor built for the FlushWriter pool mentioned above. So whenever a flush task is rejected because the queue is full, the method rejectedExecution() is triggered, which eventually increases the blocked count by one.
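
To make the mechanism concrete, below is a simplified sketch (not the actual Cassandra source; class and field names are made up for illustration) of how a rejection handler in the style of JMXEnabledThreadPoolExecutor turns a full queue into a blocked count: it bumps a counter and then waits until the task fits into the queue.

// simplified illustration only; the real logic lives in Cassandra's
// DebuggableThreadPoolExecutor / JMXEnabledThreadPoolExecutor classes
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

public class BlockingFlushExecutor extends ThreadPoolExecutor
{
    // analogous to the "All time blocked" metric exposed over JMX
    private final AtomicLong totalBlocked = new AtomicLong();

    public BlockingFlushExecutor(int flushWriters, int queueSize)
    {
        super(flushWriters, flushWriters, 60, TimeUnit.SECONDS,
              new LinkedBlockingQueue<Runnable>(queueSize));
        setRejectedExecutionHandler(new RejectedExecutionHandler()
        {
            public void rejectedExecution(Runnable task, ThreadPoolExecutor executor)
            {
                // queue is full: count one "blocked" event, then wait for space
                totalBlocked.incrementAndGet();
                try
                {
                    executor.getQueue().put(task);
                }
                catch (InterruptedException e)
                {
                    throw new RejectedExecutionException(e);
                }
            }
        });
    }

    public long getTotalBlockedTasks()
    {
        return totalBlocked.get();
    }
}

The real classes also keep a currently-blocked gauge that drops back once the task finally makes it into the queue; the sketch above only keeps the cumulative count.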

So that's it; hopefully you now have an idea of what the all time blocked count for the FlushWriter pool is and why it increases. When it does increase, it is an indication that you should consider altering the two configuration parameters in the cassandra.yaml file.

Last, if you learned something and would like to contribute back, please visit our donation page. Thank you.

Friday, May 2, 2014

How often does cassandra minor compaction run and what triggers it

There are two types of compaction in cassandra: minor compaction and major compaction. Today, we are going to look into minor compaction and understand when minor compaction is kicked off.

Following are the description snippets shown when you create a column family using cassandra-cli.
- max_compaction_threshold: The maximum number of SSTables allowed before a
minor compaction is forced. Default is 32, setting to 0 disables minor
compactions.

Decreasing this will cause minor compactions to start more frequently and
be less intensive. The min_compaction_threshold and max_compaction_threshold
boundaries are the number of tables Cassandra attempts to merge together at
once.

- min_compaction_threshold: The minimum number of SSTables needed
to start a minor compaction. Default is 4, setting to 0 disables minor
compactions.

Increasing this will cause minor compactions to start less frequently and
be more intensive. The min_compaction_threshold and max_compaction_threshold
boundaries are the number of tables Cassandra attempts to merge together at
once.
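
If you want to experiment with these two thresholds, they can be changed per column family from cassandra-cli; the keyspace and column family names below are just examples, and the values shown are simply the defaults:

[default@jw_schema1] update column family users with min_compaction_threshold = 4 and max_compaction_threshold = 32;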

So minor compaction is triggered automatically by cassandra and major compaction is triggered manually via nodetool compact. But when, and what exactly, triggers minor compaction? That's where we need to trace into the codebase.

Because compaction is performed per column family, minor compaction is triggered from the class ColumnFamilyStore. Two methods in this class submit the column family to the compaction executor to perform the minor compaction.

The behaviour depends on the compaction strategy chosen for the column family; the default SizeTieredCompactionStrategy extends AbstractCompactionStrategy, and the super class starts a single thread to perform this background compaction task. It seems that this optional single-threaded task runs every five minutes.

When either of the two mentioned methods fires, the ColumnFamilyStore object is submitted to the background for the single thread to perform compaction.
/**
* Call this whenever a compaction might be needed on the given columnfamily.
* It's okay to over-call (within reason) since the compactions are single-threaded,
* and if a call is unnecessary, it will just be no-oped in the bucketing phase.
*/
public Future<Integer> submitBackground(final ColumnFamilyStore cfs)
{
    Callable<Integer> callable = new Callable<Integer>()
    {
        public Integer call() throws IOException
        {
            compactionLock.readLock().lock();
            try
            {
                if (!cfs.isValid())
                    return 0;

                boolean taskExecuted = false;
                AbstractCompactionStrategy strategy = cfs.getCompactionStrategy();
                List<AbstractCompactionTask> tasks = strategy.getBackgroundTasks(getDefaultGcBefore(cfs));
                for (AbstractCompactionTask task : tasks)
                {
                    if (!task.markSSTablesForCompaction())
                        continue;

                    taskExecuted = true;
                    try
                    {
                        task.execute(executor);
                    }
                    finally
                    {
                        task.unmarkSSTables();
                    }
                }
                // newly created sstables might have made other compactions eligible
                if (taskExecuted)
                    submitBackground(cfs);
            }
            finally
            {
                compactionLock.readLock().unlock();
            }
            return 0;
        }
    };
    return executor.submit(callable);
}

Notice that when the method getBackgroundTasks is called in submitBackground(), the min_compaction_threshold and max_compaction_threshold that you set on the column family are used to determine whether the min_compaction_threshold condition is met and to cap the task at max_compaction_threshold (see the sketch below).
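
As a rough sketch of the idea (not the actual SizeTieredCompactionStrategy code; the class and method names below are mine), the check boils down to: group sstables of similar size into buckets, only hand a bucket to the compaction executor once it holds at least min_compaction_threshold sstables, and trim it to at most max_compaction_threshold.

import java.util.ArrayList;
import java.util.List;

// illustrative sketch of the min/max threshold gate, not the actual strategy code
class ThresholdSketch
{
    static <T> List<List<T>> eligibleBuckets(List<List<T>> bucketsBySize, int minThreshold, int maxThreshold)
    {
        List<List<T>> toCompact = new ArrayList<List<T>>();
        for (List<T> bucket : bucketsBySize)
        {
            // min_compaction_threshold = 0 disables minor compaction entirely
            if (minThreshold == 0 || bucket.size() < minThreshold)
                continue;
            // never merge more than max_compaction_threshold sstables in one task
            toCompact.add(new ArrayList<T>(bucket.subList(0, Math.min(bucket.size(), maxThreshold))));
        }
        return toCompact;
    }
}

With the default thresholds, that means a minor compaction does not start on a bucket until at least 4 similarly-sized sstables exist, and it never merges more than 32 at once.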

From experience, I am not sure why datastax does not recommend major compaction via nodetool; maybe because the I/O and heap usage spikes and may impair the node's requests and responses. But for me, when the node load goes beyond, say, 500GB, there may be some stale data left in the big sstables, so it might not be such a bad idea to kickstart a major compaction if the stale data can be removed and the node load brought down.

Last but not least, if you learn something and would like to contribute back, please go to our donation page.

Sunday, April 27, 2014

code study in cassandra 1.0.8 compaction and check what actually gets removed

Previously we covered topics such as compaction via jconsole and a general study into compaction. What this article is going to focus on is: when compaction happens, what happens to the data that is marked as deleted, that is, the tombstones?

Continuing from where we left off in the previous article, in the method CompactionTask.execute(), snippet below:
AbstractCompactionIterable ci = DatabaseDescriptor.isMultithreadedCompaction()
                              ? new ParallelCompactionIterable(OperationType.COMPACTION, toCompact, controller)
                              : new CompactionIterable(OperationType.COMPACTION, toCompact, controller);
CloseableIterator<AbstractCompactedRow> iter = ci.iterator();
Iterator<AbstractCompactedRow> nni = Iterators.filter(iter, Predicates.notNull());

Calling ci.iterator() returns a new Reducer(), and this class performs the removal of the row from the cache and the sstable.

protected class Reducer extends MergeIterator.Reducer<IColumnIterator, AbstractCompactedRow>
{
    protected final List<SSTableIdentityIterator> rows = new ArrayList<SSTableIdentityIterator>();

    public void reduce(IColumnIterator current)
    {
        rows.add((SSTableIdentityIterator) current);
    }

    protected AbstractCompactedRow getReduced()
    {
        assert !rows.isEmpty();

        try
        {
            AbstractCompactedRow compactedRow = controller.getCompactedRow(new ArrayList<SSTableIdentityIterator>(rows));
            if (compactedRow.isEmpty())
            {
                controller.invalidateCachedRow(compactedRow.key);
                return null;
            }
            else
            {
                // If the row is cached, we call removeDeleted on it to have coherent query returns. However it would look
                // like some deleted columns lived longer than gc_grace + compaction. This can also free up big amount of
                // memory on long running instances
                controller.removeDeletedInCache(compactedRow.key);
            }

            return compactedRow;
        }
        finally
        {
            rows.clear();
            if ((row++ % 1000) == 0)
            {
                long n = 0;
                for (SSTableScanner scanner : scanners)
                    n += scanner.getFilePointer();
                bytesRead = n;
                throttle.throttle(bytesRead);
            }
        }
    }
}

The logic is similar, and below is the logic that removes expired columns from a standard column family.
private static void removeDeletedStandard(ColumnFamily cf, int gcBefore)
{
    Iterator<IColumn> iter = cf.iterator();
    while (iter.hasNext())
    {
        IColumn c = iter.next();
        ByteBuffer cname = c.name();
        // remove columns if
        // (a) the column itself is tombstoned or
        // (b) the CF is tombstoned and the column is not newer than it
        //
        // Note that we need the inequality below for case (a) to be strict for expiring columns
        // to work correctly -- see the comment in ExpiringColumn.isMarkedForDelete().
        if ((c.isMarkedForDelete() && c.getLocalDeletionTime() < gcBefore)
            || c.timestamp() <= cf.getMarkedForDeleteAt())
        {
            iter.remove();
        }
    }
}

So that's pretty obvious: columns and rows get removed if the conditions are satisfied.
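
One thing worth noting is where gcBefore comes from. In the background compaction path it is derived from the column family's gc_grace_seconds, roughly as sketched below (a simplification; the helper name and signature here are mine, not Cassandra's):

// a simplification of how the purge cutoff is derived in getDefaultGcBefore():
// "now" minus the column family's gc_grace_seconds, all values in seconds since the epoch
static int gcBefore(int gcGraceSeconds)
{
    return (int) (System.currentTimeMillis() / 1000) - gcGraceSeconds;
}
// removeDeletedStandard() then only drops a tombstoned column when
// column.getLocalDeletionTime() < gcBefore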

Last but not least, if you are happy reading this and learn something, please remember to donate too.

Saturday, April 26, 2014

study gc parameters in cassandra 1.0.8

Today we are going to study the GC parameters in the file cassandra-env.sh. Below are the GC parameters extracted from the cassandra 1.0.8 environment file cassandra-env.sh. Let's study them one by one: what each parameter means and what can be changed.
# GC tuning options
JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC"
JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled"
JVM_OPTS="$JVM_OPTS -XX:SurvivorRatio=8"
JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=1"
JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"

# GC logging options -- uncomment to enable
# JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
# JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
# JVM_OPTS="$JVM_OPTS -XX:+PrintHeapAtGC"
# JVM_OPTS="$JVM_OPTS -XX:+PrintTenuringDistribution"
# JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"
# JVM_OPTS="$JVM_OPTS -XX:+PrintPromotionFailure"
# JVM_OPTS="$JVM_OPTS -XX:PrintFLSStatistics=1"
# JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc-`date +%s`.log"

-XX:+UseParNewGC

Use parallel algorithm for young space collection.

-XX:+UseConcMarkSweepGC

Use Concurrent Mark-Sweep GC in the old generation

-XX:SurvivorRatio=8

Ratio of eden/survivor space size. The default value is 8

-XX:MaxTenuringThreshold=1

Max value for tenuring threshold.

-XX:CMSInitiatingOccupancyFraction=75

Percentage CMS generation occupancy to start a CMS collection cycle (a negative value means that CMSTriggerRatio is used).

-XX:+UseCMSInitiatingOccupancyOnly

Only use occupancy as a criterion for starting a CMS collection.

 

-XX:+PrintGCDetails

Print more elaborated GC info

-XX:+PrintGCDateStamps

Print date stamps at garbage collection events (e.g. 2011-09-08T14:20:29.557+0400: [GC... )

-XX:+PrintHeapAtGC

Print heap layout before and after each GC

-XX:+PrintTenuringDistribution

Print detailed demography of young space after each collection

-XX:+PrintGCApplicationStoppedTime

Print the time the application has been stopped

-XX:+PrintPromotionFailure

Print additional diagnostic information following promotion failure


-XX:PrintFLSStatistics=1

Print additional info concerning free lists


-Xloggc:<file>

Redirects GC output to file instead of console
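
If you want to confirm which of these flags are actually in effect on a running node, one simple way (assuming a JDK with the standard tools installed; the pid lookup below is just an example) is to ask the JVM directly:

pid=$(pgrep -f CassandraDaemon)
jinfo -flag CMSInitiatingOccupancyFraction $pid
jinfo -flag MaxTenuringThreshold $pid

This prints the effective value, for example -XX:CMSInitiatingOccupancyFraction=75.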

The first part of the GC tuning is geared toward which GC strategy to use in cassandra. The second part is more toward fine tuning GC logging, for example timestamps, heap layout, etc. If you want something even more challenging, I end this article with a few good links for your further reference.

http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html
http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html
http://docs.oracle.com/javase/7/docs/technotes/tools/windows/java.html
http://library.blackboard.com/ref/df5b20ed-ce8d-4428-a595-a0091b23dda3/Content/_admin_server_optimize/optimize_non_standard_jvm_arguments.htm

Last but not least, if you are happy reading this and learn something, please remember to donate too.

Friday, April 25, 2014

code dive into cassandra Stage.REPLICATE_ON_WRITE

If you are an administrator of a cassandra cluster, sometimes you may notice StatusLogger starting to flood cassandra's system.log. Below is a log snippet found in system.log. So what is this and why does it happen? Let us read into the code.
 INFO [ScheduledTasks:1] 2014-04-17 14:18:00,079 StatusLogger.java (line 65) ReplicateOnWriteStage            17        17         0

StatusLogger starts to write the state of the node's thread pools into cassandra system.log under a couple of conditions, and these entries are an indication that the node is under stress. As you may have noticed from system.log, there are many stages involved, and in this article we are going to focus on the metric Stage.REPLICATE_ON_WRITE.

What is the replicate on write stage? From the code description: "Replicate every counter update from the leader to the follower replicas. Accepts the values true and false." Aside from this description, we are going to understand this stage by studying the code.

There are 11 stages involved. When the CassandraDaemon class is kickstarted, StageManager is called and the stages are initialized. Of course, Stage.REPLICATE_ON_WRITE is one of them. A JMXConfigurableThreadPoolExecutor object configured with 32 threads and a 60 second keep alive is initialized, and when this happens the object is also registered with the MBean server.
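
As a rough sketch (not the exact StageManager source), the construction of the replicate on write stage looks much like the FlushWriter pool we saw earlier, only with 32 threads, a 60 second keep alive and an unbounded queue:

// simplified sketch of the stage construction; the real code lives in StageManager
ExecutorService replicateOnWriteStage =
    new JMXConfigurableThreadPoolExecutor(32,                                   // max concurrent replicate-on-write tasks
                                          60, TimeUnit.SECONDS,                 // keep alive for idle threads
                                          new LinkedBlockingQueue<Runnable>(),  // unbounded pending queue
                                          new NamedThreadFactory("ReplicateOnWriteStage"),
                                          "request");                           // JMX type under which the MBean is registered

Because the queue is unbounded, this stage tends to show pending tasks rather than blocked tasks when it falls behind, which is consistent with the log line above.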

Apparently the replicate on write stage is only triggered by column families of type counter, and the code snippet below is the only code that increments the replicate on write metric.
private static Runnable counterWriteTask(final IMutation mutation,
                                         final Collection<InetAddress> targets,
                                         final IWriteResponseHandler responseHandler,
                                         final String localDataCenter,
                                         final ConsistencyLevel consistency_level)
{
    return new DroppableRunnable(StorageService.Verb.MUTATION)
    {
        public void runMayThrow() throws IOException
        {
            assert mutation instanceof CounterMutation;
            final CounterMutation cm = (CounterMutation) mutation;

            // apply mutation
            cm.apply();
            responseHandler.response(null);

            // then send to replicas, if any
            targets.remove(FBUtilities.getBroadcastAddress());
            if (cm.shouldReplicateOnWrite() && !targets.isEmpty())
            {
                // We do the replication on another stage because it involves a read (see CM.makeReplicationMutation)
                // and we want to avoid blocking too much the MUTATION stage
                StageManager.getStage(Stage.REPLICATE_ON_WRITE).execute(new DroppableRunnable(StorageService.Verb.READ)
                {
                    public void runMayThrow() throws IOException, TimeoutException
                    {
                        // send mutation to other replica
                        sendToHintedEndpoints(cm.makeReplicationMutation(), targets, responseHandler, localDataCenter, consistency_level);
                    }
                });
            }
        }
    };
}

Whenever ThreadPoolExecutor executes the DroppableRunnable object, the task is executed by a thread in the thread pool executor.

The interface IExecutorMBean exposes three metrics:

  • getActiveCount

  • getCompletedTasks

  • getPendingTasks


and the interface JMXEnabledThreadPoolExecutorMBean exposes two more metrics:

  • getTotalBlockedTasks

  • getCurrentlyBlockedTasks


StatusLogger.log exposes getActiveCount, getPendingTasks and getCurrentlyBlockedTasks, hence the three columns per stage in the system.log output.
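
You do not have to wait for StatusLogger to flood the log to see these numbers; the same counters can be pulled on demand with nodetool (the host argument below is just an example), which prints, per pool, at least the active, pending and completed counts, and in newer versions the blocked counters as well:

jason@localhost:~$ nodetool -h localhost tpstats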

getActiveCount
getActiveCount is actually implemented within the class ThreadPoolExecutor. Whenever a worker is running a task, it is considered an active task and counts as one.

getCompletedTasks
getCompletedTasks is actually a wrapper around ThreadPoolExecutor.getCompletedTaskCount(). Whenever a worker finishes executing a task, it counts as one.

getTotalBlockedTasks
When the DebuggableThreadPoolExecutor object is initialized, a rejected execution handler is set. Whenever ThreadPoolExecutor rejects a command, rejectedExecution() is triggered and executed, so one rejection translates to one count.

That's about it for this article. When I studied this code and wrote this article, I was amazed at how the code is structured; it is complex. I would really recommend studying ThreadPoolExecutor.java, as the cassandra stages reference this code throughout.

Last but not least, if you are happy reading this and learn something, please remember to donate too.

Monday, April 21, 2014

Enable or disable sstable compression?

In cassandra 2.0.6, there are a few compression options for sstables; the default is LZ4Compressor. There are others such as DeflateCompressor and SnappyCompressor, or you can choose not to compress the sstables at all.

You can read more about compression in the official documentation, found here.

In this blog post, I will create two scenarios: the first scenario with compression enabled and the second without compression. This is the only difference between the two scenarios.

So I created 50 thousand insert statements with cql and inserted them by feeding the file to cqlsh. First, the schema below with LZ4Compressor compression; for the no-compression scenario, leave the value for the key sstable_compression empty (the changed line is shown after the schema).
CREATE TABLE users (
user_id text,
age int,
first text,
last text,
middle text,
PRIMARY KEY (user_id)
) WITH
bloom_filter_fp_chance=0.010000 AND
caching='KEYS_ONLY' AND
comment='storing user data' AND
dclocal_read_repair_chance=0.000000 AND
gc_grace_seconds=864000 AND
index_interval=128 AND
read_repair_chance=0.100000 AND
replicate_on_write='true' AND
populate_io_cache_on_flush='false' AND
default_time_to_live=0 AND
speculative_retry='99.0PERCENTILE' AND
memtable_flush_period_in_ms=0 AND
compaction={'class': 'SizeTieredCompactionStrategy'} AND
compression={'sstable_compression': 'LZ4Compressor'};

CREATE INDEX idxAge ON users (age);

CREATE INDEX idxLast ON users (last);
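
For the second, no-compression scenario, the only change is the compression map at the end of the CREATE TABLE statement, that is, compression={'sstable_compression': ''}. Alternatively, an existing table can be switched over with an ALTER; this is shown as a sketch and was not part of the run above:

cqlsh:jw_schema1> ALTER TABLE users WITH compression = {'sstable_compression': ''};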

jason@localhost:~$ wc -l data.cql
50000 data.cql
jason@localhost:~$ cqlsh 192.168.0.2 9160 -k jw_schema1 -f data.cql
jason@localhost:~$

So far so good; we have a total of 50 thousand rows.
cqlsh:jw_schema1> select count(*) from users limit 100000;

count
-------
50000

(1 rows)

cqlsh:jw_schema1>

I ran nodetool repair, flush, cleanup and then compact (the commands are listed below). With compression enabled, the sstable count is only 1 and the total file size in this directory is about 4.5MB.
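
The sequence was along these lines; the host and keyspace arguments are my assumptions based on the setup described earlier:

jason@localhost:~$ nodetool -h 192.168.0.2 repair jw_schema1
jason@localhost:~$ nodetool -h 192.168.0.2 flush jw_schema1
jason@localhost:~$ nodetool -h 192.168.0.2 cleanup jw_schema1
jason@localhost:~$ nodetool -h 192.168.0.2 compact jw_schema1
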
jason@localhost:/var/lib/cassandra/data/jw_schema1/users$ ls -l
total 4576
-rw-r--r-- 1 cassandra cassandra 179 Apr 15 21:02 jw_schema1-users.idxAge-jb-1-CompressionInfo.db
-rw-r--r-- 1 cassandra cassandra 599421 Apr 15 21:02 jw_schema1-users.idxAge-jb-1-Data.db
-rw-r--r-- 1 cassandra cassandra 136 Apr 15 21:02 jw_schema1-users.idxAge-jb-1-Filter.db
-rw-r--r-- 1 cassandra cassandra 1800 Apr 15 21:02 jw_schema1-users.idxAge-jb-1-Index.db
-rw-r--r-- 1 cassandra cassandra 4392 Apr 15 21:02 jw_schema1-users.idxAge-jb-1-Statistics.db
-rw-r--r-- 1 cassandra cassandra 68 Apr 15 21:02 jw_schema1-users.idxAge-jb-1-Summary.db
-rw-r--r-- 1 cassandra cassandra 79 Apr 15 21:02 jw_schema1-users.idxAge-jb-1-TOC.txt
-rw-r--r-- 1 cassandra cassandra 179 Apr 15 21:02 jw_schema1-users.idxLast-jb-1-CompressionInfo.db
-rw-r--r-- 1 cassandra cassandra 598579 Apr 15 21:02 jw_schema1-users.idxLast-jb-1-Data.db
-rw-r--r-- 1 cassandra cassandra 16 Apr 15 21:02 jw_schema1-users.idxLast-jb-1-Filter.db
-rw-r--r-- 1 cassandra cassandra 680 Apr 15 21:02 jw_schema1-users.idxLast-jb-1-Index.db
-rw-r--r-- 1 cassandra cassandra 4392 Apr 15 21:02 jw_schema1-users.idxLast-jb-1-Statistics.db
-rw-r--r-- 1 cassandra cassandra 71 Apr 15 21:02 jw_schema1-users.idxLast-jb-1-Summary.db
-rw-r--r-- 1 cassandra cassandra 79 Apr 15 21:02 jw_schema1-users.idxLast-jb-1-TOC.txt
-rw-r--r-- 1 cassandra cassandra 971 Apr 15 21:02 jw_schema1-users-jb-1-CompressionInfo.db
-rw-r--r-- 1 cassandra cassandra 2387391 Apr 15 21:02 jw_schema1-users-jb-1-Data.db
-rw-r--r-- 1 cassandra cassandra 62512 Apr 15 21:02 jw_schema1-users-jb-1-Filter.db
-rw-r--r-- 1 cassandra cassandra 938894 Apr 15 21:02 jw_schema1-users-jb-1-Index.db
-rw-r--r-- 1 cassandra cassandra 4391 Apr 15 21:02 jw_schema1-users-jb-1-Statistics.db
-rw-r--r-- 1 cassandra cassandra 6615 Apr 15 21:02 jw_schema1-users-jb-1-Summary.db
-rw-r--r-- 1 cassandra cassandra 79 Apr 15 21:02 jw_schema1-users-jb-1-TOC.txt
drwxr-xr-x 2 cassandra cassandra 4096 Apr 15 20:57 snapshots
jason@localhost:/var/lib/cassandra/data/jw_schema1/users$

Now without compression, the total file size is about 11MB. Notice that the size is almost double and the sstable count is two.
jason@localhost:/var/lib/cassandra/data/jw_schema1/users$ ls -l
total 10860
-rw-r--r-- 1 cassandra cassandra 48 Apr 15 21:23 jw_schema1-users.idxAge-jb-1-CRC.db
-rw-r--r-- 1 cassandra cassandra 687656 Apr 15 21:23 jw_schema1-users.idxAge-jb-1-Data.db
-rw-r--r-- 1 cassandra cassandra 78 Apr 15 21:23 jw_schema1-users.idxAge-jb-1-Digest.sha1
-rw-r--r-- 1 cassandra cassandra 136 Apr 15 21:23 jw_schema1-users.idxAge-jb-1-Filter.db
-rw-r--r-- 1 cassandra cassandra 1800 Apr 15 21:23 jw_schema1-users.idxAge-jb-1-Index.db
-rw-r--r-- 1 cassandra cassandra 4392 Apr 15 21:23 jw_schema1-users.idxAge-jb-1-Statistics.db
-rw-r--r-- 1 cassandra cassandra 68 Apr 15 21:23 jw_schema1-users.idxAge-jb-1-Summary.db
-rw-r--r-- 1 cassandra cassandra 79 Apr 15 21:23 jw_schema1-users.idxAge-jb-1-TOC.txt
-rw-r--r-- 1 cassandra cassandra 32 Apr 15 21:24 jw_schema1-users.idxAge-jb-2-CRC.db
-rw-r--r-- 1 cassandra cassandra 455238 Apr 15 21:24 jw_schema1-users.idxAge-jb-2-Data.db
-rw-r--r-- 1 cassandra cassandra 78 Apr 15 21:24 jw_schema1-users.idxAge-jb-2-Digest.sha1
-rw-r--r-- 1 cassandra cassandra 136 Apr 15 21:24 jw_schema1-users.idxAge-jb-2-Filter.db
-rw-r--r-- 1 cassandra cassandra 1800 Apr 15 21:24 jw_schema1-users.idxAge-jb-2-Index.db
-rw-r--r-- 1 cassandra cassandra 4393 Apr 15 21:24 jw_schema1-users.idxAge-jb-2-Statistics.db
-rw-r--r-- 1 cassandra cassandra 68 Apr 15 21:24 jw_schema1-users.idxAge-jb-2-Summary.db
-rw-r--r-- 1 cassandra cassandra 79 Apr 15 21:24 jw_schema1-users.idxAge-jb-2-TOC.txt
-rw-r--r-- 1 cassandra cassandra 48 Apr 15 21:23 jw_schema1-users.idxLast-jb-1-CRC.db
-rw-r--r-- 1 cassandra cassandra 685677 Apr 15 21:23 jw_schema1-users.idxLast-jb-1-Data.db
-rw-r--r-- 1 cassandra cassandra 79 Apr 15 21:23 jw_schema1-users.idxLast-jb-1-Digest.sha1
-rw-r--r-- 1 cassandra cassandra 16 Apr 15 21:23 jw_schema1-users.idxLast-jb-1-Filter.db
-rw-r--r-- 1 cassandra cassandra 425 Apr 15 21:23 jw_schema1-users.idxLast-jb-1-Index.db
-rw-r--r-- 1 cassandra cassandra 4392 Apr 15 21:23 jw_schema1-users.idxLast-jb-1-Statistics.db
-rw-r--r-- 1 cassandra cassandra 71 Apr 15 21:23 jw_schema1-users.idxLast-jb-1-Summary.db
-rw-r--r-- 1 cassandra cassandra 79 Apr 15 21:23 jw_schema1-users.idxLast-jb-1-TOC.txt
-rw-r--r-- 1 cassandra cassandra 32 Apr 15 21:24 jw_schema1-users.idxLast-jb-2-CRC.db
-rw-r--r-- 1 cassandra cassandra 453259 Apr 15 21:24 jw_schema1-users.idxLast-jb-2-Data.db
-rw-r--r-- 1 cassandra cassandra 79 Apr 15 21:24 jw_schema1-users.idxLast-jb-2-Digest.sha1
-rw-r--r-- 1 cassandra cassandra 16 Apr 15 21:24 jw_schema1-users.idxLast-jb-2-Filter.db
-rw-r--r-- 1 cassandra cassandra 287 Apr 15 21:24 jw_schema1-users.idxLast-jb-2-Index.db
-rw-r--r-- 1 cassandra cassandra 4393 Apr 15 21:24 jw_schema1-users.idxLast-jb-2-Statistics.db
-rw-r--r-- 1 cassandra cassandra 71 Apr 15 21:24 jw_schema1-users.idxLast-jb-2-Summary.db
-rw-r--r-- 1 cassandra cassandra 79 Apr 15 21:24 jw_schema1-users.idxLast-jb-2-TOC.txt
-rw-r--r-- 1 cassandra cassandra 288 Apr 15 21:23 jw_schema1-users-jb-1-CRC.db
-rw-r--r-- 1 cassandra cassandra 4612770 Apr 15 21:23 jw_schema1-users-jb-1-Data.db
-rw-r--r-- 1 cassandra cassandra 71 Apr 15 21:23 jw_schema1-users-jb-1-Digest.sha1
-rw-r--r-- 1 cassandra cassandra 37880 Apr 15 21:23 jw_schema1-users-jb-1-Filter.db
-rw-r--r-- 1 cassandra cassandra 564480 Apr 15 21:23 jw_schema1-users-jb-1-Index.db
-rw-r--r-- 1 cassandra cassandra 4391 Apr 15 21:23 jw_schema1-users-jb-1-Statistics.db
-rw-r--r-- 1 cassandra cassandra 3984 Apr 15 21:23 jw_schema1-users-jb-1-Summary.db
-rw-r--r-- 1 cassandra cassandra 79 Apr 15 21:23 jw_schema1-users-jb-1-TOC.txt
-rw-r--r-- 1 cassandra cassandra 192 Apr 15 21:24 jw_schema1-users-jb-2-CRC.db
-rw-r--r-- 1 cassandra cassandra 3015018 Apr 15 21:24 jw_schema1-users-jb-2-Data.db
-rw-r--r-- 1 cassandra cassandra 71 Apr 15 21:24 jw_schema1-users-jb-2-Digest.sha1
-rw-r--r-- 1 cassandra cassandra 24648 Apr 15 21:24 jw_schema1-users-jb-2-Filter.db
-rw-r--r-- 1 cassandra cassandra 374414 Apr 15 21:24 jw_schema1-users-jb-2-Index.db
-rw-r--r-- 1 cassandra cassandra 4391 Apr 15 21:24 jw_schema1-users-jb-2-Statistics.db
-rw-r--r-- 1 cassandra cassandra 2672 Apr 15 21:24 jw_schema1-users-jb-2-Summary.db
-rw-r--r-- 1 cassandra cassandra 79 Apr 15 21:24 jw_schema1-users-jb-2-TOC.txt

With the current hardware setup, which is loaded, the request sometimes gets an rpc timeout and sometimes the result is returned when sstable compression is enabled. However, without compression on the sstables, all the requests executed time out. Below is the query performed via cqlsh.
cqlsh:jw_schema1> select * from users where age > 95 and last = 'smith' allow filtering;
Request did not complete within rpc_timeout.

Apparently enabling compression does improve read speed and save disk space.

Saturday, April 19, 2014

Introduction to CRUD on cql 3.0 data type

In a previous article, we covered basic data definition language, and in this article, we are going to cover data manipulation language. With cql3, the composite data types are pretty interesting compared to sql. The official documentation is available here.

We have covered all the data types in cql 3.0 except counter, and now we will create all the available data types coexisting within one table. Let's do it.
CREATE TABLE dataType (
id uuid,
name ascii,
amount bigint,
binary blob,
isSingle boolean,
lamp decimal,
salary double,
works float,
ip inet,
car int,
email set<text>,
kidsAge map<text,int>,
places list<text>,
description text,
lastUpdate timestamp,
myTimeUUID timeuuid,
longDescription varchar,
spending varint,
PRIMARY KEY (id)
);

cqlsh:jw_schema1> desc table datatype;

CREATE TABLE datatype (
id uuid,
amount bigint,
binary blob,
car int,
description text,
email set<text>,
ip inet,
issingle boolean,
kidsage map<text, int>,
lamp decimal,
lastupdate timestamp,
longdescription text,
mytimeuuid timeuuid,
name ascii,
places list<text>,
salary double,
spending varint,
works float,
PRIMARY KEY (id)
) WITH
bloom_filter_fp_chance=0.010000 AND
caching='KEYS_ONLY' AND
comment='' AND
dclocal_read_repair_chance=0.000000 AND
gc_grace_seconds=864000 AND
index_interval=128 AND
read_repair_chance=0.100000 AND
replicate_on_write='true' AND
populate_io_cache_on_flush='false' AND
default_time_to_live=0 AND
speculative_retry='99.0PERCENTILE' AND
memtable_flush_period_in_ms=0 AND
compaction={'class': 'SizeTieredCompactionStrategy'} AND
compression={'sstable_compression': 'LZ4Compressor'};

Looks good, the table is created; let's continue by inserting data into this table!
insert into datatype (id, name, amount, binary, issingle, lamp, salary, works, ip, car, email, kidsage, places, description, lastUpdate, myTimeUUID, longDescription, spending) values (62c36092-82a1-3a00-93d1-46196ee77204, 'jason wee', 123, 0xff, false, 10, 100000, 1, '192.168.0.1', 3, {'a@b.com', 'c@d.com'}, {'juniorA':1, 'juniorB':2}, ['kuala lumpur', 'petaling jaya', 'kepong'], 'hello world', '2014-04-15 00:00:00', maxTimeuuid('2014-04-15 00:05+0000'), 'this is a longer hello world', 123);

cqlsh:jw_schema1> select * from datatype;

id | amount | binary | car | description | email | ip | issingle | kidsage | lamp | lastupdate | longdescription | mytimeuuid | name | places | salary | spending | works
--------------------------------------+--------+--------+-----+-------------+------------------------+-------------+----------+------------------------------+------+--------------------------+------------------------------+--------------------------------------+-----------+---------------------------------------------+--------+----------+-------
62c36092-82a1-3a00-93d1-46196ee77204 | 123 | 0xff | 3 | hello world | {'a@b.com', 'c@d.com'} | 192.168.0.1 | False | {'juniorA': 1, 'juniorB': 2} | 10 | 2014-04-15 00:00:00+0800 | this is a longer hello world | 95fc050f-c431-11e3-7f7f-7f7f7f7f7f7f | jason wee | ['kuala lumpur', 'petaling jaya', 'kepong'] | 1e+05 | 123 | 1

(1 rows)

Goodies, all the data was inserted. Let's try updates; we start with a single update to one column.
cqlsh:jw_schema1> update datatype set amount = 456 where id = 62c36092-82a1-3a00-93d1-46196ee77204;
cqlsh:jw_schema1> select amount from datatype where id = 62c36092-82a1-3a00-93d1-46196ee77204;

amount
--------
456

(1 rows)

Looks good too! Now we will update three fields.
cqlsh:jw_schema1> update datatype set binary = 0x68656c6c6f20776f726c64, car = 6, description = 'changed description' where id = 62c36092-82a1-3a00-93d1-46196ee77204;
cqlsh:jw_schema1> select binary, car, description from datatype where id = 62c36092-82a1-3a00-93d1-46196ee77204;

binary | car | description
--------------------------+-----+---------------------
0x68656c6c6f20776f726c64 | 6 | changed description

(1 rows)

cqlsh:jw_schema1>

Looks good! The binary data type is always prefixed with 0x; it is the hex representation of 'hello world'. Let's now change the composite data types.
cqlsh:jw_schema1> update datatype set email = {'e@f.com'} where id = 62c36092-82a1-3a00-93d1-46196ee77204;
cqlsh:jw_schema1> select email from datatype;

email
-------------
{'e@f.com'}

(1 rows)

Hmm... the email field values got overwritten. So how do we append? Concatenate them! :-)
cqlsh:jw_schema1> update datatype set email = email + {'a@b.com', 'c@d.com'} where id = 62c36092-82a1-3a00-93d1-46196ee77204;
cqlsh:jw_schema1> select email from datatype;

email
-----------------------------------
{'a@b.com', 'c@d.com', 'e@f.com'}

(1 rows)

Moving on, update the boolean and IP (inet) data types.
cqlsh:jw_schema1> update datatype set ip = 'a.b.c.d', issingle = True where id = 62c36092-82a1-3a00-93d1-46196ee77204;
Bad Request: unable to make inetaddress from 'a.b.c.d'
cqlsh:jw_schema1> update datatype set ip = '255.255.255.255', issingle = True where id = 62c36092-82a1-3a00-93d1-46196ee77204;
cqlsh:jw_schema1> update datatype set ip = '255.255.255.255', issingle = Trued where id = 62c36092-82a1-3a00-93d1-46196ee77204;
Bad Request: line 1:61 no viable alternative at input 'where'
cqlsh:jw_schema1> update datatype set ip = '255.255.255.255', issingle = True where id = 62c36092-82a1-3a00-93d1-46196ee77204;

cqlsh:jw_schema1> select ip,issingle from datatype;

ip | issingle
-----------------+----------
255.255.255.255 | True

(1 rows)

Simple validation of the IP address and boolean data types is enforced. Let's change the map now.
cqlsh:jw_schema1> update datatype set kidsage = {'juniorC':3} where  id = 62c36092-82a1-3a00-93d1-46196ee77204;
cqlsh:jw_schema1> select kidsage from datatype;

kidsage
----------------
{'juniorC': 3}

(1 rows)

cqlsh:jw_schema1> update datatype set kidsage = kidsage + {'juniorA':1, 'juniorB':2} where id = 62c36092-82a1-3a00-93d1-46196ee77204;
cqlsh:jw_schema1> select kidsage from datatype;

kidsage
--------------------------------------------
{'juniorA': 1, 'juniorB': 2, 'juniorC': 3}

(1 rows)

Exactly like the set behavior: if you need to append data, you concatenate using the plus sign.
cqlsh:jw_schema1> update datatype set lamp = 12.34, lastupdate = '2014-04-16 20:00', longdescription = 'this is a long long long hello world', mytimeuuid = maxTimeuuid('2014-04-16'), name = 'john smith' where  id = 62c36092-82a1-3a00-93d1-46196ee77204;
cqlsh:jw_schema1> select lamp, lastupdate, longdescription, mytimeuuid, name from datatype;

lamp | lastupdate | longdescription | mytimeuuid | name
-------+--------------------------+--------------------------------------+--------------------------------------+------------
12.34 | 2014-04-16 20:00:00+0800 | this is a long long long hello world | ff72270f-c4b6-11e3-7f7f-7f7f7f7f7f7f | john smith

(1 rows)

cqlsh:jw_schema1>

Everything looks good: a timestamp can be provided down to the hour, longer text works, and a timeuuid can be built from just a date; everything seems fine.
cqlsh:jw_schema1> update datatype set places = places + ['cheras'], salary = 985621.35, spending = 12355, works = 89.36 where  id = 62c36092-82a1-3a00-93d1-46196ee77204;
cqlsh:jw_schema1> select places, salary, spending, works from datatype;

places | salary | spending | works
-------------------------------------------------------+------------+----------+-------
['kuala lumpur', 'petaling jaya', 'kepong', 'cheras'] | 9.8562e+05 | 12355 | 89.36

(1 rows)

cqlsh:jw_schema1>

Okay, we have pretty much covered updating all the data types. Let's remove this one row.
cqlsh:jw_schema1> delete amount,binary,car,description,email,ip,issingle,kidsage,lamp,lastupdate,longdescription,mytimeuuid from datatype where id = 62c36092-82a1-3a00-93d1-46196ee77204;
cqlsh:jw_schema1> select * from datatype;

id | amount | binary | car | description | email | ip | issingle | kidsage | lamp | lastupdate | longdescription | mytimeuuid | name | places | salary | spending | works
--------------------------------------+--------+--------+------+-------------+-------+------+----------+---------+------+------------+-----------------+------------+------------+-------------------------------------------------------+------------+----------+-------
62c36092-82a1-3a00-93d1-46196ee77204 | null | null | null | null | null | null | null | null | null | null | null | null | john smith | ['kuala lumpur', 'petaling jaya', 'kepong', 'cheras'] | 9.8562e+05 | 12355 | 89.36

(1 rows)

Pretty interesting, we can delete columns within a row. The data is set to null.
cqlsh:jw_schema1> delete from datatype where id = 62c36092-82a1-3a00-93d1-46196ee77204;
cqlsh:jw_schema1> select * from datatype;

(0 rows)

Now we have deleted everything.

A few notes before we end:

  • looks like identifiers are case insensitive; that is, when we created the table with the name dataType, it was stored as datatype (see the snippet after this list for how to preserve case).

  • the composite data types are definitely nice to have, as we don't have to join across a few tables.

  • we can also delete a few columns or we can delete the entire row.
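
On the first point, a quick sketch (not part of the exercise above): if you really want to preserve case, double-quote the identifier when creating it, and then quote it the same way in every later statement.

cqlsh:jw_schema1> CREATE TABLE "dataType2" (id uuid PRIMARY KEY, name text);
cqlsh:jw_schema1> select name from "dataType2";   -- must be quoted here too
cqlsh:jw_schema1> select name from dataType2;     -- unquoted, this would look for the table datatype2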