
Tuesday, January 21, 2014

cassandra components on heap or off heap

A summary of the components in cassandra that are stored off heap.

cassandra 1.2 and later use offheap for
bloom filter
compression offset map
row cache

cassandra 2.0 and later use offheap for
bloom filter
compression offset map
partition summary
index samples
row cache

Remember, off-heap is not the same as non-heap. Off-heap means the object is stored outside the java heap allocated to the jvm, that is, in native memory.
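To illustrate what native memory means here, below is a minimal sketch in plain java (not cassandra code): a direct ByteBuffer is allocated in native memory outside the java heap, while a regular ByteBuffer is backed by a byte[] on the heap.

import java.nio.ByteBuffer;

public class OffHeapExample {
    public static void main(String[] args) {
        // a direct buffer lives in native memory, outside the java heap,
        // so it adds no heap usage and little gc pressure
        ByteBuffer offHeap = ByteBuffer.allocateDirect(1024 * 1024);

        // a heap buffer is backed by a byte[] inside the java heap
        ByteBuffer onHeap = ByteBuffer.allocate(1024 * 1024);

        System.out.println("off heap buffer is direct? " + offHeap.isDirect()); // true
        System.out.println("on heap buffer is direct?  " + onHeap.isDirect());  // false
    }
}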

In cassandra itself, for example, the bloom filter uses an off-heap allocator that is constructed like this:
public static IAllocator newOffHeapAllocator(String offheap_allocator) throws ConfigurationException
{
    // a bare class name is expanded to the default package before the allocator is constructed
    if (!offheap_allocator.contains("."))
        offheap_allocator = "org.apache.cassandra.io.util." + offheap_allocator;
    return FBUtilities.construct(offheap_allocator, "off-heap allocator");
}

As you can see, if you want to determine whether an object is stored on heap or off heap, you have to read the code. I tried to inspect cassandra 2.0.2 using jconsole, but it only shows the total on-heap and off-heap usage. The cassandra community suggests it is better to read NEWS.txt or their blog for the latest changes on whether a component moved on or off heap.
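For reference, the aggregate numbers jconsole shows can also be read programmatically over jmx. Note that this still only gives the jvm's own heap and non-heap totals, not a per-component breakdown, and it does not fully account for native allocations; a minimal sketch:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemorySummary {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();

        // heap = objects managed by the garbage collector
        MemoryUsage heap = memory.getHeapMemoryUsage();
        // non-heap = jvm internal areas (permgen/metaspace, code cache);
        // again, this is not the same thing as cassandra's off-heap structures
        MemoryUsage nonHeap = memory.getNonHeapMemoryUsage();

        System.out.println("heap used     : " + heap.getUsed() + " bytes");
        System.out.println("non-heap used : " + nonHeap.getUsed() + " bytes");
    }
}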

http://www.datastax.com/documentation/cassandra/2.0/webhelp/cassandra/operations/ops_tune_jvm_c.html
https://groups.google.com/forum/#!topic/nosql-databases/kFHS6QfZDuE
http://www.yourkit.com/docs/kb/sizes.jsp

Wednesday, November 13, 2013

cassandra 2.0 catch 101 – part2

After playing around with cassandra 2.0 for quite some time, in this article I'm gonna share with you a strange issue that I encountered: a table that I was unable to drop no matter what.

I'm using the stress tool in the cassandra package to create the column families. It seems that the keyspace and tables were created successfully. Following is the output.


Created keyspaces. Sleeping 1s for propagation.
total,interval_op_rate, interval_key_rate,latency,95th,99.9th,elapsed_time
..
..
..


So everything seems to have been created okay in cassandra.


cqlsh:system> desc keyspaces;

jw_schema1 system system_traces

cqlsh:system> use jw_schema1;
cqlsh:jw_schema1> desc tables;

Counter1 Counter3 Standard1 Super1 SuperCounter1

cqlsh:jw_schema1> desc table Counter1;

CREATE TABLE "Counter1" (
key blob,
column1 ascii,
value counter,
PRIMARY KEY (key, column1)
) WITH COMPACT STORAGE AND
bloom_filter_fp_chance=0.010000 AND
caching='KEYS_ONLY' AND
comment='' AND
dclocal_read_repair_chance=0.000000 AND
gc_grace_seconds=864000 AND
index_interval=128 AND
read_repair_chance=0.100000 AND
replicate_on_write='true' AND
populate_io_cache_on_flush='false' AND
default_time_to_live=0 AND
speculative_retry='NONE' AND
memtable_flush_period_in_ms=0 AND
compaction={'class': 'SizeTieredCompactionStrategy'} AND
compression={};

cqlsh:jw_schema1>




When selecting from or dropping any of the tables within the keyspace, things started to go wrong, and the cassandra server debug log showed nothing wrong.
cqlsh:jw_schema1> select * from Counter1;
Bad Request: unconfigured columnfamily counter1
cqlsh:jw_schema1>

DEBUG [Thrift:105] 2013-11-13 20:55:29,050 CassandraServer.java (line 1932) execute_cql3_query
DEBUG [Thrift:105] 2013-11-13 20:55:29,050 Tracing.java (line 159) request complete

cqlsh:jw_schema1> drop table Counter1;
Bad Request: Cannot drop non existing column family 'counter1' in keyspace 'jw_schema1'.
cqlsh:jw_schema1>

DEBUG [Thrift:105] 2013-11-13 20:55:59,392 CassandraServer.java (line 1932) execute_cql3_query
DEBUG [Thrift:105] 2013-11-13 20:55:59,393 Tracing.java (line 159) request complete

And the same happens using the datastax java driver (binary protocol).
private Cluster cluster;
private Session session;

public void connect(String node) {
    cluster = Cluster.builder()
            .addContactPoint(node)
            .addContactPoints("127.0.0.1")
            .withRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE)
            .withReconnectionPolicy(new ConstantReconnectionPolicy(100L))
            .build();
    session = cluster.connect("jw_schema1");

    // the drop fails with the same "non existing column family" error seen in cqlsh
    ExecutionInfo info = session.execute("DROP TABLE Counter1").getExecutionInfo();
}

 
Exception in thread "main" com.datastax.driver.core.exceptions.InvalidQueryException: Cannot drop non existing column family 'counter1' in keyspace 'jw_schema1'.
at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
at com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException(ResultSetFuture.java:271)
at com.datastax.driver.core.ResultSetFuture.getUninterruptibly(ResultSetFuture.java:187)
at com.datastax.driver.core.Session.execute(Session.java:126)
at com.datastax.driver.core.Session.execute(Session.java:77)
at foo.bar.main.SimpleClient.connect(SimpleClient.java:38)
at foo.bar.main.SimpleClient.main(SimpleClient.java:69)
Caused by: com.datastax.driver.core.exceptions.InvalidConfigurationInQueryException: Cannot drop non existing column family 'counter1' in keyspace 'jw_schema1'.
at com.datastax.driver.core.Responses$Error.asException(Responses.java:97)
at com.datastax.driver.core.ResultSetFuture$ResponseCallback.onSet(ResultSetFuture.java:122)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:217)
at com.datastax.driver.core.RequestHandler.onSet(RequestHandler.java:349)
at com.datastax.driver.core.Connection$Dispatcher.messageReceived(Connection.java:500)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:565)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:793)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:71)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:565)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:793)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:458)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:439)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:565)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:84)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:472)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:333)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)

So I'm not sure what went wrong, but I ended up dropping the keyspace as a workaround.
cqlsh:system> drop keyspace jw_schema1;
cqlsh:system>





The workaround:
cqlsh:system> desc keyspaces;

TestKeyspace system system_traces

cqlsh:system> drop keyspace TestKeyspace;
Bad Request: Cannot drop non existing keyspace 'testkeyspace'.
cqlsh:system> drop keyspace "TestKeyspace";
cqlsh:system> desc keyspaces;

system system_traces

cqlsh:system>
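My guess, and this is an assumption rather than anything the error message states, is that cql folds unquoted identifiers to lowercase, so Counter1 gets looked up as counter1, which would also explain why dropping the quoted "TestKeyspace" above worked. If that is the cause, quoting the table name should work from the java driver as well; a small sketch:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class DropQuotedTable {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("jw_schema1");

        // quoting preserves the mixed-case name created by the stress tool,
        // instead of letting cql fold it to counter1
        session.execute("DROP TABLE \"Counter1\"");

        cluster.shutdown();
    }
}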

 

Friday, October 18, 2013

How to generate murmur3 in cassandra 2.0

Reading into this doc, I got really curious about how the murmur3 hash value is generated.

So I dug into the cassandra github and found this class. It seems that cassandra 2.0 generates the token for the primary key using the method hash3_x64_128. Below is the method to get it working; just put this into any java class and see the token generated.

    

import java.nio.ByteBuffer;

import org.apache.cassandra.dht.Murmur3Partitioner.LongToken;
import org.apache.cassandra.utils.ByteBufferUtil;
import org.apache.cassandra.utils.MurmurHash;

public static LongToken genToken(String rowKey)
{
    ByteBuffer key = ByteBufferUtil.bytes(rowKey);
    // same as the partitioner: take the first 64 bits of the 128-bit murmur3 hash
    long hash = MurmurHash.hash3_x64_128(key, key.position(), key.remaining(), 0)[0];
    return new LongToken(normalize(hash));
}

public static void main(String[] args)
{
    System.out.println(genToken("jim"));
}

private static long normalize(long v)
{
    // We exclude the MINIMUM value; see getToken()
    return v == Long.MIN_VALUE ? Long.MAX_VALUE : v;
}

 


With "jim", it generated 2680261686609811218, so that should be correct. Something extra: if you use nodetool to show the token ranges, e.g. nodetool -h localhost describering jw_schema1, you can match the generated token against the ranges and see which nodes are responsible for holding that row's data.
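As a possible cross-check (my assumption being that the partitioner class can be instantiated standalone like this outside a running node), you can ask Murmur3Partitioner for the token directly and compare it with genToken:

import java.nio.ByteBuffer;

import org.apache.cassandra.dht.Murmur3Partitioner;
import org.apache.cassandra.utils.ByteBufferUtil;

public class TokenCheck {
    public static void main(String[] args) {
        Murmur3Partitioner partitioner = new Murmur3Partitioner();
        ByteBuffer key = ByteBufferUtil.bytes("jim");
        // getToken() applies the same murmur3 hashing and normalization internally,
        // so this should print the same value as genToken("jim")
        System.out.println(partitioner.getToken(key));
    }
}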

Tuesday, October 15, 2013

cassandra 2.0 catch 101 - part1 - correct cassandra Unsupported major.minor version 51.0

This is another part of the series on my journey to cassandra 2.0; if you have not read the previous post, you should read here.

As cassandra 2.0 requires jdk 7 or later, if your system has jdk 6 installed and configured, it is still possible to run cassandra 2.0 with jdk 7, that is, to make them co-exist. Download the jdk and extract it to a directory, e.g. /usr/lib/jvm, then add JAVA_HOME=/usr/lib/jvm/jdk1.7.0_04/ to /etc/default/cassandra. This exports the variable JAVA_HOME to the cassandra environment so that jdk 7 is used to start cassandra successfully.

Because the above setting applies only to the cassandra instance, we will set up another environment for the admin tools that come with it. This makes sure we don't break our existing setup. If you want the environment to apply only to yourself, create a file in your home directory, $HOME/.cassandra.in.sh. Below is the content.
JAVA_HOME=/usr/lib/jvm/jdk1.7.0_04
. /usr/share/cassandra/cassandra.in.sh

The first line is the same as before, but in the second line we source the additional environment settings for the admin tools so that they can find the java classes. With that done, you should no longer get the error Unsupported major.minor version 51.0 shown below!
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/cassandra/tools/NodeCmd : Unsupported major.minor version 51.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: org.apache.cassandra.tools.NodeCmd. Program will exit.
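As a quick sanity check of which jdk a tool is actually picking up, a trivial class like this (a hypothetical helper, not part of cassandra) can be compiled and run in the same environment:

public class WhichJava {
    public static void main(String[] args) {
        // class file version 51.0 corresponds to java 7 and 50.0 to java 6,
        // which is what the Unsupported major.minor version 51.0 error is about
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.home    = " + System.getProperty("java.home"));
    }
}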

Saturday, October 12, 2013

cassandra 2.0 catch 101

This is my experience with cassandra 2.0 and cqlsh. Reading through the datastax documentation [1], sometimes you need to play around with a new tool to understand the nature of its usage. It could be a bug, it could be just some changes; nonetheless, this is my experience reading about and experimenting with cassandra 2.0. Some bugs may be fixed or changed in later cassandra releases and thus render this article invalid, so this article is only meant for cassandra 2.0 and cqlsh 4.0.0. I will add more on my journey to cassandra 2.0.


When updating the durable_writes config of a keyspace, I got errors a few times, like below.


cqlsh:jw_schema1> alter keyspace jw_schema1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3, durable_writes : false};
Bad Request: Failed parsing statement: [alter keyspace jw_schema1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3, durable_writes : false};] reason: NullPointerException null
cqlsh:jw_schema1> alter keyspace jw_schema1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3, durable_writes : 'false'};
Bad Request: Failed parsing statement: [alter keyspace jw_schema1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3, durable_writes : 'false'};] reason: NullPointerException null
cqlsh:jw_schema1> alter keyspace jw_schema1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3, durable_writes : 0};
Bad Request: Failed parsing statement: [alter keyspace jw_schema1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3, durable_writes : 0};] reason: NullPointerException null
cqlsh:jw_schema1> alter keyspace jw_schema1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3, durable_writes : 1};
Bad Request: Failed parsing statement: [alter keyspace jw_schema1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3, durable_writes : 1};] reason: NullPointerException null
cqlsh:jw_schema1> alter keyspace jw_schema1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3, durable_writes : '1'};
Bad Request: Failed parsing statement: [alter keyspace jw_schema1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3, durable_writes : '1'};] reason: NullPointerException null
cqlsh:jw_schema1> alter keyspace jw_schema1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3, durable_writes : '0'};
Bad Request: Failed parsing statement: [alter keyspace jw_schema1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3, durable_writes : '0'};] reason: NullPointerException null
cqlsh:jw_schema1> alter keyspace jw_schema1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3, durable_writes : 'False'};
Bad Request: Failed parsing statement: [alter keyspace jw_schema1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3, durable_writes : 'False'};] reason: NullPointerException null
cqlsh:jw_schema1> alter keyspace jw_schema1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3, durable_writes : 'True'};
Bad Request: Failed parsing statement: [alter keyspace jw_schema1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3, durable_writes : 'True'};] reason: NullPointerException null
cqlsh:jw_schema1> alter keyspace jw_schema1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3, durable_writes : True};
Bad Request: Failed parsing statement: [alter keyspace jw_schema1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3, durable_writes : True};] reason: NullPointerException null
cqlsh:jw_schema1> alter keyspace jw_schema1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3, durable_writes : False};
Bad Request: Failed parsing statement: [alter keyspace jw_schema1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3, durable_writes : False};] reason: NullPointerException null


The fix is to specify durable_writes as its own option in the alter keyspace command rather than inside the replication map. It's strange that alter keyspace is not in the cqlsh help file...


cqlsh:jw_schema1> alter keyspace jw_schema1 with durable_writes = false;
cqlsh:jw_schema1> DESCRIBE KEYSPACE jw_schema1;

CREATE KEYSPACE jw_schema1 WITH replication = {
'class': 'SimpleStrategy',
'replication_factor': '3'
} AND durable_writes = 'false';


voila!
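The same change can also be made from the java driver; below is a small sketch (my assumption being that the system.schema_keyspaces table in 2.0 exposes a durable_writes column) that applies the option and reads it back:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class DurableWrites {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        // durable_writes is its own keyspace option, not part of the replication map
        session.execute("ALTER KEYSPACE jw_schema1 WITH durable_writes = false");

        // read it back from the schema table (assumed layout for cassandra 2.0)
        Row row = session.execute(
                "SELECT durable_writes FROM system.schema_keyspaces "
                + "WHERE keyspace_name = 'jw_schema1'").one();
        System.out.println("durable_writes = " + row.getBool("durable_writes"));

        cluster.shutdown();
    }
}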

[1] http://www.datastax.com/documentation/cassandra/2.0/webhelp/index.html#cassandra/configuration/configStorage_r.html#reference_ds_itw_wkz_1k
[2] http://cassandra.apache.org/doc/cql3/CQL.html#alterKeyspaceStmt