Friday, May 22, 2015

Learning elasticsearch percolator

Today, we are going to learn the elasticsearch percolator. But first, what's a percolator? An excerpt from Wikipedia:

A coffee percolator is a type of pot used to brew coffee by continually cycling the boiling or nearly boiling brew through the grounds using gravity until the required strength is reached.
That is the coffee percolator; as for elasticsearch's percolator:
The percolator allows to register queries against an index, and then send percolate requests which include a doc, and getting back the queries that match on that doc out of the set of registered queries. 
Think of it as the reverse operation of indexing and then searching. Instead of sending docs, indexing them, and then running queries. One sends queries, registers them, and then sends docs and finds out which queries match that doc.

If that sounds a little abstract, let's dip our hands into the water and experiment with the elasticsearch percolator. Start by creating an index.

[user@localhost ~]$ curl -XPUT 'localhost:9200/test?pretty'
{
 "ok" : true,
 "acknowledged" : true
}

Then we register a percolator query.
[user@localhost ~]$ curl -XPUT 'localhost:9200/_percolator/test/kuku?pretty' -d '{ "query" : { "term" : { "field1" : "value1" } } }'
{
 "ok" : true,
 "_index" : "_percolator",
 "_type" : "test",
 "_id" : "kuku",
 "_version" : 1
}

Now we percolate a document against the registered queries. Note that we append _percolate to the URL; the document is only percolated, not indexed.

[user@localhost ~]$ curl -XGET 'localhost:9200/test/type1/_percolate?pretty' -d '{ "doc" : { "field1" : "value1" } }'
{
 "ok" : true,
 "matches" : [ "kuku" ]
}

So we get back the matching query kuku when we percolate a document whose field1 equals value1. Another way is to percolate while actually indexing, as shown below: add the percolate parameter to the index request, and use an asterisk to match against all registered percolator queries.

[user@localhost ~]$ curl -XPUT 'localhost:9200/test/type1/1?percolate=*&pretty' -d ' { "field1" : "value1" }'
{
 "ok" : true,
 "_index" : "test",
 "_type" : "type1",
 "_id" : "1",
 "_version" : 2,
 "matches" : [ "kuku" ]
}

So yes, another match. That's cool! But what if we index and restrict percolation to registered queries tagged with color green?

[user@localhost ~]$ curl -XPUT 'localhost:9200/test/type1/1?percolate=color:green&pretty' -d '{ "field1" : "value1", "field2" : "value2" }'
{
 "ok" : true,
 "_index" : "test",
 "_type" : "type1",
 "_id" : "1",
 "_version" : 3,
 "matches" : [ ]
}

There is no match. The value of the percolate parameter is itself a query that is run against the registered percolator documents, and only the queries whose documents match it are evaluated; our kuku document carries no color field, so no queries were selected.
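To make the color filter actually select something, a percolator document can carry plain metadata fields next to its query. A sketch of how that might look; the id green-kuku and the color field are made up for illustration:

[user@localhost ~]$ curl -XPUT 'localhost:9200/_percolator/test/green-kuku?pretty' -d '{ "color" : "green", "query" : { "term" : { "field1" : "value1" } } }'

With such a document registered, the same percolate=color:green request above would return green-kuku in its matches. Now let's index entirely different content, to see whether it still matches the kuku percolator we registered at the start.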

[user@localhost ~]$ curl -XPUT 'localhost:9200/test/type1/1?percolate=*&pretty' -d '{ "field1" : "value33", "field2" : "value2" }'
{
 "ok" : true,
 "_index" : "test",
 "_type" : "type1",
 "_id" : "1",
 "_version" : 7,
 "matches" : [ ]
}

So there is no match. This is pretty cool: you can pre-register a few percolator queries, and whenever an interesting document comes in (is indexed), any matching queries show up in the output.

Instead of querying indexed data, with the percolator you get the matching queries at indexing time. Something very cool.

Sunday, May 10, 2015

My journey and experience on upgrading apache cassandra from version 1.0.12 to 1.1.12

If you have read my previous post on apache cassandra upgrades, this is another journey, a major upgrade of apache cassandra from version 1.0 to 1.1. In this article, I will share my experience of upgrading cassandra from version 1.0.12 to 1.1.12.

The sstable version used by cassandra 1.0.12 is hd, and you should ensure that the sstables on all nodes are at version hd before proceeding with the upgrade to a newer version of cassandra.

First, let's read some highlights of cassandra 1.1:

  • api version 19.33.0.

  • new files cassandra-rackdc.properties and commitlog_archiving.properties.

  • new directory structure and new filenames for sstables.

  • more features/improvements to nodetool: compactionstats now shows remaining time, cleanup calculates the exact size it requires, compaction can now be stopped, new commands such as rangekeysample and getsstables, repair prints its progress, etc.

  • global key and row caches.

  • cql 3.0 beta.

  • schema change for caching in cassandra.

  • libthrift version 0.7.0.

  • new sstable version hf.

  • the default compressor becomes the snappy compressor.

  • a lot of improvements to the leveled compaction strategy.

  • the sliced_buffer_size_in_kb option has been removed from the cassandra.yaml configuration file (this option was a no-op since 1.0).

  • thread stack size increased to 160k.

  • added the jvm flag UseTLAB to improve read speed.

As this is a newer version of cassandra than the previous one, it is always good to set up a test node so you can play around and get familiar with it before actually doing the upgrade. With this new node you can also quickly test your application client, writing and/or reading to the test cassandra node. It is also recommended to do some load tests to see whether the results are what you expected.

If you want to be extremely careful about the upgrade, reading the code changes between the versions is always recommended. There are huge differences between these two versions, so you should split the reading into chunks as small as possible. You can learn a lot from experienced coders if you spend time reading their code, and you can learn new technology too. It is a dauntingly huge task, but if you are willing to spend some time reading, the benefits are too numerous to even describe here.

If you upgrade from 1.0.12 to 1.1.12, cassandra 1.1 is smart enough to move the sstables into the new directory structure, which eases your job: you do not need to move the sstables yourself. When the new cassandra 1.1.12 starts up, it will move them for you.

You might therefore want to prepare the configuration files for your cluster environment beforehand, for example cassandra.yaml, cassandra-env.sh and cassandra.in.sh. By doing this you shorten the upgrade and make fewer errors, because you are not editing configuration during the upgrade; an upgrade script will symlink the prepared files for you. So spend some time writing upgrade and downgrade scripts for the production cluster, and test them.

Because the upgrade will take time (a long time, depending on how many nodes are in the cluster) and the process will tire you out (remember, there will be post-upgrade issues to deal with), I suggest you create an upgrade script to handle it. The configuration you prepared earlier is symlinked automatically by this script. Doing this reduces risk factors such as human error, and on a production cluster you will NOT want to risk anything; cut the risk to the minimum possible.
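As an illustration, here is a minimal sketch of such an upgrade script; the paths, init script and configuration directory below are made up for this example, so adapt everything to your own environment:

#!/bin/bash
# upgrade-cassandra.sh - illustrative sketch only; test before production use.
set -e

NEW_VERSION=1.1.12
INSTALL_DIR=/opt/apache-cassandra-$NEW_VERSION
CONF_DIR=/opt/cassandra-conf-$NEW_VERSION   # configuration prepared beforehand

# snapshot and drain before stopping the old node
nodetool -h localhost snapshot
nodetool -h localhost drain

# stop the old cassandra process
/etc/init.d/cassandra stop

# symlink the prepared configuration files into the new install
ln -sfn $CONF_DIR/cassandra.yaml   $INSTALL_DIR/conf/cassandra.yaml
ln -sfn $CONF_DIR/cassandra-env.sh $INSTALL_DIR/conf/cassandra-env.sh

# point the current symlink at the new version and start it
ln -sfn $INSTALL_DIR /opt/cassandra
/opt/cassandra/bin/cassandra

# watch the log during startup (ctrl-c when satisfied)
tail -f /var/log/cassandra/system.log

A matching downgrade script is the same idea, with the symlinks pointed back at the old version.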

There is official upgrade documentation at datastax, but because your cluster environment might be different, you may want to write your own upgrade steps based on the official documentation and have them peer reviewed so you cover as much as possible. Best if your peers also test them and raise questions you might not have thought of.

If you are using a monitoring system such as opscenter, spm, jconsole, or your own monitoring system, you want to check whether it supports the newer version of cassandra.

The per column family key cache and row cache have been replaced with a global key cache and a global row cache respectively. These global cache settings can be found in the cassandra.yaml file; if you leave them at the default, you get a 1 million entry key cache by default. Below are some new parameters in cassandra 1.1 (a yaml sketch follows the lists):

  • populate_io_cache_on_flush

  • key_cache_size_in_mb

  • key_cache_save_period

  • row_cache_size_in_mb

  • row_cache_save_period

  • row_cache_provider

  • commitlog_segment_size_in_mb

  • trickle_fsync

  • trickle_fsync_interval_in_kb

  • internode_authenticator

and below are the configuration options that were removed:

  • sliced_buffer_size_in_kb

  • thrift_max_message_length_in_mb
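To make these concrete, here is a sketch of how the new global cache settings might look in cassandra.yaml; the values are purely illustrative, not recommendations:

# global cache settings in cassandra.yaml (illustrative values only)
key_cache_size_in_mb: 100                      # global key cache size
key_cache_save_period: 14400                   # seconds between key cache saves to disk
row_cache_size_in_mb: 0                        # 0 disables the global row cache
row_cache_save_period: 0
row_cache_provider: SerializingCacheProvider   # serializing (off-heap capable) provider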

For the upgrade in production, these steps were taken:

Pre-upgrade, applying to all nodes in the cluster:
* stop any repair or cleanup on all cassandra nodes, and make sure no streaming is happening. Streaming happens when a node bootstraps or when you rebuild a node.

Upgrade steps:
1. download cassandra 1.1.12 and verify the binary is not corrupted.
2. extract the compressed tarball.
3. nodetool snapshot.
4. nodetool drain.
5. stop cassandra if it has not stopped.
6. symlink the new configuration files.
7. start cassandra 1.1.12.
8. monitor cassandra system.log.
9. check the monitoring system.

If everything looks okay on the first node, it is best to do two nodes next, and then continue through the rest of the nodes in rolling upgrade fashion. After you migrate, you might also notice there are 3 additional column families in cassandra 1.1.

cassandra 1.0 system keyspace has a total of 7 column families:

  • HintsColumnFamily

  • IndexInfo

  • LocationInfo

  • Migrations

  • NodeIdInfo

  • Schema

  • Versions

cassandra 1.1 system keyspace has a total of 10 column families:

  • HintsColumnFamily

  • IndexInfo

  • LocationInfo

  • Migrations

  • NodeIdInfo

  • Schema

  • schema_columnfamilies

  • schema_columns

  • schema_keyspaces

  • Versions

If you are using the leveled compaction strategy, the sstables need to be scrubbed accordingly; there are nodetool scrub and the offline sstablescrub for this job. If you have defined a column family using the counter type, you should upgrade the sstables using nodetool upgradesstables.

That's it, and if you need professional services for this, please contact me and I will gladly provide professional advice and/or services.

Saturday, May 9, 2015

Light walkthrough on Java Execution Time Measurement Library (JETM)

Today, let's learn a java library: the Java Execution Time Measurement Library, or JETM. What is JETM?

From the official site:
A small and free library, that helps locating performance problems in existing Java applications.

JETM enables developers to track down performance issues on demand, either programmatic or declarative with minimal impact on application performance, even in production.

jetm is pretty cool and has a lot of features.

You can follow the tutorial trail here. The following code is taken from one of the tutorials, with minor modifications.
import etm.core.configuration.BasicEtmConfigurator;
import etm.core.configuration.EtmManager;
import etm.core.monitor.EtmMonitor;
import etm.core.monitor.EtmPoint;
import etm.core.renderer.SimpleTextRenderer;

public class BusinessService {

    private static final EtmMonitor etmMonitor = EtmManager.getEtmMonitor();

    public void someMethod() {
        // create a measurement point for this method
        EtmPoint point = etmMonitor.createPoint("BusinessService:someMethod");

        try {
            Thread.sleep((long) (10d * Math.random()));
            nestedMethod();
        } catch (InterruptedException e) {
            // ignored for this demo
        } finally {
            // always collect, even when an exception occurs
            point.collect();
        }
    }

    public void nestedMethod() {
        EtmPoint point = etmMonitor.createPoint("BusinessService:nestedMethod");

        try {
            Thread.sleep((long) (15d * Math.random()));
        } catch (InterruptedException e) {
            // ignored for this demo
        } finally {
            point.collect();
        }
    }

    public static void main(String[] args) {
        // configure nested (true) measurement points, then start the monitor
        BasicEtmConfigurator.configure(true);
        etmMonitor.start();

        BusinessService bizz = new BusinessService();
        bizz.someMethod();
        bizz.someMethod();
        bizz.someMethod();
        bizz.someMethod();
        bizz.nestedMethod();

        // render the aggregated measurements to stdout
        etmMonitor.render(new SimpleTextRenderer());

        etmMonitor.stop();
    }
}

Hit the run button in eclipse.
EtmMonitor info [INFO] JETM 1.2.3 started.
|--------------------------------|---|---------|-------|--------|--------|
| Measurement Point              | # | Average |  Min  |  Max   | Total  |
|--------------------------------|---|---------|-------|--------|--------|
| BusinessService:nestedMethod   | 1 |   4.121 | 4.121 |  4.121 |  4.121 |
|--------------------------------|---|---------|-------|--------|--------|
| BusinessService:someMethod     | 4 |  12.611 | 6.196 | 16.347 | 50.442 |
|   BusinessService:nestedMethod | 4 |   5.381 | 0.017 | 10.194 | 21.523 |
|--------------------------------|---|---------|-------|--------|--------|
EtmMonitor info [INFO] Shutting down JETM.

So we see that nestedMethod executed once on its own and four times via someMethod. The results show the minimum and maximum execution times along with the average; the last column shows the total. Pretty neat for a small java library.

Friday, May 8, 2015

Elasticsearch no node exception happened in tomcat web container

If you ever get a stack trace like the one below in your web container log file and wonder how to solve it, then read on. But first, a little background: an elasticsearch 0.90 cluster, with a client running in a tomcat web container using the elasticsearch java transport client. Both server and client run the same elasticsearch version and the same java version.
16.Feb 6:21:30,830 ERROR WebAppTransportClient [put]: error
org.elasticsearch.client.transport.NoNodeAvailableException: No node available
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:212)
at org.elasticsearch.client.transport.support.InternalTransportClient.execute(InternalTransportClient.java:106)
at org.elasticsearch.client.support.AbstractClient.index(AbstractClient.java:84)
at org.elasticsearch.client.transport.TransportClient.index(TransportClient.java:316)
at org.elasticsearch.action.index.IndexRequestBuilder.doExecute(IndexRequestBuilder.java:324)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:85)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:59)
at com.example.elasticsearch.WebAppTransportClient.put(WebAppTransportClient.java:258)
at com.example.elasticsearch.WebAppTransportClient.put(WebAppTransportClient.java:307)
at com.example.threadpool.TaskThread.run(TaskThread.java:38)
at java.lang.Thread.run(Thread.java:662)

This exception disappears once the web container is restarted, but restarting the webapp that often is not a good solution in production. I did some research online and gathered the following information:

* The default number of channels in each of these class are configured with the configuration prefix of transport.connections_per_node.
https://www.found.no/foundation/elasticsearch-networking/

* If you see NoNodeAvailableException you may have hit a connect timeout of the client. Connect timeout is 30 secs IIRC.
https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/elasticsearch/VyNpCs17aTA/CcXkYvVMYWAJ

* You can set org.elasticsearch.client.transport to TRACE level in your logging configuration (on the client side) to see the failures it has (to connect for example). For more information, you can turn on logging on org.elasticsearch.client.transport.
https://groups.google.com/forum/#!topic/elasticsearch/Mt2x4d5BCGI

* This means that you started to get disconnections between the client (transport) and the server. It will try and reconnect automatically, and possibly manages to do it. For more information, you can turn on logging on org.elasticsearch.client.transport.
* Can you try and increase the timeout and see how it goes? Set client.transport.ping_timeout in the settings you pass to the TransportClient to 10s for example.
* We had the same problem. reason: The application server uses a older version of log4j than ES needed.
http://elasticsearch-users.115913.n3.nabble.com/No-node-available-Exception-td3920119.html

* The correct method is to add the known host addresses with addTransportAddresses() and afterwards check the connectedNodes() method. If it returns empty list, no nodes could be found.
https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/elasticsearch/ceH3UIy14jM/XJSFKd8kAXEJ

* the most common case for NoNodeAvailable is the regular pinging that the transport client does fails to do it, so no nodes end up as the list of nodes that the transport client uses. If you will set client.transport (or org.elasticsearch.client.transport if running embedded) to TRACE, you will see the pinging effort and if it failed or not (and the reason for the failures). This might get us further into trying to understand why it happens.
* .put("client.transport.ping_timeout", pingTimeout)
* .put("client.transport.nodes_sampler_interval", pingSamplerInterval).build();
https://groups.google.com/forum/#!msg/elasticsearch/9aSkB0AVrHU/_4kDkjAFKuYJ

* this has nothing to do with migration errors. Your JVM performs a very long GC of 9 seconds which exceeds the default ping timeout of 5 seconds, so ES dropped the connection ,assuming your JVM is just too busy. Try again if you can reproduce it. If yes, increase the timeout to something like 10 seconds, or consider to update your Java version.
http://elasticsearch-users.115913.n3.nabble.com/Migration-errors-0-20-1-to-0-90-td4035165.html

* During long GC the JVM is somehow suspended. So your client can not see it anymore.
http://grokbase.com/t/gg/elasticsearch/136fw0hppp/transport-client-ping-timeout-no-node-available-exception

* You wrote that you have a 0.90.9 cluster but you added 0.90.0 jars to the client. Is that correct?
* Please check:
*
* if your cluster nodes and client node is using exactly the same JVM
* if your cluster and client use exactly the same ES version
* if your cluster and client use the same cluster name
* reasons outside ES: IP blocking, network reachability, network interfaces, IPv4/IPv6 etc.
* Then you should be able to connect with TransportClient.

https://groups.google.com/forum/#!msg/elasticsearch/fYmKjGywe8o/z9Ci5L5WjUAJ

So I tried all the options mentioned, and the problem was solved by adding sniff to the transport client settings. For more information, read here.
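For reference, a minimal sketch of what that settings change might look like on a 0.90 transport client; the cluster name, host and port here are made up for illustration:

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class ClientFactory {

    public static TransportClient create() {
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", "mycluster")             // must match the server's cluster name
                .put("client.transport.sniff", true)          // discover and keep sampling cluster nodes
                .put("client.transport.ping_timeout", "10s")  // raise from the 5s default
                .build();
        // seed with one known node; sniffing finds the rest
        return new TransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
    }
}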

I hope this will solve your problem too.

Sunday, April 26, 2015

Benchmarking unigine heaven on debian

In this article we are trying something different: a good buddy of mine asked me to do a graphics benchmark on a linux system. So let's roll. Start by downloading the benchmark application at https://unigine.com/products/heaven/download/

The benchmark application is about 290MB in size, so while waiting for the download to complete, you should probably check whether your graphics card has a 3d driver installed and enabled. You can check by running the command glxgears in a terminal; see the screenshot below.

screenshot_glxgears

Make sure the gears window pops up; if it does not, you will need to solve the problem shown in the terminal. Once the benchmark application is downloaded, unpack and run it. See below.
user@localhost:~$ sh Unigine_Heaven-4.0-1.run 
Creating directory Unigine_Heaven-4.0
Verifying archive integrity... All good.
Uncompressing Unigine Heaven Benchmark.............................................................................
Unigine Heaven Benchmark installation is completed. Launch heaven to run it
user@localhost:~$ cd Unigine_Heaven-4.0
user@localhost:~/Unigine_Heaven-4.0$ ls
total 16K
-rwxr-xr-x 1 jason jason 278 Feb 13 2013 heaven
drwxr-xr-x 4 jason jason 4.0K Feb 13 2013 bin
drwxr-xr-x 2 jason jason 4.0K Feb 13 2013 documentation
drwxr-xr-x 3 jason jason 4.0K Feb 13 2013 data
user@localhost:~/Unigine_Heaven-4.0$ ./heaven
Loading "/home/user/Unigine_Heaven-4.0/bin/../data/heaven_4.0.cfg"...
Loading "libGPUMonitor_x64.so"...
Loading "libGL.so.1"...
Loading "libopenal.so.1"...
Set 1920x1080 fullscreen video mode
Set 1.00 gamma value
Unigine engine http://unigine.com/
Binary: Linux 64bit GCC 4.4.5 Release Feb 13 2013 r11274
Features: OpenGL OpenAL XPad360 Joystick Flash Editor
App path: /home/user/Unigine_Heaven-4.0/bin/
Data path: /home/user/Unigine_Heaven-4.0/data/
Save path: /home/user/.Heaven/

---- System ----
System: Linux 3.9-1-amd64 x86_64
CPU: Intel(R) Core(TM) i3 CPU 380 @ 2.53GHz 2526MHz MMX SSE SSE2 SSE3 SSSE3 SSE41 SSE42 HTT x4
GPU: Unknown GPU x1
System memory: 7869 MB
Video memory: 256 MB
Sync threads: 3
Async threads: 4

---- MathLib ----
Set SSE2 simd processor

---- Sound ----
Renderer: OpenAL Soft
OpenAL vendor: OpenAL Community
OpenAL renderer: OpenAL Soft
OpenAL version: 1.1 ALSOFT 1.15.1
Found AL_EXT_LINEAR_DISTANCE
Found AL_EXT_OFFSET
Found ALC_EXT_EFX
Found EFX Filter
Found EFX Reverb
Found EAX Reverb
Found QUAD16 format
Found 51CHN16 format
Found 61CHN16 format
Found 71CHN16 format
Maximum sources: 256
Maximum effect slots: 4
Maximum auxiliary sends: 2

---- Render ----
GLRender::GLRender(): Unknown GPU
OpenGL vendor: X.Org
OpenGL renderer: Gallium 0.4 on AMD REDWOOD
OpenGL version: 3.2 (Core Profile) Mesa 10.2.8
OpenGL flags: Core Profile
Found required GL_ARB_map_buffer_range
Found required GL_ARB_vertex_array_object
Found required GL_ARB_draw_instanced
Found required GL_ARB_draw_elements_base_vertex
Found required GL_ARB_transform_feedback
Found required GL_ARB_half_float_vertex
Found required GL_ARB_half_float_pixel
Found required GL_ARB_framebuffer_object
Found required GL_ARB_texture_multisample
Found required GL_ARB_uniform_buffer_object
Found required GL_ARB_geometry_shader4
Found optional GL_EXT_texture_compression_s3tc
Found optional GL_ARB_texture_compression_rgtc
Shading language: 3.30
Maximum texture size: 16384
Maximum texture units: 48
Maximum texture renders: 8

---- Physics ----
Physics: Multi-threaded

---- PathFind ----
PathFind: Multi-threaded

GPUMonitorPlugin::init(): can't initialize GPUMonitor
EnginePlugins::init(): can't initialize "GPUMonitor" plugin
---- Interpreter ----
Version: 2.52

Loading "heaven/unigine.cpp" 60ms
Unigine~# render_restart
Loading "heaven/locale/unigine.en" dictionary
Loading "core/materials/default/unigine_post.mat" 23 materials 50 shaders 34ms
Loading "core/materials/default/unigine_render.mat" 47 materials 2368 shaders 17ms
Loading "core/materials/default/unigine_mesh.mat" 5 materials 3386 shaders 15ms
Loading "core/materials/default/unigine_mesh_lut.mat" 2 materials 1062 shaders 4ms
Loading "core/materials/default/unigine_mesh_paint.mat" 2 materials 1158 shaders 8ms
Loading "core/materials/default/unigine_mesh_tessellation.mat" 5 materials 3332 shaders 15ms
Loading "core/materials/default/unigine_mesh_tessellation_paint.mat" 2 materials 2276 shaders 9ms
Loading "core/materials/default/unigine_mesh_triplanar.mat" 1 material 112 shaders 2ms
Loading "core/materials/default/unigine_mesh_overlap.mat" 1 material 300 shaders 4ms
Loading "core/materials/default/unigine_mesh_terrain.mat" 1 material 813 shaders 5ms
Loading "core/materials/default/unigine_mesh_layer.mat" 1 material 84 shaders 1ms
Loading "core/materials/default/unigine_mesh_noise.mat" 1 material 106 shaders 2ms
Loading "core/materials/default/unigine_mesh_stem.mat" 2 materials 2180 shaders 16ms
Loading "core/materials/default/unigine_mesh_wire.mat" 1 material 45 shaders 1ms
Loading "core/materials/default/unigine_terrain.mat" 1 material 1980 shaders 9ms
Loading "core/materials/default/unigine_grass.mat" 2 materials 474 shaders 5ms
Loading "core/materials/default/unigine_particles.mat" 1 material 109 shaders 2ms
Loading "core/materials/default/unigine_billboard.mat" 1 material 51 shaders 1ms
Loading "core/materials/default/unigine_billboards.mat" 2 materials 840 shaders 4ms
Loading "core/materials/default/unigine_volume.mat" 6 materials 53 shaders 5ms
Loading "core/materials/default/unigine_gui.mat" 1 material 82 shaders 0ms
Loading "core/materials/default/unigine_water.mat" 1 material 533 shaders 24ms
Loading "core/materials/default/unigine_sky.mat" 1 material 21 shaders 16ms
Loading "core/materials/default/unigine_decal.mat" 1 material 99 shaders 1ms
Loading "core/properties/unigine.prop" 2 properties 0ms
Unigine Heaven Benchmark 4.0 (4.0)Unigine~# world_load heaven/heaven
Loading "heaven/heaven.cpp" 152ms
Loading "heaven/materials/heaven_base.mat" 7 materials 10ms
Loading "heaven/materials/heaven_environment.mat" 13 materials 838ms
Loading "heaven/materials/heaven_ruins.mat" 27 materials 2101ms
Loading "heaven/materials/heaven_buildings.mat" 58 materials 2116ms
Loading "heaven/materials/heaven_props.mat" 10 materials 412ms
Loading "heaven/materials/heaven_sfx.mat" 11 materials 8ms
Loading "heaven/materials/heaven_fort.mat" 15 materials 544ms
Loading "heaven/materials/heaven_airship.mat" 26 materials 5176ms
Loading "heaven/heaven.world" 13817ms
Unigine~# render_restart
Unigine~# video_grab
Saving /home/user/.Heaven/screenshots/00000.tga
Unigine~# video_grab
Saving /home/user/.Heaven/screenshots/00001.tga
Unigine~# video_grab
Saving /home/user/.Heaven/screenshots/00002.tga
Unigine~# render_restart
Benchmark running
Benchmark results:
Time: 261.689
Frames: 1286
FPS: 4.91422
Min FPS: 3.69493
Max FPS: 13.4421
Score: 123.789
Unigine~# quit
Close "libopenal.so.1"
Close "libGL.so.1"
Memory usage: none
Allocations: none
Shutdown
user@localhost:~/Unigine_Heaven-4.0$

This benchmark was performed on an OLD system, hence the score was very low. Otherwise it ran fine and screenshots were taken, and since the benchmark is available for linux, I'm pretty sure more games will be developed on linux. Come on board to linux for a better gaming experience. :) Enjoy the screenshots below.

unigine_heaven_benchmark_screenshot_0 unigine_heaven_benchmark_screenshot_1

Saturday, April 25, 2015

My way of solving tomcat memory leaking issue

Recently I made a mistake by accidentally committing some careless code that stored objects in a static field into production, causing heap usage to grow tremendously. Since objects referenced from a static field persist, tomcat had to be restarted often to free the heap they held. So today I will share my experience of how I solved it, and I hope it gives you a way to approach this difficult problem.
From start to end, I will summarize the sequence you need to follow to investigate and find the fix.

* CHECK YOUR CODE.
* learn how to find the memory leak using google.
* trace one step at a time until you successfully pin down the problem and fix it.

As you can read, there are only three general steps, but I will talk more about each of them.
CHECK YOUR CODE.

Always check your code by reading and testing it! Best if you have someone experienced around so you can send your code to them for inspection. Remember, four eyeballs and two brains are better than two eyeballs and one brain. If you are using an opensource project, most probably the library is well tested, and you should spend your time investigating your own code. It is difficult, especially for a new programmer, but that should not stop you from finding the problem. If you still cannot find it, then start searching on a search engine for how other people solved it.
learn how to find the memory leak using google.
Nobody is perfect and knows everything, but if you are unsure, always google away. Google keywords such as java memory leak, tomcat memory leak, or even best java coding practice. Pay attention to the first 10 links returned by google, then read the blogs and even stackoverflow; they will give you knowledge you never knew of. Examples of the tools needed include jstat, jmap, jhat, and visualvm, which can give you an idea of what or even where the problem might be, as sketched below. Remember, reading this material is a way of growing and it takes time, so please be patient at this step, spend an adequate amount of time, and jot down the important points so you can use them in the final step.
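For example, a hypothetical session on a leaking tomcat instance might look like this (the process id 12345 is made up for illustration):

# watch the old generation utilization (O) climb across full GCs
jstat -gcutil 12345 5s

# dump the heap of the suspect jvm and browse it for suspicious object counts
jmap -dump:format=b,file=/tmp/heap.hprof 12345
jhat /tmp/heap.hprof    # then open http://localhost:7000 in a browser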

trace one step at a time until you successfully pin down the problem and fix it.
The final step is essentially repeating step 1 and step 2 slowly to determine the root cause. If you are using a versioning system, you should find out which revision was the last known good one and check, file by file, where the problem was introduced. This is a TEDIOUS and DAUNTING process, but it is effective for finding the root cause.
These are the steps I used while tracking down the tomcat web application memory problem. Thank you, and I hope you can benefit from them too.

Friday, April 24, 2015

Learning java jstat

Today we are going to learn a java tool which is incredibly useful if you code frequently in java. It is a monitoring tool known as jstat, and it comes with the jdk. You might ask, why would I need jstat when my app runs just fine? For a simple java application, true, you do not need this monitoring tool. However, if you have a long running application or an application with a big java codebase that sometimes hangs (pauses/freezes) midway through a run, then you should really start to look into this tool. In this article, I'm going to show you how I use it.

But first, let's understand what jstat is.
The jstat tool displays performance statistics for an instrumented HotSpot Java virtual machine (JVM).

As you are aware, objects that you create in code eventually get freed from the heap when they are no longer referenced. If you have a lot of objects and heap usage grows, you can use this monitoring tool to check what is going on with the heap allocation. Okay now, let's look at the command input.
jstat [ generalOption | outputOptions vmid [interval[s|ms] [count]] ]

So, pretty simple: the command jstat followed by a few parameters, which are explained below. You can find the official documentation here.

generalOption
A single general command-line option (-help or -options).

outputOptions
One or more output options, consisting of a single statOption, plus any of the -t, -h, and -J options.

vmid
Virtual machine identifier, a string indicating the target Java virtual machine (JVM). The general syntax is
[protocol:][//]lvmid[@hostname[:port]/servername]
The syntax of the vmid string largely corresponds to the syntax of a URI. The vmid can vary from a simple integer representing a local JVM to a more complex construction specifying a communications protocol, port number, and other implementation-specific values. See Virtual Machine Identifier for details.

interval[s|ms]
Sampling interval in the specified units, seconds (s) or milliseconds (ms). Default units are milliseconds. Must be a positive integer. If specified, jstat will produce its output at each interval.

count
Number of samples to display. Default value is infinity; that is, jstat displays statistics until the target JVM terminates or the jstat command is terminated. Must be a positive integer.

It should be very clear to you if you are a seasoned java coder; if not, take a look at the example below.
[user@localhost ~]$ jstat -gcutil 12345 1s
  S0     S1     E      O      P     YGC     YGCT    FGC    FGCT     GCT
 10.08   0.00  70.70  69.22  59.49 122328 4380.327   355   43.146 4423.474
 10.08   0.00  84.99  69.22  59.49 122328 4380.327   355   43.146 4423.474
  0.00  15.62   0.00  69.24  59.49 122329 4380.351   355   43.146 4423.497

So jstat instruments a local jvm with process id 12345 at an interval of 1 second, looping infinitely. Different types of statistics can be shown; the example above shows a summary of garbage collection statistics. To see what other gc statistics are available, run jstat -options; below is a table summarizing what each option displays.
Option            Displays...
class             Statistics on the behavior of the class loader.
compiler          Statistics of the behavior of the HotSpot Just-in-Time compiler.
gc                Statistics of the behavior of the garbage collected heap.
gccapacity        Statistics of the capacities of the generations and their corresponding spaces.
gccause           Summary of garbage collection statistics (same as -gcutil), with the cause of the last and current (if applicable) garbage collection events.
gcnew             Statistics of the behavior of the new generation.
gcnewcapacity     Statistics of the sizes of the new generations and its corresponding spaces.
gcold             Statistics of the behavior of the old and permanent generations.
gcoldcapacity     Statistics of the sizes of the old generation.
gcpermcapacity    Statistics of the sizes of the permanent generation.
gcutil            Summary of garbage collection statistics.
printcompilation  HotSpot compilation method statistics.

Out of all these options, the ones you will probably use most frequently are gcutil, gc and gccapacity. We will look at each with an example. Please note that in order to protect the privacy of the user, some information has been removed, but what needs to be presented in this article remains as-is.

option gcutil

jstat-gcutil
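The screenshot is not reproduced here; going by the description below, the invocation it captures is simply:

jstat -gcutil 23483 1s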

As can be read above, this is the command jstat with the option gcutil on a java process with id 23483, generating statistics at an interval of 1 second. The output has 10 columns, explained below.
Column  Description
S0      Survivor space 0 utilization as a percentage of the space's current capacity.
S1      Survivor space 1 utilization as a percentage of the space's current capacity.
E       Eden space utilization as a percentage of the space's current capacity.
O       Old space utilization as a percentage of the space's current capacity.
P       Permanent space utilization as a percentage of the space's current capacity.
YGC     Number of young generation GC events.
YGCT    Young generation garbage collection time.
FGC     Number of full GC events.
FGCT    Full garbage collection time.
GCT     Total garbage collection time.

The first five columns depict space utilization as percentages. The next five depict the number of young generation collections and their time, the number of full garbage collections and their time, and finally the total garbage collection time. In this screen capture, we see that the eden space fills up quickly and objects are promoted to either survivor space 0 or survivor space 1. At one instant, some objects survived and were eventually promoted to the old space, increasing its usage by 0.01% to 5.24%. Note that YGC increased by one as a result, to 256; that young generation collection took 13 milliseconds. A similar pattern happens again later, and we see that YGC increases by one to 257, with another 13 milliseconds of collection time. In this output there is no change to the full collection count, which is good: only one full collection happened, but with a pause of 94 milliseconds! You might want to keep an eye on the E column so it does not fill up too quickly, and adjust the young generation size in your java app accordingly. But as a long term solution, you should spend some time finding out which code takes a lot of resources and improve it.

option gc

jstat-gc

As can be read above, this is the command jstat with the option gc on a java process with id 28276, generating statistics at an interval of 1 second. The output has 15 columns, explained below.
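For reference, the invocation shown in that screenshot is:

jstat -gc 28276 1s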
Column  Description
S0C     Current survivor space 0 capacity (KB).
S1C     Current survivor space 1 capacity (KB).
S0U     Survivor space 0 utilization (KB).
S1U     Survivor space 1 utilization (KB).
EC      Current eden space capacity (KB).
EU      Eden space utilization (KB).
OC      Current old space capacity (KB).
OU      Old space utilization (KB).
PC      Current permanent space capacity (KB).
PU      Permanent space utilization (KB).
YGC     Number of young generation GC events.
YGCT    Young generation garbage collection time.
FGC     Number of full GC events.
FGCT    Full garbage collection time.
GCT     Total garbage collection time.

The statistics show capacities in kilobytes. The first ten columns are pretty easy: each space's capacity and its current utilization. The last five columns are the same as the last five columns of gcutil. Notice that when the EU value gets near the EC value, a young generation collection happens and objects are promoted to the survivor spaces; notice also that the OU column grows gradually. These statistics are almost the same as gcutil, except that they are displayed in kilobytes whereas gcutil displays percentages.

option gccapacity

jstat-gccapacity

As can be read above, this is the command jstat with the option gccapacity on a java process with id 13080, generating statistics at an interval of 1 second. The output has 16 columns, explained below.
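Again for reference, the invocation behind the screenshot is:

jstat -gccapacity 13080 1s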
Column  Description
NGCMN   Minimum new generation capacity (KB).
NGCMX   Maximum new generation capacity (KB).
NGC     Current new generation capacity (KB).
S0C     Current survivor space 0 capacity (KB).
S1C     Current survivor space 1 capacity (KB).
EC      Current eden space capacity (KB).
OGCMN   Minimum old generation capacity (KB).
OGCMX   Maximum old generation capacity (KB).
OGC     Current old generation capacity (KB).
OC      Current old space capacity (KB).
PGCMN   Minimum permanent generation capacity (KB).
PGCMX   Maximum permanent generation capacity (KB).
PGC     Current permanent generation capacity (KB).
PC      Current permanent space capacity (KB).
YGC     Number of young generation GC events.
FGC     Number of full GC events.

This output is similar to that of option gc, but with the minimum and maximum capacities for the individual java heap generations.

That's it for this article; I will leave three links as references.

http://www.cubrid.org/blog/dev-platform/how-to-monitor-java-garbage-collection/
http://docs.oracle.com/javase/7/docs/technotes/tools/share/jstat.html
http://oracle-base.com/articles/misc/monitoring-java-garbage-collection-using-jstat.php