Saturday, November 21, 2015

Java Garbage Collector

If you are a Java developer, Java garbage collection (GC) pops up from time to time in javadoc, online articles or online discussions. It is a hot and tough topic because it is an entirely different paradigm from what programmers usually do, which is writing code: the Java GC frees heap memory for the objects your classes created, and it does so in the background. In the past I have covered a few articles related to Java GC, and today I am going to go through several blogs/articles I found online, learn the basics and share what I've learned. Hopefully, for Java programmers, Java GC will become clearer.

When you start a Java application, based on the parameters passed to the java command, the JVM reserves some memory for the application, known as the heap. The heap is further divided into several regions: eden, the survivor spaces, the old gen and the perm gen. In Oracle Java 8 HotSpot, the perm gen has been removed, so be sure to always check the official garbage collector documentation for changes. Below are a few links for the HotSpot implementation of Java GC.
The survivor space is divided into two, survivor 0 and survivor 1. Eden and the survivor spaces are collectively known as the young generation or new generation, whilst the old gen is also known as the tenured generation. Garbage collections happen on both the young and the old generation. Below are two diagrams showing how the heap regions are divided.



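To see these regions from inside a running JVM, here is a minimal sketch using the standard java.lang.management API. The exact pool names you get (for example "PS Eden Space", "PS Survivor Space", "PS Old Gen") depend on which collector is in use.

 import java.lang.management.ManagementFactory;
 import java.lang.management.MemoryPoolMXBean;
 import java.lang.management.MemoryUsage;

 public class HeapRegions {
     public static void main(String[] args) {
         // each memory pool corresponds to one heap (or non-heap) region
         for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
             MemoryUsage usage = pool.getUsage();
             if (usage == null) {
                 continue; // pool not valid at the moment
             }
             System.out.printf("%-25s type=%s used=%,d max=%,d%n",
                     pool.getName(), pool.getType(), usage.getUsed(), usage.getMax());
         }
     }
 }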
While the concept of garbage collection is the same, the implementation is not, and neither are the default settings or how to tune it. Well known JVMs include Oracle Sun HotSpot, Oracle JRockit and IBM J9. You can find a list of other JVMs here. Essentially, garbage collection runs on the young generation and the old generation to remove objects on the heap that no longer have a valid reference.

Common Java heap parameter settings are listed below. For the full list, issue the command java -X. A small snippet after the list shows how to check the effective sizes at runtime.

-Xms initial java heap size
-Xmx maximum java heap size
-Xmn the size of the heap for the young generation
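
A minimal sketch to check these at runtime; assuming you launch it with something like java -Xms256m -Xmx512m HeapSizes, the numbers reported by Runtime should line up with the flags.

 public class HeapSizes {
     public static void main(String[] args) {
         Runtime rt = Runtime.getRuntime();
         // totalMemory() is the heap currently reserved, maxMemory() corresponds to -Xmx
         System.out.println("total heap : " + rt.totalMemory() / (1024 * 1024) + " MB");
         System.out.println("max heap   : " + rt.maxMemory() / (1024 * 1024) + " MB");
         System.out.println("free heap  : " + rt.freeMemory() / (1024 * 1024) + " MB");
     }
 }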

There are a few types of GC:
- serial gc
- parallel gc
- parallel old gc
- cms gc 

You can specify which GC implementation runs on the Java heap via JVM flags such as -XX:+UseSerialGC, -XX:+UseParallelGC, -XX:+UseParallelOldGC or -XX:+UseConcMarkSweepGC.

If you run a server application, the metrics exposed by the GC are definitely something to watch out for. To get them, you can use tools such as jstat, a JMX client like JConsole or VisualVM, or GC logging flags such as -verbose:gc and -XX:+PrintGCDetails.
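
For programmatic access, here is a minimal sketch using the standard GarbageCollectorMXBean, which reports how many collections each collector has run and the accumulated time spent.

 import java.lang.management.GarbageCollectorMXBean;
 import java.lang.management.ManagementFactory;

 public class GcMetrics {
     public static void main(String[] args) {
         // one bean per collector, e.g. a young-generation and an old-generation collector
         for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
             System.out.printf("%s: count=%d, time=%d ms%n",
                     gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
         }
     }
 }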

That's it for this brief introduction.

Friday, November 20, 2015

Yet another hdd vs ssd comparison

There are many articles online that articulate how fast a solid state drive is in comparison to a spinning hard disk drive. But one wonders, really, just how fast is an SSD? I mean, if your earnings are limited and the current spinning disk is working fine, there is no compelling reason to make the change, and at the same time it will be difficult to afford an SSD considering the cost per GB. As of this writing, a Samsung 850 Pro 512GB is selling at Compuzone Malaysia at a staggering price of 1378 MYR!!! But a kind soul generously donated an SSD to me, and as a gesture of goodwill back to him, here I will write a blog post describing my experience migrating from a Hitachi/HGST Travelstar Z7K500 (HGST HTS725050A7E630) to a Samsung SSD 850 PRO 512GB.

One thing is for sure, the SSD is so light. It is only 66 grams, in comparison with the HDD at 95 grams. For a first timer, the SSD is so light that it made me wonder whether it really holds 512GB.

For the spinning HDD, I took 5 boot-time samples: 1 min 41 sec, 1 min 34 sec, 1 min 34 sec, 1 min 34 sec and 1 min 31 sec. All measurements run from the moment I push the power button until I see the login screen. Of course, there are many services starting during bootup. On average, the boot time is around 1 min 35 sec.

Next, I benchmark the current hdd with hdparm and dd.

 user@localhost:~$ sudo hdparm -t /dev/sda5  
   
 /dev/sda5:  
  Timing buffered disk reads: 322 MB in 3.01 seconds = 107.12 MB/sec  
 user@localhost:~$ sudo hdparm -t /dev/sda5  
   
 /dev/sda5:  
  Timing buffered disk reads: 322 MB in 3.01 seconds = 107.13 MB/sec  
 user@localhost:~$ sudo hdparm -t /dev/sda5  
   
 /dev/sda5:  
  Timing buffered disk reads: 322 MB in 3.01 seconds = 107.12 MB/sec  
   
   
 user@localhost:~$ sudo hdparm -T /dev/sda5  
   
 /dev/sda5:  
  Timing cached reads:  5778 MB in 2.00 seconds = 2889.25 MB/sec  
 user@localhost:~$ sudo hdparm -T /dev/sda5  
   
 /dev/sda5:  
  Timing cached reads:  5726 MB in 2.00 seconds = 2863.32 MB/sec  
 user@localhost:~$ sudo hdparm -T /dev/sda5  
   
 /dev/sda5:  
  Timing cached reads:  5702 MB in 2.00 seconds = 2850.84 MB/sec  

I performed 3 tests each for uncached and cached reads. On average, the uncached read speed is about 107.12 MB/sec and the cached read speed is 2867.80 MB/sec. The hdparm parameters are described below.
       -t     Perform timings of device reads for benchmark and comparison purposes. For meaningful results, this operation should be repeated 2-3 times on an otherwise inactive system (no other active processes) with at least a couple of megabytes of free memory. This displays the speed of reading through the buffer cache to the disk without any prior caching of data. This measurement is an indication of how fast the drive can sustain sequential data reads under Linux, without any filesystem overhead. To ensure accurate measurements, the buffer cache is flushed during the processing of -t using the BLKFLSBUF ioctl.

       -T     Perform timings of cache reads for benchmark and comparison purposes. For meaningful results, this operation should be repeated 2-3 times on an otherwise inactive system (no other active processes) with at least a couple of megabytes of free memory. This displays the speed of reading directly from the Linux buffer cache without disk access. This measurement is essentially an indication of the throughput of the processor, cache, and memory of the system under test.

Next, I use the dd command to do a higher-layer benchmark on the disk. See below.

 user@localhost:~$ time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"; rm -f ddfile  
 250000+0 records in  
 250000+0 records out  
 2048000000 bytes (2.0 GB) copied, 32.5028 s, 63.0 MB/s  
   
 real     0m41.773s  
 user     0m0.068s  
 sys     0m4.048s  
 user@localhost:~$ time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"; rm -f ddfile  
 250000+0 records in  
 250000+0 records out  
 2048000000 bytes (2.0 GB) copied, 27.1676 s, 75.4 MB/s  
   
 real     0m37.012s  
 user     0m0.056s  
 sys     0m3.848s  
 user@localhost:~$ time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"; rm -f ddfile  
 250000+0 records in  
 250000+0 records out  
 2048000000 bytes (2.0 GB) copied, 19.4599 s, 105 MB/s  
   
 real     0m37.929s  
 user     0m0.064s  
 sys     0m3.740s  

So the arithmetic is (8 x 1024 x 250000 / 1024 / 1024) / real, which gives 46.75 MB/sec, 52.77 MB/sec and 51.49 MB/sec respectively for the three tests, and on average 50.33 MB/sec.

Now to the samsung ssd.

 root@localhost:~# sudo hdparm -t /dev/sda6  
   
 /dev/sda6:  
  Timing buffered disk reads: 770 MB in 3.01 seconds = 256.00 MB/sec  
 root@localhost:~# sudo hdparm -t /dev/sda6  
   
 /dev/sda6:  
  Timing buffered disk reads: 762 MB in 3.00 seconds = 253.98 MB/sec  
 root@localhost:~# sudo hdparm -t /dev/sda6  
   
 /dev/sda6:  
  Timing buffered disk reads: 758 MB in 3.00 seconds = 252.44 MB/sec  
   
   
 root@localhost:~# sudo hdparm -T /dev/sda6   
   
 /dev/sda6:  
  Timing cached reads:  5820 MB in 2.00 seconds = 2910.31 MB/sec  
 root@localhost:~# sudo hdparm -T /dev/sda6   
   
 /dev/sda6:  
  Timing cached reads:  6022 MB in 2.00 seconds = 3011.33 MB/sec  
 root@localhost:~# sudo hdparm -T /dev/sda6   
   
 /dev/sda6:  
  Timing cached reads:  5698 MB in 2.00 seconds = 2849.14 MB/sec  
 root@localhost:~#   
   
   
 root@localhost:~# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"; rm -f ddfile   
 250000+0 records in  
 250000+0 records out  
 2048000000 bytes (2.0 GB) copied, 4.14268 s, 494 MB/s  
   
 real     0m8.280s  
 user     0m0.040s  
 sys     0m2.084s  
 root@localhost:~# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"; rm -f ddfile   
 250000+0 records in  
 250000+0 records out  
 2048000000 bytes (2.0 GB) copied, 4.18595 s, 489 MB/s  
   
 real     0m8.279s  
 user     0m0.068s  
 sys     0m2.036s  
 root@localhost:~# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"; rm -f ddfile   
 250000+0 records in  
 250000+0 records out  
 2048000000 bytes (2.0 GB) copied, 3.94227 s, 519 MB/s  
   
 real     0m8.258s  
 user     0m0.080s  
 sys     0m2.060s  
   

On average, the uncached read speed is 254.14 MB/sec and the cached read speed is 2923.59 MB/sec. As for the dd tests, the average is 236.10 MB/sec. The boot time is around 30 sec on average! The most significant change is after GRUB: the login screen shows up in around 3 seconds or less. Although this SSD is rated at 550/520 MB/sec for sequential read/write, the lower figures are probably down to my old system's bus bandwidth maxing out.

A significant time reduction during bootup and disk reads is clearly seen from the statistics above. As for user experience, everything becomes so fast! To put it in perspective, going from HDD to SSD is like going from a Proton to an F1 car. I think it will also help with programming tasks such as grepping and searching through code.


UPDATE: with an adjustment to the GRUB timeout and switching the POST to fast boot, my cold boot to the login screen has improved to 20 seconds!

Sunday, November 8, 2015

Light learning into CouchDB

Today we will explore another open source database, CouchDB. First, let's understand what Apache CouchDB is.

Apache CouchDB, commonly referred to as CouchDB, is an open source database that focuses on ease of use and on being "a database that completely embraces the web".[1] It is a document-oriented NoSQL database that uses JSON to store data, JavaScript as its query language using MapReduce, and HTTP for an API.[1] CouchDB was first released in 2005 and later became an Apache project in 2008.

Actually, Couch is an acronym for Cluster Of Unreliable Commodity Hardware. If you are using a Debian based Linux distribution, installation is very easy: just apt-get install couchdb. If not, you can check out this link on how to install it on other Linux distributions. Once installed, make sure couchdb is running.

 root@localhost:~# /etc/init.d/couchdb status  
 ● couchdb.service - LSB: Apache CouchDB init script  
   Loaded: loaded (/etc/init.d/couchdb)  
   Active: active (exited) since Thu 2015-08-20 23:07:30 MYT; 14s ago  
    Docs: man:systemd-sysv-generator(8)  
   
 Aug 20 23:07:28 localhost systemd[1]: Starting LSB: Apache CouchDB init script...  
 Aug 20 23:07:28 localhost su[14399]: Successful su for couchdb by root  
 Aug 20 23:07:28 localhost su[14399]: + ??? root:couchdb  
 Aug 20 23:07:28 localhost su[14399]: pam_unix(su:session): session opened for user couchdb by (uid=0)  
 Aug 20 23:07:30 localhost couchdb[14392]: Starting database server: couchdb.  
 Aug 20 23:07:30 localhost systemd[1]: Started LSB: Apache CouchDB init script.  
   

Very easy. You can quickly check the version of couchdb that is running; just put the following link into the browser.

http://127.0.0.1:5984/

I have version 1.4.0 running. Let's create a database. If you have a terminal ready, you can copy and paste the commands below and check the database that gets created.

 user@localhost:~$ curl -X PUT http://127.0.0.1:5984/wiki  
 {"ok":true}  
 user@localhost:~$ curl -X PUT http://127.0.0.1:5984/wiki  
 {"error":"file_exists","reason":"The database could not be created, the file already exists."}  
 user@localhost:~$ curl http://127.0.0.1:5984/wiki  
 {"db_name":"wiki","doc_count":0,"doc_del_count":0,"update_seq":0,"purge_seq":0,"compact_running":false,"disk_size":79,"data_size":0,"instance_start_time":"1440083544219325","disk_format_version":6,"committed_update_seq":0}  
 user@localhost:~$ curl -X GET http://127.0.0.1:5984/_all_dbs  
 ["_replicator","_users","wiki"]  

CouchDB returns JSON output, and its key/value output and error handling are pretty good. Very speedy too! Okay, now let's try CRUD on CouchDB, and we will do that using curl.

 user@localhost:~$ curl -X POST -H "Content-Type: application/json" --data '{ "text" : "Wikipedia on CouchDB", "rating": 5 }' http://127.0.0.1:5984/wiki  
 {"ok":true,"id":"4c6a6dce960e16aba7e50d02c9001241","rev":"1-80fd6f7aeb55c83c8999b4613843af5d"}  
   
 user@localhost:~$ curl -X GET -H "Content-Type: application/json" http://127.0.0.1:5984/wiki/4c6a6dce960e16aba7e50d02c9001241  
 {"_id":"4c6a6dce960e16aba7e50d02c9001241","_rev":"1-80fd6f7aeb55c83c8999b4613843af5d","text":"Wikipedia on CouchDB","rating":5}  

First, we create a new document by HTTP POST with data in JSON format. Then, using the id that CouchDB generated, we GET that document back.

 user@localhost:~$ curl -X PUT -H "Content-Type: application/json" --data '{ "text" : "Wikipedia on CouchDB", "rating": 6, "_rev": "1-80fd6f7aeb55c83c8999b4613843af5d" }' http://127.0.0.1:5984/wiki/4c6a6dce960e16aba7e50d02c9001241  
 {"ok":true,"id":"4c6a6dce960e16aba7e50d02c9001241","rev":"2-b7248b6af9b6efcea5a8fe8cc299a85c"}  
 user@localhost:~$ curl -X GET -H "Content-Type: application/json" http://127.0.0.1:5984/wiki/4c6a6dce960e16aba7e50d02c9001241  
 {"_id":"4c6a6dce960e16aba7e50d02c9001241","_rev":"2-b7248b6af9b6efcea5a8fe8cc299a85c","text":"Wikipedia on CouchDB","rating":6}  
 user@localhost:~$ curl -X PUT -H "Content-Type: application/json" --data '{ "views" : 10, "_rev": "2-b7248b6af9b6efcea5a8fe8cc299a85c" }' http://127.0.0.1:5984/wiki/4c6a6dce960e16aba7e50d02c9001241  
 {"ok":true,"id":"4c6a6dce960e16aba7e50d02c9001241","rev":"3-9d1c59138f909760f9de6e5ce63c3a4e"}  
 user@localhost:~$ curl -X GET -H "Content-Type: application/json" http://127.0.0.1:5984/wiki/4c6a6dce960e16aba7e50d02c9001241  
 {"_id":"4c6a6dce960e16aba7e50d02c9001241","_rev":"3-9d1c59138f909760f9de6e5ce63c3a4e","views":10}  

As you can read above, we update this document twice. First, we increase the rating to 6, including the _rev value that CouchDB generated. If the update succeeds, notice that the revision is incremented by 1 (note the 2- prefix on rev). On the second update, we set views to 10, but essentially CouchDB wipes everything in this document and inserts a new key called views. So note that if you want to update a document, you should send in all existing values along with the update.
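
Before we move on to delete, the same HTTP calls can be made from Java with nothing more than HttpURLConnection. A minimal sketch, assuming CouchDB is listening on 127.0.0.1:5984 and the wiki database from above exists; the document id is just the one generated in my session and would differ on yours.

 import java.io.BufferedReader;
 import java.io.InputStreamReader;
 import java.io.OutputStream;
 import java.net.HttpURLConnection;
 import java.net.URL;
 import java.nio.charset.StandardCharsets;

 public class CouchDbCrud {
     // send a request with an optional JSON body and return the first line of the response
     static String call(String method, String url, String json) throws Exception {
         HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
         con.setRequestMethod(method);
         con.setRequestProperty("Content-Type", "application/json");
         if (json != null) {
             con.setDoOutput(true);
             try (OutputStream out = con.getOutputStream()) {
                 out.write(json.getBytes(StandardCharsets.UTF_8));
             }
         }
         try (BufferedReader in = new BufferedReader(
                 new InputStreamReader(con.getInputStream(), StandardCharsets.UTF_8))) {
             return in.readLine();
         }
     }

     public static void main(String[] args) throws Exception {
         String base = "http://127.0.0.1:5984/wiki";
         // create a document; CouchDB replies with the generated id and revision
         System.out.println(call("POST", base, "{ \"text\": \"Wikipedia on CouchDB\", \"rating\": 5 }"));
         // read a document back by id (replace with the id returned above)
         System.out.println(call("GET", base + "/4c6a6dce960e16aba7e50d02c9001241", null));
     }
 }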

Finally, now we delete this document. Very easy, see below.

 user@localhost:~$ curl -X DELETE -H "Content-Type: application/json" http://127.0.0.1:5984/wiki/4c6a6dce960e16aba7e50d02c9001241?rev=3-9d1c59138f909760f9de6e5ce63c3a4e  
 {"ok":true,"id":"4c6a6dce960e16aba7e50d02c9001241","rev":"4-aa379357deff42739d2fc77aea38dde1"}  
 user@localhost:~$ curl -X GET -H "Content-Type: application/json" http://127.0.0.1:5984/wiki/4c6a6dce960e16aba7e50d02c9001241  
 {"error":"not_found","reason":"deleted"}  
 user@localhost:~$ curl -X DELETE http://127.0.0.1:5984/wiki  
 {"ok":true}  
 user@localhost:~$ curl -X GET http://127.0.0.1:5984/_all_dbs  
 ["_replicator","_users"]  

CouchDB is very good and efficient at handling documents, and it certainly earns a spot for beginners to delve deeper into its capabilities. If you think so too, take a look at this link. That's it for today, have fun learning!

Saturday, November 7, 2015

Apache accumulo first learning experience


Today we will take a look at another big data technology: Apache Accumulo. First, what is Accumulo?

Apache Accumulo is based on Google's BigTable design and is built on top of Apache Hadoop, Zookeeper, and Thrift. Apache Accumulo features a few novel improvements on the BigTable design in the form of cell-based access control and a server-side programming mechanism that can modify key/value pairs at various points in the data management process. Other notable improvements and features are outlined here.
Google published the design of BigTable in 2006. Several other open source projects have implemented aspects of this design including HBase, Hypertable, and Cassandra. Accumulo began its development in 2008 and joined the Apache community in 2011.

In this article, as always, we will set up the infrastructure. I referenced this article, using the following environment.

  • 64bit arch
  • open jdk 1.7/1.8
  • zookeeper-3.4.6
  • hadoop-2.6.1
  • accumulo-1.7.0
  • openssh 
  • rsync
  • debian sid

As Accumulo is a Java based project, you must have Java installed and configured. Get the latest Java 1.7 or 1.8 as of this writing. After Java is installed, you need to export JAVA_HOME in your bash configuration file, .bashrc, with this line: export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_55

Then you need to source the new .bashrc; . .bashrc is sufficient. For ssh and rsync, you can use the apt-get package manager as it is easy. What's important is that you set up a public/private key pair in your user's ssh configuration directory so that passwordless ssh to localhost works.

You can create two directories, $HOME/Downloads and $HOME/Installs. It's pretty intuitive: the Downloads directory is for the downloaded packages, and Installs is the working directory into which the compressed packages are extracted.


Download the above packages into the $HOME/Downloads directory and extract them into $HOME/Installs. First, let's configure Apache Hadoop.

 $ vim $HOME/Installs/hadoop-2.6.1/etc/hadoop/hadoop-env.sh  
 $ # uncomment in the file above export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_55  
 $ vim $HOME/Installs/hadoop-2.6.1/etc/hadoop/core-site.xml  
 $ cat $HOME/Installs/hadoop-2.6.1/etc/hadoop/core-site.xml  
 <?xml version="1.0" encoding="UTF-8"?>  
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>  
 <configuration>  
   <property>  
     <name>fs.defaultFS</name>  
     <value>hdfs://localhost:9000</value>  
   </property>  
 </configuration>  
 $ vim $HOME/Installs/hadoop-2.6.1/etc/hadoop/hdfs-site.xml  
 $ cat $HOME/Installs/hadoop-2.6.1/etc/hadoop/hdfs-site.xml  
 <?xml version="1.0" encoding="UTF-8"?>  
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>  
 <configuration>  
   <property>  
     <name>dfs.replication</name>  
     <value>1</value>  
   </property>  
   <property>  
     <name>dfs.name.dir</name>  
     <value>hdfs_storage/name</value>  
   </property>  
   <property>  
     <name>dfs.data.dir</name>  
     <value>hdfs_storage/data</value>  
   </property>  
 </configuration>  
 $ vim $HOME/Installs/hadoop-2.6.1/etc/hadoop/mapred-site.xml  
 $ cat $HOME/Installs/hadoop-2.6.1/etc/hadoop/mapred-site.xml  
 <?xml version="1.0"?>  
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>  
 <configuration>  
    <property>  
      <name>mapred.job.tracker</name>  
      <value>localhost:9001</value>  
    </property>  
 </configuration>  
 $ cd $HOME/Installs/hadoop-2.6.1/  
 $ $HOME/Installs/hadoop-2.6.1/bin/hdfs namenode -format  
 $ $HOME/Installs/hadoop-2.6.1/sbin/start-dfs.sh  

As you can read above, we set the Java home for Hadoop and then configure HDFS to run on port 9000, so make sure this port is free for Hadoop to use. Then we format the Hadoop namenode and start HDFS.
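
If you want to verify from code that HDFS is reachable on that port, here is a minimal sketch against the fs.defaultFS configured above (the directory name is made up, and the Hadoop client jars must be on the classpath).

 import java.net.URI;

 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 public class HdfsCheck {
     public static void main(String[] args) throws Exception {
         // connect to the namenode configured in core-site.xml (fs.defaultFS)
         FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), new Configuration());
         fs.mkdirs(new Path("/tmp/hdfs-check"));
         // list the root of the filesystem to confirm the connection works
         for (FileStatus status : fs.listStatus(new Path("/"))) {
             System.out.println(status.getPath());
         }
         fs.close();
     }
 }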

Next we will configure zookeeper.

 $ cp $HOME/Installs/zookeeper-3.4.6/conf/zoo_sample.cfg $HOME/Installs/zookeeper-3.4.6/conf/zoo.cfg  
 $ $HOME/Installs/zookeeper-3.4.6/bin/zkServer.sh start  

Pretty simple: copy the default config file and start the service. The last step is Apache Accumulo.

 $ cp $HOME/Installs/accumulo-1.7.0/conf/examples/512MB/standalone/* $HOME/Installs/accumulo-1.7.0/conf/  
 $ vim $HOME/.bashrc  
 $ tail -2 $HOME/.bashrc  
 export HADOOP_HOME=$HOME/Installs/hadoop-2.6.1/  
 export ZOOKEEPER_HOME=$HOME/Installs/zookeeper-3.4.6/  
 $ . $HOME/.bashrc  
 $ vim $HOME/Installs/accumulo-1.7.0/conf/accumulo-env.sh  
 $ # SET ACCUMULO_MONITOR_BIND_ALL to true.  
 $ vim $HOME/Installs/accumulo-1.7.0/conf/accumulo-site.xml  
 $ # in file $HOME/Installs/accumulo-1.7.0/conf/accumulo-site.xml  
 <property>  
   <name>instance.volumes</name>  
   <value>hdfs://localhost:9000/accumulo</value>  
 </property>  
 $ # in file $HOME/Installs/accumulo-1.7.0/conf/accumulo-site.xml   
   <name>instance.secret</name>  
   <value>mysecret</value>  
 $ # in file $HOME/Installs/accumulo-1.7.0/conf/accumulo-site.xml    
  <property>  
   <name>trace.token.property.password</name>  
   <value>my scret</value>  
  </property>  

So we have configured the settings for Accumulo in .bashrc and some properties in accumulo-env.sh and accumulo-site.xml. Next, we will initialize Accumulo and start it using the password we specified previously.

 $ $HOME/Installs/accumulo-1.7.0/bin/accumulo init  
 $ # give a instance name.  
 $ # type in the password as specify in trace.token.property.password.  
 $ $HOME/Installs/accumulo-1.7.0/bin/start-all.sh  

That's it! If you want to do CRUD in Accumulo, I suggest you go through the official documentation.
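
As a small taster, below is a minimal Java client sketch (my own illustration, not taken from the official docs), assuming the instance name you typed during accumulo init, the root user, the password you chose, and a made-up table called demo; the Accumulo, Hadoop and Zookeeper client jars need to be on the classpath.

 import java.util.Map.Entry;

 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.ZooKeeperInstance;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.hadoop.io.Text;

 public class AccumuloDemo {
     public static void main(String[] args) throws Exception {
         // connect through the local zookeeper; instance name and password come from accumulo init
         Instance instance = new ZooKeeperInstance("myinstance", "localhost:2181");
         Connector conn = instance.getConnector("root", new PasswordToken("mysecret"));

         if (!conn.tableOperations().exists("demo")) {
             conn.tableOperations().create("demo");
         }

         // write one key/value pair
         BatchWriter writer = conn.createBatchWriter("demo", new BatchWriterConfig());
         Mutation m = new Mutation(new Text("row1"));
         m.put(new Text("cf"), new Text("cq"), new Value("hello accumulo".getBytes()));
         writer.addMutation(m);
         writer.close();

         // scan the table back
         Scanner scanner = conn.createScanner("demo", Authorizations.EMPTY);
         for (Entry<Key, Value> entry : scanner) {
             System.out.println(entry.getKey() + " -> " + entry.getValue());
         }
     }
 }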





Friday, November 6, 2015

Learning the basics of Apache Karaf

Recently containers have been such a hot topic, especially because of Docker, and today we will look into another container from Apache. Today, we will take a look at Apache Karaf.

What is Apache Karaf?

Apache Karaf is a small OSGi based runtime which provides a lightweight container onto which various components and applications can be deployed.

If you have no idea what that means, maybe a quick and simple how-to will give some idea. First, let's download a copy of Apache Karaf; you can do that here. At the time of this learning experience, I'm using Apache Karaf version 4.0.1. Then extract it to a path, which will become the Karaf home directory.

 user@localhost:~/Desktop$ ll apache-karaf-4.0.1.tar.gz   
 -rw-r----- 1 user user 16M Oct 5 22:35 apache-karaf-4.0.1.tar.gz  
 user@localhost:~/Desktop$ tar -zxf apache-karaf-4.0.1.tar.gz   
 user@localhost:~/Desktop$ cd apache-karaf-4.0.1  
 apache-karaf-4.0.1/    apache-karaf-4.0.1.tar.gz   
 user@localhost:~/Desktop$ cd apache-karaf-4.0.1  
 user@localhost:~/Desktop/apache-karaf-4.0.1$ ls  
 bin data demos deploy etc lib LICENSE NOTICE README RELEASE-NOTES system  

So Apache Karaf is about 16MB compressed and contains a few directories to work with.

The directory layout of a Karaf installation is as follows:
/bin: control scripts to start, stop, login.
/demos: contains some simple Karaf samples
/etc: configuration files
/data: working directory
/cache: OSGi framework bundle cache
/generated-bundles: temporary folder used by the deployers
/log: log files
/deploy: hot deploy directory
/instances: directory containing instances
/lib: contains libraries
/lib/boot: contains the system libraries used at Karaf bootstrap
/lib/endorsed: directory for endorsed libraries
/lib/ext: directory for JRE extensions
/system: OSGi bundles repository, laid out as a Maven 2 repository

Let's launch Karaf; see the screenshot of the terminal below. Then let's add the Apache Camel feature repository into Apache Karaf and install a feature from it. We will use this as the sample for this learning experience.



 karaf@root()> feature:repo-add camel 2.15.3  
 Adding feature url mvn:org.apache.camel.karaf/apache-camel/2.15.3/xml/features  
 karaf@root()> feature:info camel  
 Feature camel 2.15.3  
 Feature has no configuration  
 Feature has no configuration files  
 Feature depends on:  
  camel-core 2.15.3  
  camel-spring 2.15.3  
  camel-blueprint 2.15.3  
 Feature has no bundles.  
 Feature has no conditionals.  
 karaf@root()> feature:install camel-spring  
 karaf@root()> bundle:install -s mvn:org.apache.camel/camel-example-osgi/2.15.3  
 Bundle ID: 82  
 karaf@root()> log:display  
 2015-10-05 22:41:37,872 | INFO | pool-17-thread-1 | core               | 17 - org.apache.aries.jmx.core - 1.1.3 | Registering org.osgi.jmx.framework.BundleStateMBean to MBeanServer com.sun.jmx.mbeanserver.JmxMBeanServer@dd1e765 with name osgi.core:type=bundleState,version=1.7,framework=org.apache.felix.framework,uuid=e7d79bed-237a-4c4d-b912-920b57fef63b  
 2015-10-05 22:41:37,876 | INFO | pool-17-thread-1 | core               | 17 - org.apache.aries.jmx.core - 1.1.3 | Registering org.osgi.jmx.service.cm.ConfigurationAdminMBean to MBeanServer com.sun.jmx.mbeanserver.JmxMBeanServer@dd1e765 with name osgi.compendium:service=cm,version=1.3,framework=org.apache.felix.framework,uuid=e7d79bed-237a-4c4d-b912-920b57fef63b  
 2015-10-05 22:41:37,876 | INFO | pool-17-thread-1 | core               | 17 - org.apache.aries.jmx.core - 1.1.3 | Registering org.osgi.jmx.framework.wiring.BundleWiringStateMBean to MBeanServer com.sun.jmx.mbeanserver.JmxMBeanServer@dd1e765 with name osgi.core:type=wiringState,version=1.1,framework=org.apache.felix.framework,uuid=e7d79bed-237a-4c4d-b912-920b57fef63b  
 2015-10-05 22:41:37,877 | INFO | pool-17-thread-1 | core               | 17 - org.apache.aries.jmx.core - 1.1.3 | Registering org.osgi.jmx.framework.FrameworkMBean to MBeanServer com.sun.jmx.mbeanserver.JmxMBeanServer@dd1e765 with name osgi.core:type=framework,version=1.7,framework=org.apache.felix.framework,uuid=e7d79bed-237a-4c4d-b912-920b57fef63b  
 2015-10-05 22:41:37,878 | INFO | pool-17-thread-1 | core               | 17 - org.apache.aries.jmx.core - 1.1.3 | Registering org.osgi.jmx.framework.PackageStateMBean to MBeanServer com.sun.jmx.mbeanserver.JmxMBeanServer@dd1e765 with name osgi.core:type=packageState,version=1.5,framework=org.apache.felix.framework,uuid=e7d79bed-237a-4c4d-b912-920b57fef63b  
 2015-10-05 22:41:37,878 | INFO | pool-17-thread-1 | core               | 17 - org.apache.aries.jmx.core - 1.1.3 | Registering org.osgi.jmx.framework.ServiceStateMBean to MBeanServer com.sun.jmx.mbeanserver.JmxMBeanServer@dd1e765 with name osgi.core:type=serviceState,version=1.7,framework=org.apache.felix.framework,uuid=e7d79bed-237a-4c4d-b912-920b57fef63b  
 2015-10-05 22:41:38,145 | INFO | FelixStartLevel | CommandExtension         | 43 - org.apache.karaf.shell.core - 4.0.1 | Registering commands for bundle org.apache.karaf.bundle.core/4.0.1  
 2015-10-05 22:41:38,168 | INFO | FelixStartLevel | CommandExtension         | 43 - org.apache.karaf.shell.core - 4.0.1 | Registering commands for bundle org.apache.karaf.config.core/4.0.1  
 2015-10-05 22:41:38,178 | INFO | FelixStartLevel | CommandExtension         | 43 - org.apache.karaf.shell.core - 4.0.1 | Registering commands for bundle org.apache.karaf.deployer.kar/4.0.1  
 2015-10-05 22:41:38,180 | INFO | FelixStartLevel | CommandExtension         | 43 - org.apache.karaf.shell.core - 4.0.1 | Registering commands for bundle org.apache.karaf.diagnostic.core/4.0.1  
 2015-10-05 22:41:38,204 | INFO | FelixStartLevel | CommandExtension         | 43 - org.apache.karaf.shell.core - 4.0.1 | Registering commands for bundle org.apache.karaf.features.command/4.0.1  
 2015-10-05 22:41:38,230 | INFO | FelixStartLevel | CommandExtension         | 43 - org.apache.karaf.shell.core - 4.0.1 | Registering commands for bundle org.apache.karaf.instance.core/4.0.1  
 2015-10-05 22:41:38,256 | INFO | FelixStartLevel | CommandExtension         | 43 - org.apache.karaf.shell.core - 4.0.1 | Registering commands for bundle org.apache.karaf.jaas.command/4.0.1  
 2015-10-05 22:41:38,259 | INFO | FelixStartLevel | CommandExtension         | 43 - org.apache.karaf.shell.core - 4.0.1 | Updating commands for bundle org.apache.karaf.jaas.command/4.0.1  
 2015-10-05 22:41:38,260 | INFO | FelixStartLevel | CommandExtension         | 43 - org.apache.karaf.shell.core - 4.0.1 | Updating commands for bundle org.apache.karaf.jaas.command/4.0.1  
 2015-10-05 22:41:38,266 | INFO | FelixStartLevel | CommandExtension         | 43 - org.apache.karaf.shell.core - 4.0.1 | Registering commands for bundle org.apache.karaf.kar.core/4.0.1  
 2015-10-05 22:41:38,277 | INFO | FelixStartLevel | CommandExtension         | 43 - org.apache.karaf.shell.core - 4.0.1 | Registering commands for bundle org.apache.karaf.log.core/4.0.1  
 2015-10-05 22:41:38,281 | INFO | FelixStartLevel | CommandExtension         | 43 - org.apache.karaf.shell.core - 4.0.1 | Registering commands for bundle org.apache.karaf.package.core/4.0.1  
 2015-10-05 22:41:38,285 | INFO | FelixStartLevel | CommandExtension         | 43 - org.apache.karaf.shell.core - 4.0.1 | Registering commands for bundle org.apache.karaf.service.core/4.0.1  
 2015-10-05 22:41:38,327 | INFO | FelixStartLevel | CommandExtension         | 43 - org.apache.karaf.shell.core - 4.0.1 | Command registration delayed for bundle org.apache.karaf.shell.commands/4.0.1. Missing dependencies: [org.jledit.EditorFactory]  
 2015-10-05 22:41:38,581 | INFO | FelixStartLevel | CommandExtension         | 43 - org.apache.karaf.shell.core - 4.0.1 | Command registration delayed for bundle org.apache.karaf.shell.ssh/4.0.1. Missing dependencies: [org.apache.sshd.SshServer]  
 2015-10-05 22:41:38,593 | INFO | pool-23-thread-1 | SecurityUtils          | 47 - org.apache.sshd.core - 0.14.0 | BouncyCastle not registered, using the default JCE provider  
 2015-10-05 22:41:38,624 | INFO | FelixStartLevel | CommandExtension         | 43 - org.apache.karaf.shell.core - 4.0.1 | Registering commands for bundle org.apache.karaf.system.core/4.0.1  
 2015-10-05 22:41:38,631 | INFO | FelixStartLevel | CommandExtension         | 43 - org.apache.karaf.shell.core - 4.0.1 | Registering commands for bundle org.apache.karaf.shell.commands/4.0.1  
 2015-10-05 23:18:09,193 | INFO | nsole user karaf | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 | Adding features: camel-spring/[2.15.3,2.15.3]  
 2015-10-05 23:19:30,370 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 | Changes to perform:  
 2015-10-05 23:19:30,371 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  Region: root  
 2015-10-05 23:19:30,371 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |   Bundles to install:  
 2015-10-05 23:19:30,371 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.apache.camel/camel-catalog/2.15.3  
 2015-10-05 23:19:30,371 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.apache.camel/camel-commands-core/2.15.3  
 2015-10-05 23:19:30,372 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.apache.camel/camel-core/2.15.3  
 2015-10-05 23:19:30,372 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.apache.camel/camel-spring/2.15.3  
 2015-10-05 23:19:30,372 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.apache.camel.karaf/camel-karaf-commands/2.15.3  
 2015-10-05 23:19:30,373 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.apache.geronimo.specs/geronimo-jta_1.1_spec/1.1.1  
 2015-10-05 23:19:30,373 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.apache.karaf.bundle/org.apache.karaf.bundle.springstate/4.0.1  
 2015-10-05 23:19:30,373 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.apache.karaf.deployer/org.apache.karaf.deployer.spring/4.0.1  
 2015-10-05 23:19:30,373 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.aopalliance/1.0_6  
 2015-10-05 23:19:30,374 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.cglib/3.0_1  
 2015-10-05 23:19:30,374 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.jaxb-impl/2.2.6_1  
 2015-10-05 23:19:30,374 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.spring-aop/3.2.14.RELEASE_1  
 2015-10-05 23:19:30,374 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.spring-beans/3.2.14.RELEASE_1  
 2015-10-05 23:19:30,375 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.spring-context/3.2.14.RELEASE_1  
 2015-10-05 23:19:30,375 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.spring-context-support/3.2.14.RELEASE_1  
 2015-10-05 23:19:30,375 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.spring-core/3.2.14.RELEASE_1  
 2015-10-05 23:19:30,375 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.spring-expression/3.2.14.RELEASE_1  
 2015-10-05 23:19:30,376 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.spring-tx/3.2.14.RELEASE_1  
 2015-10-05 23:19:30,376 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.springframework/spring-aop/3.1.4.RELEASE  
 2015-10-05 23:19:30,376 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.springframework/spring-asm/3.1.4.RELEASE  
 2015-10-05 23:19:30,377 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.springframework/spring-beans/3.1.4.RELEASE  
 2015-10-05 23:19:30,377 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.springframework/spring-context/3.1.4.RELEASE  
 2015-10-05 23:19:30,377 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.springframework/spring-context-support/3.1.4.RELEASE  
 2015-10-05 23:19:30,377 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.springframework/spring-core/3.1.4.RELEASE  
 2015-10-05 23:19:30,378 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.springframework/spring-expression/3.1.4.RELEASE  
 2015-10-05 23:19:30,378 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.springframework.osgi/spring-osgi-core/1.2.1  
 2015-10-05 23:19:30,378 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.springframework.osgi/spring-osgi-extender/1.2.1  
 2015-10-05 23:19:30,378 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.springframework.osgi/spring-osgi-annotation/1.2.1  
 2015-10-05 23:19:30,379 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.springframework.osgi/spring-osgi-io/1.2.1  
 2015-10-05 23:19:30,379 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.codehaus.woodstox/stax2-api/3.1.4  
 2015-10-05 23:19:30,380 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |    mvn:org.codehaus.woodstox/woodstox-core-asl/4.4.1  
 2015-10-05 23:19:30,383 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 | Installing bundles:  
 2015-10-05 23:19:30,383 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.apache.camel/camel-catalog/2.15.3  
 2015-10-05 23:19:30,393 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.apache.camel/camel-commands-core/2.15.3  
 2015-10-05 23:19:30,399 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.apache.camel/camel-core/2.15.3  
 2015-10-05 23:19:30,436 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.apache.camel/camel-spring/2.15.3  
 2015-10-05 23:19:30,447 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.apache.camel.karaf/camel-karaf-commands/2.15.3  
 2015-10-05 23:19:30,451 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.apache.geronimo.specs/geronimo-jta_1.1_spec/1.1.1  
 2015-10-05 23:19:30,453 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.apache.karaf.bundle/org.apache.karaf.bundle.springstate/4.0.1  
 2015-10-05 23:19:30,458 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.apache.karaf.deployer/org.apache.karaf.deployer.spring/4.0.1  
 2015-10-05 23:19:30,462 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.aopalliance/1.0_6  
 2015-10-05 23:19:30,465 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.cglib/3.0_1  
 2015-10-05 23:19:30,471 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.jaxb-impl/2.2.6_1  
 2015-10-05 23:19:30,485 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.spring-aop/3.2.14.RELEASE_1  
 2015-10-05 23:19:30,498 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.spring-beans/3.2.14.RELEASE_1  
 2015-10-05 23:19:30,510 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.spring-context/3.2.14.RELEASE_1  
 2015-10-05 23:19:30,536 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.spring-context-support/3.2.14.RELEASE_1  
 2015-10-05 23:19:30,545 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.spring-core/3.2.14.RELEASE_1  
 2015-10-05 23:19:30,560 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.spring-expression/3.2.14.RELEASE_1  
 2015-10-05 23:19:30,567 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.spring-tx/3.2.14.RELEASE_1  
 2015-10-05 23:19:30,576 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.springframework/spring-aop/3.1.4.RELEASE  
 2015-10-05 23:19:30,585 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.springframework/spring-asm/3.1.4.RELEASE  
 2015-10-05 23:19:30,590 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.springframework/spring-beans/3.1.4.RELEASE  
 2015-10-05 23:19:30,606 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.springframework/spring-context/3.1.4.RELEASE  
 2015-10-05 23:19:30,636 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.springframework/spring-context-support/3.1.4.RELEASE  
 2015-10-05 23:19:30,655 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.springframework/spring-core/3.1.4.RELEASE  
 2015-10-05 23:19:30,667 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.springframework/spring-expression/3.1.4.RELEASE  
 2015-10-05 23:19:30,673 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.springframework.osgi/spring-osgi-core/1.2.1  
 2015-10-05 23:19:30,684 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.springframework.osgi/spring-osgi-extender/1.2.1  
 2015-10-05 23:19:30,691 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.springframework.osgi/spring-osgi-annotation/1.2.1  
 2015-10-05 23:19:30,696 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.springframework.osgi/spring-osgi-io/1.2.1  
 2015-10-05 23:19:30,702 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.codehaus.woodstox/stax2-api/3.1.4  
 2015-10-05 23:19:30,708 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  mvn:org.codehaus.woodstox/woodstox-core-asl/4.4.1  
 2015-10-05 23:19:31,219 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 | Starting bundles:  
 2015-10-05 23:19:31,337 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.apache.servicemix.bundles.aopalliance/1.0.0.6  
 2015-10-05 23:19:31,339 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.springframework.asm/3.1.4.RELEASE  
 2015-10-05 23:19:31,341 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.springframework.core/3.1.4.RELEASE  
 2015-10-05 23:19:31,343 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.springframework.beans/3.1.4.RELEASE  
 2015-10-05 23:19:31,344 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.springframework.aop/3.1.4.RELEASE  
 2015-10-05 23:19:31,346 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.springframework.expression/3.1.4.RELEASE  
 2015-10-05 23:19:31,347 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.springframework.context/3.1.4.RELEASE  
 2015-10-05 23:19:31,349 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.apache.servicemix.bundles.cglib/3.0.0.1  
 2015-10-05 23:19:31,352 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.apache.servicemix.bundles.spring-core/3.2.14.RELEASE_1  
 2015-10-05 23:19:31,354 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.apache.servicemix.bundles.spring-beans/3.2.14.RELEASE_1  
 2015-10-05 23:19:31,356 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.apache.servicemix.bundles.spring-aop/3.2.14.RELEASE_1  
 2015-10-05 23:19:31,357 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.springframework.osgi.io/1.2.1  
 2015-10-05 23:19:31,359 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.apache.servicemix.bundles.spring-expression/3.2.14.RELEASE_1  
 2015-10-05 23:19:31,360 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.apache.servicemix.bundles.spring-context/3.2.14.RELEASE_1  
 2015-10-05 23:19:31,362 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.springframework.osgi.core/1.2.1  
 2015-10-05 23:19:31,364 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.springframework.osgi.extensions.annotations/1.2.1  
 2015-10-05 23:19:31,366 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.springframework.osgi.extender/1.2.1  
 2015-10-05 23:19:31,393 | INFO | pool-25-thread-1 | ContextLoaderListener      | 77 - org.springframework.osgi.extender - 1.2.1 | Starting [org.springframework.osgi.extender] bundle v.[1.2.1]  
 2015-10-05 23:19:31,727 | INFO | pool-25-thread-1 | ExtenderConfiguration      | 77 - org.springframework.osgi.extender - 1.2.1 | No custom extender configuration detected; using defaults...  
 2015-10-05 23:19:31,740 | INFO | pool-25-thread-1 | TimerTaskExecutor        | 64 - org.apache.servicemix.bundles.spring-context - 3.2.14.RELEASE_1 | Initializing Timer  
 2015-10-05 23:19:31,839 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.springframework.context.support/3.1.4.RELEASE  
 2015-10-05 23:19:31,841 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.apache.servicemix.bundles.spring-tx/3.2.14.RELEASE_1  
 2015-10-05 23:19:31,842 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.apache.karaf.deployer.spring/4.0.1  
 2015-10-05 23:19:31,854 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.apache.servicemix.bundles.spring-context-support/3.2.14.RELEASE_1  
 2015-10-05 23:19:31,856 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.apache.karaf.bundle.springstate/4.0.1  
 2015-10-05 23:19:31,916 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.apache.camel.camel-catalog/2.15.3  
 2015-10-05 23:19:31,922 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.apache.camel.camel-spring/2.15.3  
 2015-10-05 23:19:31,939 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.apache.camel.camel-core/2.15.3  
 2015-10-05 23:19:31,943 | INFO | pool-25-thread-1 | Activator            | 53 - org.apache.camel.camel-core - 2.15.3 | Camel activator starting  
 2015-10-05 23:19:31,982 | INFO | pool-25-thread-1 | Activator            | 53 - org.apache.camel.camel-core - 2.15.3 | Camel activator started  
 2015-10-05 23:19:32,007 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.apache.camel.camel-commands-core/2.15.3  
 2015-10-05 23:19:32,012 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  stax2-api/3.1.4  
 2015-10-05 23:19:32,014 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  woodstox-core-asl/4.4.1  
 2015-10-05 23:19:32,051 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.apache.geronimo.specs.geronimo-jta_1.1_spec/1.1.1  
 2015-10-05 23:19:32,054 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.apache.servicemix.bundles.jaxb-impl/2.2.6.1  
 2015-10-05 23:19:32,056 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 |  org.apache.camel.karaf.camel-karaf-commands/2.15.3  
 2015-10-05 23:19:32,315 | INFO | pool-25-thread-1 | FeaturesServiceImpl       | 8 - org.apache.karaf.features.core - 4.0.1 | Done.  
 2015-10-05 23:20:53,179 | INFO | nsole user karaf | ultOsgiApplicationContextCreator | 77 - org.springframework.osgi.extender - 1.2.1 | Discovered configurations {osgibundle:/META-INF/spring/*.xml} in bundle [camel-example-osgi (camel-example-osgi)]  
 2015-10-05 23:20:53,219 | INFO | ExtenderThread-1 | OsgiBundleXmlApplicationContext | 64 - org.apache.servicemix.bundles.spring-context - 3.2.14.RELEASE_1 | Refreshing OsgiBundleXmlApplicationContext(bundle=camel-example-osgi, config=osgibundle:/META-INF/spring/*.xml): startup date [Mon Oct 05 23:20:53 MYT 2015]; root of context hierarchy  
 2015-10-05 23:20:53,262 | INFO | ExtenderThread-1 | OsgiBundleXmlApplicationContext | 64 - org.apache.servicemix.bundles.spring-context - 3.2.14.RELEASE_1 | Application Context service already unpublished  
 2015-10-05 23:20:53,325 | INFO | ExtenderThread-1 | XmlBeanDefinitionReader     | 63 - org.apache.servicemix.bundles.spring-beans - 3.2.14.RELEASE_1 | Loading XML bean definitions from URL [bundle://82.0:0/META-INF/spring/camelContext.xml]  
 2015-10-05 23:20:53,611 | INFO | ExtenderThread-1 | CamelNamespaceHandler      | 54 - org.apache.camel.camel-spring - 2.15.3 | OSGi environment detected.  
 2015-10-05 23:20:54,970 | INFO | ExtenderThread-1 | WaiterApplicationContextExecutor | 77 - org.springframework.osgi.extender - 1.2.1 | No outstanding OSGi service dependencies, completing initialization for OsgiBundleXmlApplicationContext(bundle=camel-example-osgi, config=osgibundle:/META-INF/spring/*.xml)  
 2015-10-05 23:20:55,034 | INFO | ExtenderThread-2 | DefaultListableBeanFactory    | 63 - org.apache.servicemix.bundles.spring-beans - 3.2.14.RELEASE_1 | Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@69f2fae4: defining beans [template,consumerTemplate,camel-1:beanPostProcessor,camel-1,myTransform]; root of factory hierarchy  
 2015-10-05 23:20:55,302 | INFO | ExtenderThread-2 | OsgiSpringCamelContext      | 53 - org.apache.camel.camel-core - 2.15.3 | Apache Camel 2.15.3 (CamelContext: camel-1) is starting  
 2015-10-05 23:20:55,304 | INFO | ExtenderThread-2 | ManagedManagementStrategy    | 53 - org.apache.camel.camel-core - 2.15.3 | JMX is enabled  
 2015-10-05 23:20:55,651 | INFO | ExtenderThread-2 | OsgiSpringCamelContext      | 53 - org.apache.camel.camel-core - 2.15.3 | AllowUseOriginalMessage is enabled. If access to the original message is not needed, then its recommended to turn this option off as it may improve performance.  
 2015-10-05 23:20:55,651 | INFO | ExtenderThread-2 | OsgiSpringCamelContext      | 53 - org.apache.camel.camel-core - 2.15.3 | StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html  
 2015-10-05 23:20:55,742 | INFO | ExtenderThread-2 | OsgiSpringCamelContext      | 53 - org.apache.camel.camel-core - 2.15.3 | Route: route1 started and consuming from: Endpoint[timer://myTimer?fixedRate=true&period=2000]  
 2015-10-05 23:20:55,745 | INFO | ExtenderThread-2 | OsgiSpringCamelContext      | 53 - org.apache.camel.camel-core - 2.15.3 | Total 1 routes, of which 1 is started.  
 2015-10-05 23:20:55,752 | INFO | ExtenderThread-2 | OsgiSpringCamelContext      | 53 - org.apache.camel.camel-core - 2.15.3 | Apache Camel 2.15.3 (CamelContext: camel-1) started in 0.444 seconds  
 2015-10-05 23:20:55,759 | INFO | ExtenderThread-2 | OsgiBundleXmlApplicationContext | 64 - org.apache.servicemix.bundles.spring-context - 3.2.14.RELEASE_1 | Publishing application context as OSGi service with properties {org.springframework.context.service.name=camel-example-osgi, Bundle-SymbolicName=camel-example-osgi, Bundle-Version=2.15.3}  
 2015-10-05 23:20:55,775 | INFO | ExtenderThread-2 | ContextLoaderListener      | 77 - org.springframework.osgi.extender - 1.2.1 | Application context successfully refreshed (OsgiBundleXmlApplicationContext(bundle=camel-example-osgi, config=osgibundle:/META-INF/spring/*.xml))  
 2015-10-05 23:20:56,759 | INFO | timer://myTimer | MyTransform           | 82 - camel-example-osgi - 2.15.3 | >>>> SpringDSL set body: Mon Oct 05 23:20:56 MYT 2015  
 2015-10-05 23:20:56,766 | INFO | timer://myTimer | ExampleRouter          | 53 - org.apache.camel.camel-core - 2.15.3 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: SpringDSL set body: Mon Oct 05 23:20:56 MYT 2015]  
 2015-10-05 23:20:58,746 | INFO | timer://myTimer | MyTransform           | 82 - camel-example-osgi - 2.15.3 | >>>> SpringDSL set body: Mon Oct 05 23:20:58 MYT 2015  
 2015-10-05 23:20:58,747 | INFO | timer://myTimer | ExampleRouter          | 53 - org.apache.camel.camel-core - 2.15.3 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: SpringDSL set body: Mon Oct 05 23:20:58 MYT 2015]  
 2015-10-05 23:21:00,746 | INFO | timer://myTimer | MyTransform           | 82 - camel-example-osgi - 2.15.3 | >>>> SpringDSL set body: Mon Oct 05 23:21:00 MYT 2015  
 2015-10-05 23:21:00,747 | INFO | timer://myTimer | ExampleRouter          | 53 - org.apache.camel.camel-core - 2.15.3 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: SpringDSL set body: Mon Oct 05 23:21:00 MYT 2015]  
 2015-10-05 23:21:02,746 | INFO | timer://myTimer | MyTransform           | 82 - camel-example-osgi - 2.15.3 | >>>> SpringDSL set body: Mon Oct 05 23:21:02 MYT 2015  
 2015-10-05 23:21:02,747 | INFO | timer://myTimer | ExampleRouter          | 53 - org.apache.camel.camel-core - 2.15.3 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: SpringDSL set body: Mon Oct 05 23:21:02 MYT 2015]  
 2015-10-05 23:21:04,745 | INFO | timer://myTimer | MyTransform           | 82 - camel-example-osgi - 2.15.3 | >>>> SpringDSL set body: Mon Oct 05 23:21:04 MYT 2015  
 2015-10-05 23:21:04,746 | INFO | timer://myTimer | ExampleRouter          | 53 - org.apache.camel.camel-core - 2.15.3 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: SpringDSL set body: Mon Oct 05 23:21:04 MYT 2015]  
 2015-10-05 23:21:06,745 | INFO | timer://myTimer | MyTransform           | 82 - camel-example-osgi - 2.15.3 | >>>> SpringDSL set body: Mon Oct 05 23:21:06 MYT 2015  
 2015-10-05 23:21:06,746 | INFO | timer://myTimer | ExampleRouter          | 53 - org.apache.camel.camel-core - 2.15.3 | Exchange[ExchangePattern: InOnly, BodyType: String, Body: SpringDSL set body: Mon Oct 05 23:21:06 MYT 2015]  
   
 karaf@root()>  

And if you list the bundles again, the newly installed bundles show up as active.

 karaf@root()> bundle:list  
 START LEVEL 100 , List Threshold: 50  
 ID | State | Lvl | Version | Name  
 -----------------------------------------------------------------------  
 51 | Active | 80 | 2.15.3 | camel-catalog  
 52 | Active | 80 | 2.15.3 | camel-commands-core  
 53 | Active | 80 | 2.15.3 | camel-core  
 54 | Active | 80 | 2.15.3 | camel-spring  
 55 | Active | 80 | 2.15.3 | camel-karaf-commands  
 56 | Active | 80 | 1.1.1  | geronimo-jta_1.1_spec  
 61 | Active | 80 | 2.2.6.1 | Apache ServiceMix :: Bundles :: jaxb-impl  
 80 | Active | 80 | 3.1.4  | Stax2 API  
 81 | Active | 80 | 4.4.1  | Woodstox XML-processor  
 82 | Active | 80 | 2.15.3 | camel-example-osgi  
 karaf@root()>   

Just like a Docker container, you can stop and uninstall the bundle and then exit Apache Karaf.

 karaf@root()> bundle:stop camel-example-osgi  
 karaf@root()> bundle:uninstall camel-example-osgi  
 karaf@root()> bundle:list  
 START LEVEL 100 , List Threshold: 50  
 ID | State | Lvl | Version | Name  
 -----------------------------------------------------------------------  
 51 | Active | 80 | 2.15.3 | camel-catalog  
 52 | Active | 80 | 2.15.3 | camel-commands-core  
 53 | Active | 80 | 2.15.3 | camel-core  
 54 | Active | 80 | 2.15.3 | camel-spring  
 55 | Active | 80 | 2.15.3 | camel-karaf-commands  
 56 | Active | 80 | 1.1.1  | geronimo-jta_1.1_spec  
 61 | Active | 80 | 2.2.6.1 | Apache ServiceMix :: Bundles :: jaxb-impl  
 80 | Active | 80 | 3.1.4  | Stax2 API  
 81 | Active | 80 | 4.4.1  | Woodstox XML-processor  
 karaf@root()>   
 karaf@root()> system:shutdown  
 Confirm: halt instance root (yes/no): yes  
 karaf@root()>   
   

In this article we have only scratched the surface of what Apache Karaf can do, and it certainly delivers. If you are looking for an alternative to Docker containers for deploying modular Java applications, Apache Karaf is well worth the time to look into. The links at the end of this post will provide a further learning experience.
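
Before those links, and purely for orientation, here is a rough sketch of the route that has been firing in the log output above, rewritten in Camel's Java DSL. The real camel-example-osgi bundle uses the Spring XML DSL; the two-second timer period and the endpoint names are my assumptions based on the log lines, so treat this as a sketch rather than the actual example code.

 import org.apache.camel.builder.RouteBuilder;

 // Sketch only: a timer-driven route similar to what camel-example-osgi logs above.
 public class ExampleRouteSketch extends RouteBuilder {
     @Override
     public void configure() throws Exception {
         from("timer://myTimer?period=2000")                          // fire roughly every two seconds
             .setBody().simple("SpringDSL set body: ${date:now:yyyyMMdd-HHmmss}")  // build a string body with a timestamp
             .to("log:ExampleRouter");                                // log the exchange, as seen in the karaf log
     }
 }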

http://liquid-reality.de/display/liquid/Karaf+Tutorials

https://karaf.apache.org/manual/latest/users-guide/

Sunday, October 25, 2015

Learning Java Eden Space


If you are a Java developer, you will have come across Java garbage collection, which frees the objects created by your application so that they do not end up occupying the whole Java heap. In today's article, we will look into the Java heap, and in particular into the Java eden space. First, let's look at the general Java heap.

From this StackOverflow answer:

Heap memory

The heap memory is the runtime data area from which the Java VM allocates memory for all class instances and arrays. The heap may be of a fixed or variable size. The garbage collector is an automatic memory management system that reclaims heap memory for objects.

Eden Space: The pool from which memory is initially allocated for most objects.

Survivor Space: The pool containing objects that have survived the garbage collection of the Eden space.

Tenured Generation: The pool containing objects that have existed for some time in the survivor space.

When you create a new object, the JVM allocates a part of the heap for it. Visually, the heap layout looks something like the following.

                   +-----+  
                   |     |  
   <-minor gc->    v     v   <------------- major gc---------------------->  
   +------------+-----+-----+----------------------------------------------+-------------+  
   |            |     |     |                                              |             |
   | Eden       | S0  | S1  |  Tenure Generation                           | Perm gen    |
   |            |     |     |                                              |             |
   +------------+-----+-----+----------------------------------------------+-------------+  
    <---------------------jvm heap (-Xms -Xmx)----------------------------> -XX:PermSize  
    <-- young gen(-Xmn)---->                                                -XX:MaxPermSize  

When the eden space fills up with objects and a minor GC is performed, surviving objects are copied into one of the survivor spaces, s0 or s1. At any point in time, one of the two survivor spaces is empty. Because the eden space is relatively small in comparison to the tenured generation, the GC that happens in eden is quick. Eden and both survivor spaces are collectively known as the young or new generation.

To understand how the young generation heap gets freed, this article provides a detailed explanation.

The Sun/Oracle HotSpot JVM further divides the young generation into three sub-areas: one large area named "Eden" and two smaller "survivor spaces" named "From" and "To". As a rule, new objects are allocated in "Eden" (with the exception that if a new object is too large to fit into "Eden" space, it will be directly allocated in the old generation). During a GC, the live objects in "Eden" first move into the survivor spaces and stay there until they have reached a certain age (in terms of numbers of GCs passed since their creation), and only then they are transferred to the old generation. Thus, the role of the survivor spaces is to keep young objects in the young generation for a little longer than just their first GC, in order to be able to still collect them quickly should they die soon afterwards.
Based on the assumption that most of the young objects may be deleted during a GC, a copying strategy ("copy collection") is being used for young generation GC. At the beginning of a GC, the survivor space "To" is empty and objects can only exist in "Eden" or "From". Then, during the GC, all objects in "Eden" that are still being referenced are moved into "To". Regarding "From", the still referenced objects in this space are handled depending on their age. If they have not reached a certain age ("tenuring threshold"), they are also moved into "To". Otherwise they are moved into the old generation. At the end of this copying procedure, "Eden" and "From" can be considered empty (because they only contain dead objects), and all live objects in the young generation are located in "To". Should "to" fill up at some point during the GC, all remaining objects are moved into the old generation instead (and will never return). As a final step, "From" and "To" swap their roles (or, more precisely, their names) so that "To" is empty again for the next GC and "From" contains all remaining young objects.

As you can observe from the diagram above, you can set the amount of heap for eden and the survivor spaces using -Xmn on the java command line. There is also -XX:SurvivorRatio=ratio, and you can find further information here for Java 8. Note that although the diagram above shows a perm gen, it has been removed in Java 8, so always find out which Java version runs your application and refer to the matching documentation.
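
To see eden filling up and minor GCs kicking in, here is a small throwaway program of my own (an illustrative sketch, not taken from the referenced articles). It allocates a stream of short-lived blocks in eden and deliberately keeps a few of them reachable so that they get copied into a survivor space:

 // Churn.java - illustrative sketch: allocate short-lived garbage so minor GCs occur.
 public class Churn {
     static byte[] survivor;  // the occasionally kept block lives long enough to be copied to a survivor space

     public static void main(String[] args) {
         for (int i = 0; i < 1_000_000; i++) {
             byte[] block = new byte[1024];  // allocated in eden, mostly dies young
             if (i % 1000 == 0) {
                 survivor = block;           // keep a reference so a few objects survive the next minor GC
             }
         }
         System.out.println("done, last kept block size = " + survivor.length);
     }
 }

Compile it and run it with a deliberately small heap so collections happen often; the sizes here are arbitrary and only chosen to force frequent minor GCs:

 javac Churn.java
 java -Xms64m -Xmx64m -Xmn16m -XX:SurvivorRatio=8 -verbose:gc Churn

You should see a stream of minor GC lines printed by -verbose:gc, each showing the heap occupancy dropping back as eden is emptied.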

If you want to monitor the statistics of eden, you can use jstat. I have previously written an article about jstat, and you can read here what jstat is and how to use it. You can also enable GC logging so that the JVM writes GC statistics into a file; you can read more about that here.
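
For quick reference, and assuming the Java 8 HotSpot tooling (replace <pid> with the process id reported by jps, and the trailing ... with your own main class or jar), sampling the collector every second and turning on a GC log look like this:

 jstat -gcutil <pid> 1000
 java -verbose:gc -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps ...

In the -gcutil output, the E column shows eden utilisation as a percentage of its capacity, and S0/S1 show the two survivor spaces.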

Till we meet again in the next article. Please consider donating, thank you!

Saturday, October 24, 2015

Study MongoDB security by setting up and configuring server and client on a secure line

It's been a while since my last MongoDB learning session; the last topic was MongoDB administration. Today we will learn another MongoDB topic: security. In general, the MongoDB security context means

Maintaining a secure MongoDB deployment requires administrators to implement controls to ensure that users and applications have access to only the data that they require. MongoDB provides features that allow administrators to implement these controls and restrictions for any MongoDB deployment.

This article references the official documentation, which can be found here. As the security topic is pretty huge, in this short article we will focus on how to set up the MongoDB server to use SSL and how a client can access database resources securely.

First, make sure you have installed the server and client packages. If you are on a deb-based Linux distribution, it is as easy as sudo apt-get install mongodb-clients mongodb-server. Once both packages are installed, you can check the log file at /var/log/mongodb/mongodb.log for output similar to the following. Our MongoDB version is 2.6.3 and it was built with OpenSSL support.

 2015-09-27T16:04:48.849+0800 [initandlisten] db version v2.6.3  
 2015-09-27T16:04:48.849+0800 [initandlisten] git version: nogitversion  
 2015-09-27T16:04:48.849+0800 [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014  

Next, let's generate a key pair and a self-signed certificate.

 user@localhost:~/test1$ openssl req -newkey rsa:2048 -new -x509 -days 365 -nodes -out mongodb-cert.crt -keyout mongodb-cert.key  
 Generating a 2048 bit RSA private key  
 .............................+++  
 ..................................................................................................................................................................................................................+++  
 writing new private key to 'mongodb-cert.key'  
 -----  
 You are about to be asked to enter information that will be incorporated  
 into your certificate request.  
 What you are about to enter is what is called a Distinguished Name or a DN.  
 There are quite a few fields but you can leave some blank  
 For some fields there will be a default value,  
 If you enter '.', the field will be left blank.  
 -----  
 Country Name (2 letter code) [AU]:MY  
 State or Province Name (full name) [Some-State]:KL  
 Locality Name (eg, city) []:Kuala Lumpur  
 Organization Name (eg, company) [Internet Widgits Pty Ltd]:example.com  
 Organizational Unit Name (eg, section) []:Engineering  
 Common Name (e.g. server FQDN or YOUR name) []:Jason Wee  
 Email Address []:jason@example.com  
 user@localhost:~/test1$ ls  
 mongodb-cert.crt mongodb-cert.key  

Now concatenate the key and the certificate into a single file with the extension .pem.

 user@localhost:~/test1$ cat mongodb-cert.key mongodb-cert.crt > mongodb.pem  

Now stop the MongoDB instance if it is running, as we will configure the server to use the certificate we generated previously.

 user@localhost:~/test1$ sudo systemctl status mongodb  
 ● mongodb.service - An object/document-oriented database  
   Loaded: loaded (/lib/systemd/system/mongodb.service; enabled; vendor preset: enabled)  
   Active: inactive (dead) since Sun 2015-09-27 16:13:34 MYT; 23min ago  
    Docs: man:mongod(1)  
  Main PID: 15343 (code=exited, status=0/SUCCESS)  
   
 Sep 27 16:04:48 localhost systemd[1]: Started An object/document-oriented database.  
 Sep 27 16:04:48 localhost systemd[1]: Starting An object/document-oriented database...  
 Sep 27 16:13:33 localhost systemd[1]: Stopping An object/document-oriented database...  
 Sep 27 16:13:34 localhost systemd[1]: Stopped An object/document-oriented database.  
 Sep 27 16:36:30 localhost systemd[1]: Stopped An object/document-oriented database.  
 user@localhost:~/test1$ sudo tail -10 /etc/mongodb.conf   
 # Size limit for in-memory storage of op ids.  
 #opIdMem = <bytes>  
   
 # SSL options  
 # Enable SSL on normal ports  
 sslOnNormalPorts = true  
 # SSL Key file and password  
 #sslPEMKeyFile = /etc/ssl/mongodb.pem  
 sslPEMKeyFile = /home/user/test1/mongodb.pem  
 #sslPEMKeyPassword = pass  
 user@localhost:~/test1$   

In the output above, as an example, I have pointed sslPEMKeyFile at the mongodb.pem file and set sslOnNormalPorts to true. Now start the MongoDB instance.

 user@localhost:~/test1$ sudo systemctl start mongodb  
 user@localhost:~/test1$   

In the log file, notice that SSL is enabled and there are no SSL-related errors.

 2015-09-27T16:46:41.648+0800 [initandlisten] options: { config: "/etc/mongodb.conf", net: { bindIp: "127.0.0.1", ssl: { PEMKeyFile: "/home/user/test1/mongodb.pem", mode: "requireSSL" } }, storage: { dbPath: "/var/lib/mongodb", journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, path: "/var/log/mongodb/mongodb.log" } }  
 2015-09-27T16:46:41.788+0800 [initandlisten] journal dir=/var/lib/mongodb/journal  
 2015-09-27T16:46:41.797+0800 [initandlisten] recover : no journal files present, no recovery needed  
 2015-09-27T16:46:42.162+0800 [initandlisten] waiting for connections on port 27017 ssl  
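
As a side note, and based on the parsed options shown in the log line above, the equivalent settings in MongoDB's newer YAML configuration format would look roughly like this (a sketch only; the legacy .conf style used above works fine for 2.6):

 net:
   ssl:
     mode: requireSSL
     PEMKeyFile: /home/user/test1/mongodb.pem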

The server configuration and setup is now done, so we will focus on the MongoDB client. If you connect to MongoDB using its client without any SSL options, you will get an error.

 user@localhost:~/test1$ mongo foo  
 MongoDB shell version: 2.6.3  
 connecting to: foo  
 2015-09-27T17:22:54.300+0800 DBClientCursor::init call() failed  
 2015-09-27T17:22:54.302+0800 Error: DBClientBase::findN: transport error: 127.0.0.1:27017 ns: admin.$cmd query: { whatsmyuri: 1 } at src/mongo/shell/mongo.js:146  
 exception: connect failed  
 user@localhost:~/test1$ mongo --ssl --sslPEMKeyFile mongodb.pem  
 MongoDB shell version: 2.6.3  
 connecting to: test  
 Server has startup warnings:   
 2015-09-27T16:46:41.647+0800 [initandlisten]   
 2015-09-27T16:46:41.647+0800 [initandlisten] ** NOTE: This is a 32 bit MongoDB binary.  
 2015-09-27T16:46:41.647+0800 [initandlisten] **    32 bit builds are limited to less than 2GB of data (or less with --journal).  
 2015-09-27T16:46:41.647+0800 [initandlisten] **    See http://dochub.mongodb.org/core/32bit  
 2015-09-27T16:46:41.647+0800 [initandlisten]   
 > show dbs  
 admin (empty)  
 local 0.078GB  
 >   

As you can see above, you need to specify the --ssl parameter and the pem file. That's it for this article; if you want to go the distance, try using tcpdump to listen to the traffic on this port and confirm that it is no longer readable in plain text. Good luck!
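
To close the loop from application code, here is a hedged sketch of connecting over SSL with the MongoDB Java driver. I am assuming the 3.x driver API (newer than the 2.6 shell used above); since our certificate is self-signed, it first has to be imported into a trust store that the JVM is told about, and hostname validation is relaxed because the certificate's common name does not match localhost.

 // Sketch: connect to the SSL-enabled mongod using the MongoDB Java driver 3.x.
 // Assumes the self-signed certificate was imported into a trust store, e.g.:
 //   keytool -importcert -alias mongodb -file mongodb-cert.crt -keystore truststore.jks
 // and that the JVM is started with:
 //   -Djavax.net.ssl.trustStore=truststore.jks -Djavax.net.ssl.trustStorePassword=changeit
 import com.mongodb.MongoClient;
 import com.mongodb.MongoClientOptions;
 import com.mongodb.ServerAddress;

 public class SslClientSketch {
     public static void main(String[] args) {
         MongoClientOptions options = MongoClientOptions.builder()
                 .sslEnabled(true)                   // speak SSL to the server
                 .sslInvalidHostNameAllowed(true)    // self-signed cert, CN does not match localhost
                 .build();
         MongoClient client = new MongoClient(new ServerAddress("localhost", 27017), options);
         for (String name : client.listDatabaseNames()) {   // rough equivalent of "show dbs"
             System.out.println(name);
         }
         client.close();
     }
 }

And, as suggested above, you can eyeball the wire traffic with tcpdump; before SSL you would see readable query strings, afterwards only ciphertext:

 sudo tcpdump -i lo -A 'port 27017'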