Sunday, October 12, 2014

Enable FSS in journald and verify using journalctl

Last time we learned the basics of journalctl; today we will enable FSS in journald.

Forward Secure Sealing (FSS) allows applications to cryptographically "seal" the system logs at regular time intervals, so that if your machine is hacked the attacker cannot alter log history (though they can still delete it entirely). It works by generating a key pair consisting of a "sealing key" and a "verification key".

Read more at https://eprint.iacr.org/2013/397

Okay, let's set it up. We will use CentOS 7 for this exercise.

As root, let's set up the keys.
[root@centos7-test1 ~]# journalctl --setup-keys
/var/log/journal is not a directory, must be using persistent logging for FSS.

Hmm, not possible yet: by default journald keeps its logs under /run, which is mounted on tmpfs, so we first need to enable persistent storage for journald.

  1. As root, create the directory: # mkdir -p /var/log/journal

  2. edit /etc/systemd/journald.conf and uncomment the following.

    1. Storage=persistent

    2. Seal=yes



  3. Restart journald: systemctl restart systemd-journald

  4. Rerun journalctl --setup-keys. See the screenshot below.
    journald-fss

  5. Now we verify the log:
    [root@centos7-test1 ~]# journalctl --verify
    PASS: /var/log/journal/e25a4e0b618f43879af033a74902d0af/system.journal



Looks good. I am not entirely sure about the role of the verification key here, since --verify reports PASS regardless; presumably verification would fail if the logs had been tampered with.
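
If you kept the verification key that --setup-keys printed, you can pass it explicitly. Without it, --verify only checks the internal consistency of the journal files; with the key, the FSS seals themselves are checked as well. A sketch (substitute your own verification key, which I am not reproducing here):
[root@centos7-test1 ~]# journalctl --verify --verify-key=<verification-key-from-setup-keys>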

Saturday, October 11, 2014

Learning IPv6

Recently I was fortunate enough to enable IPv6 on my router, and all connected devices now have IPv6 addresses. You may ask: why would one want to switch to IPv6?

Let's start simple: look at the graph at https://www.google.com/intl/en/ipv6/statistics.html; IPv6 adoption has been growing steadily since mid-2010. If that's not convincing enough to enable IPv6 on the router, then read on. I will explain based on the article found here.

First, what is IPv6?

Internet Protocol version 6 (IPv6) is the latest version of the Internet Protocol (IP), the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the long-anticipated problem of IPv4 address exhaustion.

Because the IPv4 address space is being exhausted at current usage rates, devices released in the near future will not be able to get a unique public address from the IPv4 pool.

Below is a point-form summary of facts about IPv6.

  •  As of June 2014, the percentage of users reaching Google services with IPv6 surpassed 4% for the first time.

  • IPv6 uses a 128-bit address, allowing 2^128, or approximately 3.4 × 10^38 addresses, whereas IPv4 uses a 32-bit address and provides only about 4.3 billion addresses.

  • IPv4 and IPv6 are not interoperable and thus adoption has been slow. To expedite adoption, transition mechanisms have been devised to permit communication between IPv4 and IPv6 hosts.

  • IPv6 was first formally described in Internet standard document RFC 2460, published in December 1998.

  • IPv6 simplifies aspects of address assignment (stateless address autoconfiguration), network renumbering and router announcements when changing network connectivity providers.

  • IPv6 simplifies processing of packets by routers by moving responsibility for packet fragmentation to the end points.

  • The standard size of a subnet in IPv6 is 2^64 addresses, the square of the size of the entire IPv4 address space.

  • IPv6 does not implement traditional IP broadcast, i.e. the transmission of a packet to all hosts on the attached link using a special broadcast address, and therefore does not define broadcast addresses. In IPv6, the same result can be achieved by sending a packet to the link-local all-nodes multicast group at address ff02::1, which is analogous to IPv4 multicast to address 224.0.0.1 (a quick ping6 example follows this list).

  • The IPv6 packet header has a fixed size (40 octets).

  • IPv4 limits packets to 65,535 (2^16−1) octets of payload. An IPv6 node can optionally handle packets over this limit, referred to as jumbograms, which can be as large as 4,294,967,295 (2^32−1) octets.

  • In the Domain Name System, hostnames are mapped to IPv6 addresses by AAAA resource records, so-called quad-A records.
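
As a quick illustration of the all-nodes multicast group mentioned in the list above, you can ping ff02::1 and every IPv6 host on the local link should answer. The interface must be given because the address is link-scoped; wlan0 here is just an example, use your own interface name.
$ ping6 -c 2 -I wlan0 ff02::1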


Ipv6_header

IPv6 addresses are represented as eight groups of four hexadecimal digits separated by colons (:), for example 2001:0db8:85a3:0042:1000:8a2e:0370:7334, but methods of abbreviating this full notation exist. IPv6 unicast addresses other than those that start with binary 000 are logically divided into two parts: a 64-bit (sub-)network prefix and a 64-bit interface identifier.

Ipv6_address_leading_zeros

For convenience, an IPv6 address may be abbreviated to shorter notations by application of the following rules, where possible.

  • One or more leading zeroes from any group of hexadecimal digits are removed; this is usually done either to all or to none of the leading zeroes. For example, the group 0042 is converted to 42.

  • Consecutive sections of zeroes are replaced with a double colon (::). The double colon may only be used once in an address, as multiple use would render the address indeterminate.


An example of application of these rules:

  • Initial address: 2001:0db8:0000:0000:0000:ff00:0042:8329

  • After removing all leading zeroes: 2001:db8:0:0:0:ff00:42:8329

  • After omitting consecutive sections of zeroes: 2001:db8::ff00:42:8329


The loopback address, 0000:0000:0000:0000:0000:0000:0000:0001, may be abbreviated to ::1 by using both rules.

Stateless Autoconfiguration

IPv6 lets any host generate its own IP address and check if it's unique in the scope where it will be used. IPv6 addresses consist of two parts. The leftmost 64 bits are the subnet prefix to which the host is connected, and the rightmost 64 bits are the identifier of the host's interface on the subnet. This means that the identifier need only be unique on the subnet to which the host is connected, which makes it much easier for the host to check for uniqueness on its own.

|Subnet Prefix 64 bits | Interface identifier 64 bits |

The MAC address is used to derive the link-local address of the interface. I have written a blog post on how to do just that; please read it here.

Once the link-local address is derived, drop the fe80 prefix and concatenate the remaining interface identifier with the LAN's IPv6 network prefix.

So, for example, with MAC address 4c:33:22:11:aa:ee:

Derived link-local address: fe80::4e33:22ff:fe11:aaee

Public IP: 2001:e68:5424:d2dd:4e33:22ff:fe11:aaee, where 2001:e68:5424:d2dd is the LAN IPv6 prefix assigned by the router and 4e33:22ff:fe11:aaee is the link-local interface identifier without the fe80 prefix.

Dual IP stack implementation

Dual-stack (or native dual-stack) refers to side-by-side implementation of IPv4 and IPv6. That is, both protocols run on the same network infrastructure, and there's no need to encapsulate IPv6 inside IPv4 (using tunneling) or vice-versa. Dual-stack is defined in RFC 4213.

Dual-stack should only be considered a transitional technique to facilitate the adoption and deployment of IPv6, as it has some major drawbacks: it roughly doubles the security exposure of the existing network infrastructure (threats can now arrive over both IPv4 and IPv6), and it adds load to the global networking infrastructure through increased Internet traffic. The ultimate objective is to deploy a single IPv6 stack globally.

There is more to be found on Wikipedia, http://en.wikipedia.org/wiki/IPv6, but the above should get you started. It worked for me the first time I enabled IPv6 and has worked wonderfully ever since.

Friday, October 10, 2014

Derive IPv6 link-local address for network interface

When you show the interface configuration using the ip command, you will notice there is an inet6 address starting with fe80. Today we will learn what this is and how this address is derived. Example below:
user@localhost:~$ ip addr show wlan0
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 4c:33:22:11:aa:ee brd ff:ff:ff:ff:ff:ff
inet 192.168.133.50/24 brd 192.168.133.255 scope global wlan0
valid_lft forever preferred_lft forever
inet6 2001:e68:5424:d2dd:4e33:22ff:fe11:aaee/64 scope global dynamic
valid_lft 86399sec preferred_lft 14399sec
inet6 fe80::4e33:22ff:fe11:aaee/64 scope link
valid_lft forever preferred_lft forever

So first, what is a link-local address?

In a computer network, a link-local address is a network address that is valid only for communications within the network segment (link) or the broadcast domain that the host is connected to.

Link-local addresses are usually not guaranteed to be unique beyond a single network segment. Routers therefore do not forward packets with link-local addresses.

For protocols that have only link-local addresses, such as Ethernet, hardware addresses that the manufacturer delivers in network circuits are unique, consisting of a vendor identification and a serial identifier.

Link-local addresses for IPv4 are defined in the address block 169.254.0.0/16, in CIDR notation. In IPv6, they are assigned with the fe80::/10 prefix.

So it is an address that is local to a single network segment and is not routable beyond the router.

With this said, let's calculate the link-local address step by step (a small script that automates the same calculation follows the steps).

1. Take the MAC address from the ip command.
from the example above: 4c:33:22:11:aa:ee

2. Insert ff:fe in the middle of the MAC address.
4c:33:22:ff:fe:11:aa:ee

3. Reformat to IPv6 notation by concatenating pairs of octets into 16-bit groups.
4c33:22ff:fe11:aaee

4. Convert the first octet from hexadecimal to binary.
4c -> 01001100

5. Invert the bit at position 6 (counting from the left, with the first bit as position 0); this is the universal/local bit.
01001100 -> 01001110

6. Convert the octet from step 5 back to hexadecimal.
01001110 -> 4e

7. Replace the first octet with the newly calculated value from step 6.
4e33:22ff:fe11:aaee

8. Prepend the link-local prefix.
fe80::4e33:22ff:fe11:aaee
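
The same calculation can be scripted. Below is a minimal bash sketch of the steps above; it assumes a well-formed, colon-separated MAC address and does not compress leading zeros in the resulting groups.
$ mac="4c:33:22:11:aa:ee"
$ IFS=: read -r o1 o2 o3 o4 o5 o6 <<< "$mac"
$ o1=$(printf '%02x' $(( 0x$o1 ^ 0x02 )))   # steps 4-6: flip bit 6 (the universal/local bit) of the first octet
$ printf 'fe80::%s%s:%sff:fe%s:%s%s\n' "$o1" "$o2" "$o3" "$o4" "$o5" "$o6"   # steps 2, 3, 7 and 8
fe80::4e33:22ff:fe11:aaee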

That's it.

Sunday, September 28, 2014

Learning to capture SSL traffic using Wireshark and analyze it

Today we are going to study SSL using a trace from Wireshark. There are a few tasks we need to do, summarized below.

  1. setup a web server that has ssl certificate configured.

  2. get the network traffic using wireshark.

  3. decode and analyze the network traffic using wireshark.


So first, what is SSL?

Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols designed to provide communication security over the Internet.[1] They use X.509 certificates and hence asymmetric cryptography to authenticate the counterparty with whom they are communicating, and to exchange a symmetric key.

If you already have a web server with an SSL certificate configured, you can skip step 1. This is the documentation which I primarily used. You may not succeed on the first attempt; it took me several attempts to get the SSL traffic decrypted. Word of advice: just do not give up.

1. setup a web server that has ssl certificate configured.

With this, you can either get the certificate from a certificate authority or generate one yourself. If you do not know how, you can google it or ask in the comments; maybe in the future I will write a simple guide, and a minimal self-signed example is sketched below. Otherwise, I assume you have the certificate ready.
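
For testing purposes, a self-signed certificate and key can be generated in one go with openssl. This is only a rough sketch: the file names match the Apache configuration below and the subject is obviously made up, so adjust both to your environment.
$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=www.example.com" \
    -keyout abc_private_key.pem -out abc_cert.pem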

On the web server, with Apache httpd being the most common, edit the SSL configuration file in the Apache directory. Example:
<apache httpd directory>/sites-available/default-ssl.conf

SSLCertificateFile /etc/apache2/sites-available/abc_cert.pem
SSLCertificateKeyFile /etc/apache2/sites-available/abc_private_key.pem

Change these according to where you placed the certificate and its private key. Enable this site, restart Apache httpd (a sketch for a Debian-style layout follows), and you are set. I won't go into detail on troubleshooting any problems you encounter, as that is not the main focus of this article; consider it an exercise.
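
On a Debian/Ubuntu-style layout, which the sites-available path above suggests, enabling the SSL module and the site and restarting Apache looks roughly like this (adjust the site name to yours):
$ sudo a2enmod ssl
$ sudo a2ensite default-ssl
$ sudo service apache2 restart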

2. get the network traffic using wireshark.

Make sure the Wireshark you have installed was compiled with GnuTLS. You can check using the command below; the output must show GnuTLS and Gcrypt.
$ wireshark --version | grep GnuTLS
with GnuTLS 2.12.23, with Gcrypt 1.5.3, with MIT Kerberos, with GeoIP, with
1.6.1, with libz 1.2.8, GnuTLS 2.12.23, Gcrypt 1.5.4, without AirPcap.

Now launch Wireshark as root, and ignore the warnings and informational messages you receive when launching Wireshark as root. Note that you can also use dumpcap when you need to capture on the server itself, e.g. $ sudo dumpcap -i wlan0 -f 'host 192.168.133.49 and tcp port 443', but I have not verified that decryption works with that approach; probably not, because you still need to configure the server private key and the client (browser) session keys in Wireshark. Consider that another exercise.
$ sudo wireshark

Some fields in the screenshots are blacked out for the obvious reason that we want to protect the server and client, but everything should be self-explanatory once you complete the steps described here.

We will first configure the SSL section of the preferences so that Wireshark is able to decrypt the traffic. For this, you will need the server private key from step 1 above. Go to Edit, then Preferences...; see the screenshot below.

wireshark_edit_preference

A Wireshark: Preferences window pops up. In the left menu, expand Protocols in the tree and look for SSL. See the screenshot below.

wireshark_preference_window

First, we will configure the RSA keys list. Click on the Edit... button and another window pops up; now add the server key. You need to fill in four of the five fields. See the screenshot below for the final result. Here I will explain the fields.

wireshark_ssl_rsa_configuration

  • IP address: the IP address of the SSL server in IPv4 or IPv6 format, or one of the special values any, anyipv4, anyipv6, 0.0.0.0. Put your server hostname or IP address here if you know it.

  • Port: the TCP port number, or the special value start_tls or 0. A web server normally runs on port 443, and in this example I entered 443 because the remote server listens for https traffic on port 443.

  • Protocol: the protocol name for the decrypted network data. Popular choices are http or data. If you enter an invalid protocol name, an error message will show you the valid values. Because the data encrypted with SSL here is HTTP, we put http.

  • Key File: the path to the RSA private key. Locate where you put the server private key on your local workstation and select the file here.

  • Password: only needed when the private key is in PKCS#12 format (typically a file with a .pfx or .p12 extension). In step 1 the server private key is in PEM format, so you can leave this field empty. Save by clicking OK, then click Apply and OK.
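
For reference, the entry created by this dialog ends up in the Wireshark preferences (under the ssl.keys_list setting in older releases, if I recall correctly) as a comma-separated line of address,port,protocol,keyfile. For this example it would look roughly like the following, assuming the key was saved as /home/user/abc_private_key.pem:
192.168.133.49,443,http,/home/user/abc_private_key.pem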

The next field we are going to configure is the SSL debug file. This is a file written by the SSL dissector, and I recommend you put a valid path here. You can tail this file once the capture has started and inspect it on the fly while decryption is happening. It is very useful as a source of debugging information when your SSL decryption goes wrong.

You should check the following fields.

  • Reassemble SSL records spanning multiple TCP segments

  • Reassemble SSL Application Data spanning multiple SSL records


Leave the fields "Message Authentication Code (MAC), ignore \"mac failed\"" and "Pre-Shared-Key" as they are.

For the last field, (Pre)-Master-Secret log filename, fill in the path that you will configure for the web browser environment in the next step. This file is written by the client (the web browser in our example) and contains the session secrets used to encrypt the data; Wireshark reads it to decrypt the traffic.

That's it for the configuration; click the Apply button and then the OK button.

Now open another terminal; we will set up the environment so that the client (browser) dumps its session keys. Chromium will then write the keys to the file premaster.txt.
user@localhost:~$ export SSLKEYLOGFILE=/home/user/premaster.txt
user@localhost:~$ chromium

Now tail the SSL debug file and this premaster file in two other terminal tabs and watch the progress.

Now we will capture the traffic. To do that, click Capture in the menu, then Options...; see the screenshot below. Set the configuration correctly: I checked wlan0 because this is the laptop interface the https requests will flow through. For the capture filter, put host followed by the IP address of the web server you configured in step 1 above. In this example my server IP address is 192.168.133.49, so the filter is host 192.168.133.49.

wireshark_capture_option

To start the capture, click on the Start button.

Now trigger an https call to the server from the web browser (in this example, Chromium) and watch Wireshark capture and decrypt the https data! Also check the terminal tabs where the debug log and premaster.txt are rolling. Click the stop button when you are satisfied with the https requests.

3. decode and analyze the network traffic using wireshark.

From step 2 above, you now have a complete SSL capture and it is decrypted! See the screenshot below. You may have noticed that the SSL data has another tab at the bottom known as Decrypted SSL data; in this screenshot it is about 9000 bytes. Pretty awesome, I must say.

wireshark_decrypted_data

Right-click on a packet row with protocol TLSv1 and click Follow SSL Stream. It will show the encrypted SSL traffic (https) decrypted into plain HTTP traffic.

That's it, folks. I hope you learned something; please visit the donation page to donate to us.

Saturday, September 27, 2014

Study journalctl in CentOS 7

In CentOS 7, the new systemd comes with a new journaling client known as journalctl. Today we will study journalctl. First, what is journalctl?

journalctl is a client application for querying the systemd journal. The systemd journal is written by systemd-journald.service.

Let's sudo into root and we will study journalctl via examples.
[user@localhost ~]$ sudo su -
Last login: Sat Sep 13 11:57:55 CEST 2014 on pts/0
[root@localhost ~]# journalctl
-- Logs begin at Mon 2014-09-01 14:57:19 CEST, end at Mon 2014-09-15 10:52:52 CEST. --
Sep 01 14:57:19 localhost systemd-journal[146]: Runtime journal is using 8.0M (max 2.3G, leaving 3.5G of free 23.4G, current limit 2.3G).
Sep 01 14:57:19 localhost systemd-journal[146]: Runtime journal is using 8.0M (max 2.3G, leaving 3.5G of free 23.4G, current limit 2.3G).
Sep 01 14:57:19 localhost kernel: Initializing cgroup subsys cpuset
Sep 01 14:57:19 localhost kernel: Initializing cgroup subsys cpu
Sep 01 14:57:19 localhost kernel: Initializing cgroup subsys cpuacct
Sep 01 14:57:19 localhost kernel: Linux version 3.10.0-123.6.3.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.2 20140120 (Red Hat 4.8.2-16) (GC
Sep 01 14:57:19 localhost kernel: Command line: BOOT_IMAGE=/vmlinuz-3.10.0-123.6.3.el7.x86_64 root=UUID=bbbbbbbb-7777-465a-993a-888888888888 ro nomodeset rd.a
Sep 01 14:57:19 localhost kernel: e820: BIOS-provided physical RAM map:
...
...
...
Sep 15 10:57:00 foo.example.com sshd[23533]: Received disconnect from 123.123.123.123: 11: disconnected by user
Sep 15 10:57:00 foo.example.com systemd-logind[1161]: Removed session 9773.
Sep 15 10:59:04 foo.example.com sshd[23813]: Accepted publickey for foobar from 132.132.132.132 port 36843 ssh2: RSA 68:68:68:68:68:86:68:68:68:68:68:68:0
Sep 15 10:59:04 foo.example.com systemd[1]: Created slice user-1005.slice.
Sep 15 10:59:04 foo.example.com systemd[1]: Starting Session 9774 of user foobar.
Sep 15 10:59:04 foo.example.com systemd-logind[1161]: New session 9774 of user foobar.
Sep 15 10:59:04 foo.example.com systemd[1]: Started Session 9774 of user foobar.
Sep 15 10:59:04 foo.example.com sshd[23813]: pam_unix(sshd:session): session opened for user foobar by (uid=0)
lines 53881-53917/53917 (END)

As you may have noticed, journalctl shows all the logging from system boot up to this moment, so there are a lot of lines and data to interpret. You might therefore want to look into the parameters this application accepts.

If you want to show the most recent log, give -r; this reverses the ordering so the newest entries come first. If you want to show only the newest ten lines, add -n. Example: journalctl -r -n 10

To show how much disk space all these logs take, give --disk-usage. Note that by default journal logs are stored in the directory /run/log/journal and not /var/log.

If you want to show the log from only one unit (service), give --unit. Example: journalctl --unit=sshd will show logging for sshd only. Very neat!

Sometimes you just want to inspect a certain date and/or time range. You can append the parameters --since and --until. Example: journalctl --since="2014-09-14 01:00:00" --until="2014-09-14 02:00:00" will show all journal entries within that one-hour window. I think this is really good for system monitoring, support, or even tracing a compromised system.
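
These options can of course be combined. For example, to show the ten most recent sshd entries within that same window, newest first:
[root@localhost ~]# journalctl --unit=sshd -r -n 10 --since="2014-09-14 01:00:00" --until="2014-09-14 02:00:00"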

If you want the journal logs to appear in a web interface, you can format the output into a format your web application supports. As of this writing, journalctl supports the following output formats.

  • short: the default; generates output that is mostly identical to the formatting of classic syslog files, showing one line per journal entry.

  • short-iso: very similar, but shows ISO 8601 wallclock timestamps.

  • short-precise: very similar, but shows timestamps with full microsecond precision.

  • short-monotonic: very similar, but shows monotonic timestamps instead of wallclock timestamps.

  • verbose: shows the full structured entry items with all fields.

  • export: serializes the journal into a binary (but mostly text-based) stream suitable for backups and network transfer (see Journal Export Format[1] for more information).

  • json: formats entries as JSON data structures, one per line (see Journal JSON Format[2] for more information).

  • json-pretty: formats entries as JSON data structures, but spreads them over multiple lines to make them more readable for humans.

  • json-sse: formats entries as JSON data structures, but wraps them in a format suitable for Server-Sent Events[3].

  • cat: generates very terse output, showing only the actual message of each journal entry with no metadata, not even a timestamp.

json is probably the format that comes to mind for displaying the logs in a web interface. The output format is selected with the -o (or --output=) option.
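
For example, to look at a single sshd entry in the human-readable JSON variant:
[root@localhost ~]# journalctl -o json-pretty -n 1 --unit=sshd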

There is also a feature known as Forward Secure Sealing (FSS), where the log is sealed using a sealing key and can be verified using a verification key. You can look at parameters such as --setup-keys, --interval, --verify and --verify-key. We won't cover FSS in this article; perhaps sometime in the future I will devote an article to setting it up.

There are also many other good options that help you analyze the log with different strategies, such as -b, -p and match expressions, but this article should give you a head start. You can find more information in the journalctl manual.

Friday, September 26, 2014

Transition from SysV to systemd, from chkconfig to systemctl

Suppose you have just installed CentOS 7.0 and, as usual, run the chkconfig command
to list which services will be started on boot, as seen below:
[root@localhost ~]# chkconfig

Note: This output shows SysV services only and does not include native
systemd services. SysV configuration data might be overridden by native
systemd configuration.

If you want to list systemd services use 'systemctl list-unit-files'.
To see services enabled on particular target use
'systemctl list-dependencies [target]'.

iprdump 0:off 1:off 2:on 3:on 4:on 5:on 6:off
iprinit 0:off 1:off 2:on 3:on 4:on 5:on 6:off
iprupdate 0:off 1:off 2:on 3:on 4:on 5:on 6:off
network 0:off 1:off 2:on 3:on 4:on 5:on 6:off
tomcat 0:off 1:off 2:off 3:off 4:off 5:off 6:off

That's odd, something has changed. For your information, SysV init has been replaced by systemd, and today we are going to learn what systemd is. So what is systemd?

systemd is a system and service manager for Linux, compatible with SysV and LSB init scripts. systemd provides aggressive parallelization capabilities, uses socket and D-Bus activation for starting services, offers on-demand starting of daemons, keeps track of processes using Linux cgroups, supports snapshotting and restoring of the system state, maintains mount and automount points and implements an elaborate transactional dependency-based service control logic. It can work as a drop-in replacement for sysvinit. 

That is a very lengthy definition. If you are still not so sure, perhaps take a moment to watch a video here.



There is a lot of documentation on Google explaining systemd in detail, but this article targets busy people who need a solution right now. If you want more detail, you should google it or read the helpful links below.

So why replace SysV with systemd? What has been improved?

Lennart Poettering and Kay Sievers, the software engineers who initially developed systemd,[1] sought to surpass the efficiency of the init daemon in several ways. They wanted to improve the software framework for expressing dependencies; to allow more processing to be done concurrently or in parallel during system booting; and to reduce the computational overhead of the shell.

Systemd's initialization instructions for each daemon are recorded in a declarative configuration file rather than a shell script. For inter-process communication, systemd makes Unix domain sockets and D-Bus available to the running daemons. Systemd is also capable of aggressive parallelization.

There are several tools to manage systemd.

  • systemctl:
    used to introspect and control the state of the systemd system and service manager

  • systemd-cgls:
    recursively shows the contents of the selected Linux control group hierarchy in a tree

  • systemadm:
    a graphical frontend for the systemd system and service manager that allows introspection and control of systemd. Part of the systemd-gtk package. This is an early version and needs more work. Do not use it for now unless you are a developer.


Below is a table summarizing what you usually did with chkconfig/service and the systemctl command you can use as a replacement.

| Sysvinit Command | Systemd Command | Notes |
| service frobozz start | systemctl start frobozz.service | Used to start a service (not persistent across reboots) |
| service frobozz stop | systemctl stop frobozz.service | Used to stop a service (not persistent across reboots) |
| service frobozz restart | systemctl restart frobozz.service | Used to stop and then start a service |
| service frobozz reload | systemctl reload frobozz.service | When supported, reloads the config file without interrupting pending operations |
| service frobozz condrestart | systemctl condrestart frobozz.service | Restarts if the service is already running |
| service frobozz status | systemctl status frobozz.service | Tells whether a service is currently running |
| ls /etc/rc.d/init.d/ | systemctl list-unit-files --type=service (preferred), or ls /lib/systemd/system/*.service /etc/systemd/system/*.service | Used to list the services that can be started or stopped |
| chkconfig frobozz on | systemctl enable frobozz.service | Turn the service on, to start at the next boot or other trigger |
| chkconfig frobozz off | systemctl disable frobozz.service | Turn the service off for the next reboot, or any other trigger |
| chkconfig frobozz | systemctl is-enabled frobozz.service | Used to check whether a service is configured to start or not in the current environment |
| chkconfig --list | systemctl list-unit-files --type=service (preferred), or ls /etc/systemd/system/*.wants/ | Print a table of services that lists which runlevels each is configured on or off |
| chkconfig frobozz --list | ls /etc/systemd/system/*.wants/frobozz.service | Used to list what levels this service is configured on or off |
| chkconfig frobozz --add | systemctl daemon-reload | Used when you create a new service file or modify any configuration |
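
For example, the old "chkconfig sshd on" followed by "service sshd start" workflow becomes the following (sshd is used here purely as an illustration):
[root@localhost ~]# systemctl enable sshd.service
[root@localhost ~]# systemctl start sshd.service
[root@localhost ~]# systemctl status sshd.service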

Runlevels/targets

Systemd has a concept of targets which serve a similar purpose as runlevels but act a little different. Each target is named instead of numbered and is intended to serve a specific purpose.

| Sysvinit Runlevel | Systemd Target | Notes |
| 0 | runlevel0.target, poweroff.target | Halt the system |
| 1, s, single | runlevel1.target, rescue.target | Single-user mode |
| 2, 4 | runlevel2.target, runlevel4.target, multi-user.target | User-defined/site-specific runlevels; by default identical to 3 |
| 3 | runlevel3.target, multi-user.target | Multi-user, non-graphical; users can usually log in via multiple consoles or via the network |
| 5 | runlevel5.target, graphical.target | Multi-user, graphical; usually has all the services of runlevel 3 plus a graphical login |
| 6 | runlevel6.target, reboot.target | Reboot |
| emergency | emergency.target | Emergency shell |

Below is a summary of the commands you will (hopefully) use.

  • systemctl isolate multi-user.target
    To change the target/runlevel at runtime; this example switches to the equivalent of runlevel 3.

  • systemctl set-default <name of target>.target
    graphical.target is the default. You might want multi-user.target for the equivalent of non graphical (runlevel 3) from sysv init.

  • systemctl get-default
    to show the current default target/runlevel


Note, there are several changes you should keep in mind.
* systemd does not use the /etc/inittab file.
* the number of gettys is now configured in /etc/systemd/logind.conf
* unit files are now stored in /usr/lib/systemd/system/

That's it, I hope you get a basic understanding and will be able to start using systemd.

Sunday, September 14, 2014

How to convert a Java keystore to a format Apache httpd understands

If you received a Java keystore file from a Certificate Authority and want to use this cert to set up SSL in Apache httpd, you will meet failure, at least I did. So today I will share my findings on how to convert a Java keystore file into PEM format, which Apache httpd understands.

So how do you know if a certificate signed by a CA is in Java keystore format? Simple: just check the content using keytool, a tool that comes with the Java environment.
$ keytool -list -keystore abc.jks
Enter keystore password:

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

ABC_Certificate, Aug 19, 2013, PrivateKeyEntry,
Certificate fingerprint (MD5): 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00

As you can read above, this is a valid Java keystore file, and we will now convert it to an intermediate format, PKCS#12, first. We will use keytool again to do the conversion.
$ keytool -importkeystore -srckeystore abc.jks -destkeystore abc.p12 -srcalias ABC_Certificate -srcstoretype jks -deststoretype pkcs12
Enter destination keystore password:
Re-enter new password:
Enter source keystore password:
$

The output abc.p12 is the certificate in PKCS#12 format, and now we are ready to convert it to PEM format. We will use openssl to do this conversion.
$ openssl pkcs12 -in abc.p12 -out abc.pem
Enter Import Password:
MAC verified OK
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
$

You could basically use abc.pem for both the SSLCertificateFile and SSLCertificateKeyFile fields, but unfortunately when Apache httpd is restarted it will ask for the private key passphrase. With the following steps, we will remove the passphrase from the private key.

Remove the passphrase so that when the Apache httpd instance is restarted it will not ask for a password, then extract the certificate:
$ openssl rsa -in abc.pem -out abc_private_key.pem
Enter pass phrase for abc.pem:
writing RSA key
$ openssl x509 -in abc.pem >>abc_cert.pem
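
Before moving the files over, it does not hurt to sanity-check the two outputs; both commands only read the files and print information:
$ openssl x509 -in abc_cert.pem -noout -subject -dates
$ openssl rsa -in abc_private_key.pem -check -noout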

As you noticed, you now end up with the certificate's private key and the certificate itself. Move these two files, abc_private_key.pem and abc_cert.pem, to a directory on the Apache httpd server and change the SSL configuration in Apache httpd:
SSLCertificateFile    /path/to/the/directory/contain/abc_cert.pem
SSLCertificateKeyFile /path/to/the/directory/contain/abc_private_key.pem

That's it, I hope it works for you too.