To restore service as quickly as possible, it is best to have a standby server you can swap in immediately, so you can reduce the noise from customers. But to prevent such an attack from happening again, you must at least identify how it happened and take countermeasures.
In this article, we will learn how to discover what happened and then take countermeasures.
Quick solution
Probably the quickest solution is to format the machine and reinstall the operating system together with the applications that serve user requests. This is a good option if you do not have a backup server and you want to get the server serving user requests again as soon as possible. But it does not answer the actual question of how the hack took place, so it might happen again in the near future.
Long and workable solution
- Identify your own custom applications deployed on the server and start the investigation from there.
- Update the system using the package manager and restart the system.
- Tighten up security and monitor.
Identify your own custom applications and start the investigation from there
Because open source software is mostly well tested and updated often, the first place to investigate is usually your own application. You therefore need a good understanding of your app so you can quickly identify the source of the problem.
The following is a set of commands which might help you in your investigation.
- w
check who is logged in on the server.
- sudo netstat -nalp | grep ":22"
change 22 to the port your application listens on and check whether anything looks abnormal.
- if you use open source software in your custom application, check its logs as well (see the sketch below); attackers are always looking for exploits in open source software and will target them.
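As a rough sketch, assuming the application sits behind nginx and logs requests to /var/log/nginx/access.log (both the path and the patterns below are assumptions you should adapt to your own stack), you can grep the access log for common exploit probes:

sudo grep -Ei 'wp-login|phpmyadmin|\.\./\.\.|/etc/passwd|eval\(' /var/log/nginx/access.log | tail -n 50

Anything that looks like a scanner or an injection attempt aimed at one of your own endpoints is a good place to start digging.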
Update the system using the package manager and restart the system
You can start by checking the following.
- last
check recent logins and see when the last invalid access was.
- cat /var/log/secure* | grep Accept
check which logins were accepted and whether any of them look invalid.
- ps -elf
check whether malware is running; if you spot a suspicious process, find out where it runs from and delete all the malware files.
- ls /tmp /var/tmp /dev/shm -la
these directories normally allow any process to write into them, so check for any fishy files here.
- file <filename>
check what type a file is.
- cat /etc/passwd
check whether there is an unknown entry which is not supposed to be there.
- sudo netstat -plant | awk '/^tcp/ {split($7, a, "/"); print $6, a[2]}' | sort | uniq -c | sort -n | tail
4 ESTABLISHED java
4 LISTEN kadmind
5 LISTEN java
5 LISTEN python
6 ESTABLISHED python
if your server has been turned into a trojan, the malware is probably launching a lot of DDoS traffic; with this command you should be able to see whether the TCP connection count per process has spiked.
- sudo netstat -plant | awk '$4 ~ /:22$/ {print $5}' | cut -f1 -d: | sort | uniq -c | sort -n
1
1 0.0.0.0
2 192.168.0.2
check the total number of connections established to your server on port 22 and where they come from.
- sudo netstat -plant | awk '/^tcp/ {print $6}' | sort | uniq -c | sort -n
2 CLOSING
4 SYN_RECV
5 LAST_ACK
6 FIN_WAIT1
12 LISTEN
13 FIN_WAIT2
344 TIME_WAIT
977 ESTABLISHED
check the network connection states; this is good information should your server suddenly spike in ESTABLISHED or SYN states. If there is a spike, you will know something fishy may be going on.
- $HOME/.bash_history
check every user's .bash_history to see if there is anything suspect (see the loop sketch after this list). If the server application runs under its own user id, check the .bash_history in that user's home directory in particular.
- find / -mtime -5
find which files have been changed within the last 5 days.
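A minimal loop sketch for the .bash_history check above, assuming local accounts keep their home directories under /home:

for h in /root /home/*; do echo "== $h =="; sudo tail -n 20 "$h/.bash_history" 2>/dev/null; done

This prints the last 20 commands recorded for root and every user, which is usually enough to spot downloads or builds of unfamiliar tools.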
If nothing is found, just update the system packages using the package manager and reboot the system.
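Depending on your distribution, that usually looks something like one of the following:

sudo yum update -y && sudo reboot   (RHEL/CentOS)
sudo apt-get update && sudo apt-get upgrade -y && sudo reboot   (Debian/Ubuntu)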
Tighten up security and monitor
If you have a loose firewall policy (iptables or a hardware firewall), you should review it.
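As a sketch of what a tighter iptables policy could look like (the 192.168.0.0/24 management range and the ports here are assumptions, so adjust them to your environment, and apply the DROP policy last so you do not cut off your own SSH session):

sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -s 192.168.0.0/24 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT
sudo iptables -P INPUT DROP

Remember to persist the rules with whatever your distribution provides (iptables-save, the iptables-persistent package, and so on).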
For future prevention, set up monitoring that notifies you when the TCP connection count exceeds a threshold or suddenly spikes.
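A minimal sketch of such a check, assuming a local mail command is available for notification (swap in whatever alerting you actually use) and that it runs from cron every minute:

#!/bin/bash
# alert when the number of ESTABLISHED TCP connections exceeds a threshold
THRESHOLD=500   # pick a value based on your normal traffic
COUNT=$(netstat -ant | grep -c ESTABLISHED)
if [ "$COUNT" -gt "$THRESHOLD" ]; then
  echo "ESTABLISHED connections: $COUNT (threshold $THRESHOLD)" | mail -s "TCP spike on $(hostname)" root
fi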
These steps are not exhaustive; evil people always come up with different types of attacks, so you should be prepared and stay alert. Gather information using Google as well.