Java: Importing a .cer certificate into a java keystore

First let’s have a short look at what those certificates are and what you need them for. A certificate is basically a public key together with some additional identification information (e.g. country, location, company…). The certificate is signed by a Certificate Authority (CA), which guarantees that the information attached to the certificate is true. A .cer file is simply a file containing such a certificate.

In addition to the certificate, you also need a private key. You encrypt (sign) data using your private key, and the receiver uses the public key contained in the certificate to decipher (verify) it. The public key in the certificate is publicly available, but you are the only one with access to the private key (that’s why the keystore containing your private key is protected by a password). This allows everybody to check whether sent information really comes from you.

While developing your software you will most probably be working with self-generated certificates. These certificates do not allow the client application to check whether you really are who you say you are, but they allow you to test most certificate-related functionality. You can generate such a certificate like this:

$JAVA_HOME/bin/keytool -genkey -alias ws_client -keyalg RSA -keysize 2048 -keypass YOUR_KEY_PASSWORD \
         -keystore PATH_TO_KEYSTORE/ws_client.keystore \
         -storepass YOUR_KEYSTORE_PASSWORD -dname "cn=YOUR_FQDN_OR_IP, ou=YOUR_ORG_UNIT, o=YOUR_COMPANY, c=DE" \
         -validity 3650 -J-Xmx256m

Note that the backslashes you see in there are only required so that this command is recognized as a multiline command. If you write it all on one line, you won’t need them.

The certificate generated above is valid for almost 10 years (3650 days).

The -J parameter is just there so that you do not get an error like this when invoking keytool:

Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.

Now, when you go into production, you’ll want to have a “real” certificate so that your users do not get more or less scary messages saying that your identity cannot be verified (i.e. has not been created by a trusted certificate authority). You’ll have to buy such a certificate or have your customer generate one if they can.

This is how you can display the certificates currently installed in your keystore:

$JAVA_HOME/bin/keytool -list \
         -keystore PATH_TO_KEYSTORE/ws_client.keystore \
         -storepass YOUR_KEYSTORE_PASSWORD -J-Xmx256m

It will return something like:

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

ws_client, Apr 9, 2014, PrivateKeyEntry,

Certificate fingerprint (MD5): 4A:B5:07:64:A3:FF:16:E4:B9:28:A3:D9:BE:9D:7D:E6

You can export this certificate like this:

$JAVA_HOME/bin/keytool -exportcert -rfc -alias ws_client -file CER_FILE_PATH \
         -keystore PATH_TO_KEYSTORE/ws_client.keystore \
         -storepass YOUR_KEYSTORE_PASSWORD -J-Xmx256m

The -rfc option means that the certificate will not be exported in binary (DER) form but in the Base64-encoded format shown below.

The exported file looks like this:

-----BEGIN CERTIFICATE-----
(Base64-encoded certificate data)
-----END CERTIFICATE-----

In order to get such a certificate, you’ll have to provide the certifying authority of the customer with a certificate request (CSR). This can be done using the keytool command like this:

$JAVA_HOME/bin/keytool -certreq -alias ws_client -file CSR_FILE_PATH -keypass YOUR_KEY_PASSWORD \
         -keystore PATH_TO_KEYSTORE/ws_client.keystore \
         -storepass YOUR_KEYSTORE_PASSWORD -J-Xmx256m

The certificate request file looks like this:

-----BEGIN NEW CERTIFICATE REQUEST-----
(Base64-encoded certificate request data)
-----END NEW CERTIFICATE REQUEST-----

This certificate request file can then be sent to the person providing the certificate. Using this certificate request, he/she will generate a certificate which can then be imported this way:

$JAVA_HOME/bin/keytool -importcert -alias ws_client -file CER_FILE_PATH \
         -keystore PATH_TO_KEYSTORE/ws_client.keystore \
         -storepass YOUR_KEYSTORE_PASSWORD -J-Xmx256m 

You will need to answer yes when prompted whether you trust this certificate:

Owner:, OU=Blog, O=amazingweb GmbH, C=DE
Issuer:, OU=HenriCA, O=amazingweb GmbH, C=DE
Serial number: 534565a5
Valid from: Wed Apr 09 17:22:13 CEST 2014 until: Sat Apr 06 17:22:13 CEST 2024
Certificate fingerprints:
         MD5:  4A:B5:07:64:A3:FF:16:E4:B9:28:A3:D9:BE:9D:7D:E6
         SHA1: 69:C5:C9:9D:08:AE:17:37:2E:58:F6:77:C9:7B:59:59:E3:29:49:74
         Signature algorithm name: SHA1withRSA
         Version: 3
Trust this certificate? [no]:  yes
Certificate was added to keystore

Note that whether you use a self-generated certificate or one generated by a trusted CA, you will need to reference the keystore file and provide the keystore password in the configuration of your servlet container or application server (e.g. in jbossweb.sar/server.xml for JBoss).
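If you need to access the keystore from Java code rather than through keytool, you can load it with the standard KeyStore API. This is a minimal sketch; the command-line arguments (keystore path and password) and the lack of error handling are simplifications for illustration:

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;
import java.util.Collections;

public class KeystoreInfo {

    // Load a JKS keystore from the given stream using the keystore password;
    // a null stream creates a new, empty keystore
    static KeyStore load(InputStream in, char[] storePassword) throws Exception {
        KeyStore keystore = KeyStore.getInstance("JKS");
        keystore.load(in, storePassword);
        return keystore;
    }

    public static void main(String[] args) throws Exception {
        // args[0]: path to the keystore, args[1]: keystore password
        KeyStore keystore = load(new FileInputStream(args[0]), args[1].toCharArray());
        // Print the aliases, similar to what keytool -list shows
        for (String alias : Collections.list(keystore.aliases())) {
            System.out.println(alias + " (key entry: " + keystore.isKeyEntry(alias) + ")");
        }
    }
}
```

Run against the ws_client.keystore generated above, this would print the ws_client alias.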

ESXi: RestrictedVersionFault

When using ESXi free licenses, you might get an error message referencing a fault called RestrictedVersionFault e.g.:

SOAP Fault:


Fault string: Current license or ESXi Version prohibits execution of the requested operation.

Fault detail: RestrictedVersionFault


Failed : Current license or ESXi version prohibits execution of the requested operation.

This happens when you start, stop or suspend a VM using the command line interfaces (vCLI or PowerCLI) or the SOAP interface (using VI Java or the vSphere API directly). But it might also happen when configuring SNMP, performing scheduled backups or putting an ESXi host in maintenance mode.

This is caused by a restriction based on the VMware license you are using. The free ESXi license (also known as the vSphere Hypervisor license) only allows read-only access to the vSphere API. This not only affects you when using the vSphere API directly but also when using any of the vSphere toolkits or management tools.

You’ll notice that when using the vSphere Client to manage your ESXi hosts, you will not face this problem. Using the vSphere Client, you have access to this functionality without read-only restriction. But as soon as you try to automate this, you will be affected.

So even though the fault message tells you it’s a problem with the version you’re using, it is not the case. It is only a problem with the license type you’re using. So instead of using a vSphere Hypervisor license, you should invest a few dollars and upgrade to a vSphere Essentials Kit or the vSphere Standard, Enterprise or Enterprise Plus editions.

If you are using the free edition while waiting for other licenses to arrive, you should rather use the trial (evaluation) license instead of the free license. This will provide access to the feature set of vSphere Enterprise Plus until the end of the 60-day evaluation period.


Code Coverage: A misleading metric

Code coverage is the measure of how much of your code is exercised while running tests. You could of course also consider manual integration tests when measuring it, but usually it refers to automated unit tests.

So it is actually a pretty useful metric. I’ve worked in projects with close to 100% code coverage and in projects close to 0%. Writing code in a project with high code coverage is great because you have a safety net when changing the code. This doesn’t mean you shouldn’t analyze the impact of your changes, but it gives you more confidence to perform required code changes.

Test coverage is useful for finding untested code: analyzing coverage helps you find the pieces of code that aren’t being tested. Since every piece of code not covered by tests is a potential source of undetected errors, and it is well known that the earlier a bug is found the cheaper it is to fix, having 100% coverage really does sound great!

In a new project without tons of legacy code I do expect to have high code coverage. The problem with code coverage comes from the fact that your management may not only expect high coverage but actually require a certain level of it. And whenever a certain level of coverage becomes a target, developers try to reach it. The thing is that it encourages them to bend over backwards, writing poor tests just to meet the goal. It leads to writing tests for their own sake.

It is actually easy to write tests that cover code without actually checking its correctness: just write unit tests covering the code but not containing any asserts. Obviously, code coverage does not tell you what code was tested but only what code was run. So high coverage numbers don’t necessarily mean much. High code coverage does tell you that you have a lot of tests, but it doesn’t really tell you how good your code is. Your code can be full of bugs and you could still have 100% test coverage. Pure coverage figures don’t tell you how important the code is, how complex it is, or how good the tests are. High code coverage can only lead to good code if the tests you run are good. With good tests, high coverage can only be achieved with error-free code. But with poor tests even crappy code can make it to 100% coverage.
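To make this concrete, here is a hypothetical Java example: the “test” below executes a buggy method, so a coverage tool would report the method as fully covered, yet the test can never fail because it contains no assertion.

```java
public class CoverageExample {

    // Buggy implementation: it is supposed to add but actually subtracts
    static int add(int a, int b) {
        return a - b;
    }

    // Executes add() and thus yields 100% coverage of it, but contains
    // no assertion, so it passes no matter what add() returns
    static void testAddWithoutAssert() {
        add(2, 3);
    }

    public static void main(String[] args) {
        testAddWithoutAssert();
        System.out.println("test passed, add() fully covered, bug undetected");
    }
}
```

A real test would assert `add(2, 3) == 5` and would catch the bug immediately, without changing the coverage number at all.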

So having 100% test coverage (or anything close to it) gives a false sense of confidence and of the robustness of a project. That’s why you should not make 100% test coverage the focus. Mandating a minimum coverage level, below which your automated tests are no longer really helpful, is probably fine, but defining a high target test coverage threshold is useless.

Another issue with a forced march to high code coverage is that writing many tests to check a code base which is poorly designed will likely increase the cost of changing the design later on. If you experience that simple changes to the code cause disproportionately long changes to the tests, i.e. if you need to fix numerous tests for each change you make in the code, it’s either a sign that there’s a problem with the tests or that your whole design is shaky.

In such cases, having more tests is not always better; having better tests would in fact be better. You need to keep in mind that every test you’re creating is a test that will eventually have to be maintained. You should not just write more tests to increase code coverage but evaluate whether the maintenance cost related to these additional tests justifies writing them and whether the additional coverage reached really leads to more quality. Too many tests can slow you down. Not because the tests take time to run: you can always move slow tests to a later stage or only run them periodically. But excess tests written only to satisfy a code coverage target will drive up the cost of changes, both in money and time.

On the other hand, even though low coverage does not automatically mean your code is buggy, low coverage numbers (e.g. way below 50%) are a sign of trouble. They are not proof but a smell of bad quality. Especially a combination of low coverage and high complexity is not a good sign. Also, a loss of code coverage over time is a sign that code modifications are not properly reflected in the tests. But we should keep in mind that low automated test coverage does not imply that the software is untested. There are other ways to test your software, and automated tests alone are never enough. Product testers not only test the code on the basis of formulated requirements but also test the product looking for requirements that might not have been explicitly formulated (it happens more often than you think that not all functionality has been identified), usability issues…

Many people focus on code coverage numbers because they expect to derive from them whether they are testing enough. But in many cases, you should not worry about code coverage but about writing good tests. Code coverage is great, but your end product will be measured by its functionality and its reliability, so functionality coverage is even better. Anyway, unit tests should be meant to test functionality: they are low-level tests, but they should still test from the perspective of the required functionality. Actually, if your code and functionality are not equivalent to a large extent, then you have bigger problems than what level of code coverage you’ve reached. So the goal should not be to have 100% code coverage, but to have unit tests testing the required functionality as completely and extensively as possible.

Despite all the points above against requiring 100% (or very high) code coverage, the code coverage metric can actually be useful when used properly. The widespread negativity towards these metrics is often due to their misuse. 100% code coverage is meaningless without other habits and practices in place which ensure code and test quality. So high code coverage is anyway just the beginning: it is a good starting point towards covering the actual functionality, but not a goal in itself.

Actually, you should see code coverage as a byproduct of well designed and well written tests, not as a metric that indicates the tests are well designed or well written. On the other hand, good code with a clear design is not only easier to read and less buggy but also easier to cover. So good quality in both code and tests will lead to higher coverage. But requiring high code coverage will not lead to more quality in code and tests.

A way to improve the expressiveness of code coverage is to combine its measurements with other measurements like complexity measurements, correlate it with information about the importance of certain parts of the code, incorporate information about bugs reported after release…


Working with Enums in C#

Just wanted to share a helper class I’m using in some projects to make it easier to work with enums.

    using System;
    using System.Collections.Generic;
    using System.Globalization;
    using System.Linq;

    public static class EnumHelpers
    {
        private const string MustBeAnEnumeratedType = @"T must be an enumerated type";

        public static T ToEnum<T>(this string s) where T : struct, IConvertible
        {
            if (!typeof(T).IsEnum) throw new ArgumentException(MustBeAnEnumeratedType);
            T @enum;
            Enum.TryParse(s, out @enum);
            return @enum;
        }

        public static T ToEnum<T>(this int i) where T : struct, IConvertible
        {
            if (!typeof(T).IsEnum) throw new ArgumentException(MustBeAnEnumeratedType);
            return (T) Enum.ToObject(typeof(T), i);
        }

        public static T[] ToEnum<T>(this int[] value) where T : struct, IConvertible
        {
            if (!typeof(T).IsEnum) throw new ArgumentException(MustBeAnEnumeratedType);
            var result = new T[value.Length];
            for (int i = 0; i < value.Length; i++)
                result[i] = value[i].ToEnum<T>();
            return result;
        }

        public static T[] ToEnum<T>(this string[] value) where T : struct, IConvertible
        {
            if (!typeof(T).IsEnum) throw new ArgumentException(MustBeAnEnumeratedType);
            var result = new T[value.Length];
            for (int i = 0; i < value.Length; i++)
                result[i] = value[i].ToEnum<T>();
            return result;
        }

        public static IEnumerable<T> ToEnumFlags<T>(this int i) where T : struct, IConvertible
        {
            if (!typeof(T).IsEnum) throw new ArgumentException(MustBeAnEnumeratedType);
            return (from flagIterator in Enum.GetValues(typeof(T)).Cast<int>()
                    where (i & flagIterator) != 0
                    select ToEnum<T>(flagIterator));
        }

        public static bool CheckFlag<T>(this Enum value, T flag) where T : struct, IConvertible
        {
            if (!typeof(T).IsEnum) throw new ArgumentException(MustBeAnEnumeratedType);
            return (Convert.ToInt32(value, CultureInfo.InvariantCulture) & Convert.ToInt32(flag, CultureInfo.InvariantCulture)) != 0;
        }

        public static IDictionary<string, T> EnumToDictionary<T>() where T : struct, IConvertible
        {
            if (!typeof(T).IsEnum) throw new ArgumentException(MustBeAnEnumeratedType);
            IDictionary<string, T> list = new Dictionary<string, T>();
            Enum.GetNames(typeof(T)).ToList().ForEach(name => list.Add(name, name.ToEnum<T>()));
            return list;
        }
    }

The first ToEnum extension adds a ToEnum method to the string class in order to convert a string to an enum value. It allows you to write something like this:

"Value1".ToEnum<MyEnum>()

The second ToEnum extension does the same but with ints e.g.:

0.ToEnum<MyEnum>()
The third ToEnum extension is similar but extends int arrays to return a collection of enum values e.g.:

new[] {1, 0}.ToEnum<MyEnum>()

The fourth ToEnum extension does the same but with string arrays e.g.:

new[] {"Value2", "Value1"}.ToEnum<MyEnum>()

The ToEnumFlags extension returns a list of enum values when working with flags e.g.:

13.ToEnumFlags<MyEnum2>().Select(e => e.ToString()).Aggregate((e, f) => (e + " " + f))

The EnumToDictionary method returns a dictionary with the enum item names as keys and the corresponding enum values as values e.g.:

EnumHelpers.EnumToDictionary<MyEnum>()["Value2"]
The CheckFlag extension checks whether an enum flag is set e.g.:

(MyEnum2.Flag1 | MyEnum2.Flag4).CheckFlag(MyEnum2.Flag4)

To test this you can use the following:

    internal enum MyEnum
    {
        Value1,
        Value2
    }

    internal enum MyEnum2
    {
        Flag1 = 1,
        Flag2 = 2,
        Flag3 = 4,
        Flag4 = 8
    }

    public static class MainClass
    {
        public static void Main()
        {
            Console.WriteLine("1) {0}", (int) "Value1".ToEnum<MyEnum>());
            Console.WriteLine("2) {0}", 0.ToEnum<MyEnum>());
            Console.WriteLine("3) {0}", new[] {1, 0}.ToEnum<MyEnum>());
            Console.WriteLine("4) {0}", new[] {"Value2", "Value1"}.ToEnum<MyEnum>());
            Console.WriteLine("5) {0}", 13.ToEnumFlags<MyEnum2>().Select(e => e.ToString()).Aggregate((e, f) => (e + " " + f)));
            Console.WriteLine("6) {0}", (int) EnumHelpers.EnumToDictionary<MyEnum>()["Value2"]);
            Console.WriteLine("7) {0}", (MyEnum2.Flag1 | MyEnum2.Flag4).CheckFlag(MyEnum2.Flag4));
            Console.WriteLine("8) {0}", (MyEnum2.Flag1 | MyEnum2.Flag4).CheckFlag(MyEnum2.Flag3));
        }
    }

which will output:

1) 0
2) Value1
3) Playground.MyEnum[]
4) Playground.MyEnum[]
5) Flag1 Flag3 Flag4
6) 1
7) True
8) False

I guess most methods are pretty basic but sometimes it’s easier to just have a helper class and not have to think about how to solve the problem again and again (even though it’s not a very complex problem)…

Three options to dynamically execute C# code

I’ve been working in the past few months on a project where users can write C# to query data from a model. This means that I needed to dynamically execute C# code entered by the user.

There are basically three options for compiling and executing C# code dynamically:

  • Using the CodeDOM compiler
  • Using the Mono Compiler Service
  • Using Roslyn

Using the CodeDOM (Code Document Object Model), you can dynamically compile source code into a binary assembly. The steps involved are:

  1. Create an instance of the CSharpCodeProvider
  2. Compile an assembly from source code
  3. Get the type from the assembly
  4. Instantiate this type
  5. Get a reference to the method
  6. Call the method with the appropriate parameters

Here’s an example how to implement steps 1 and 2:

        private static Assembly CompileSourceCodeDom(string sourceCode)
        {
            CodeDomProvider cpd = new CSharpCodeProvider();
            var cp = new CompilerParameters();
            cp.GenerateExecutable = false;
            CompilerResults cr = cpd.CompileAssemblyFromSource(cp, sourceCode);

            return cr.CompiledAssembly;
        }

The other steps can then be implemented like this:

        private static void ExecuteFromAssembly(Assembly assembly)
        {
            Type fooType = assembly.GetType("Foo");
            MethodInfo printMethod = fooType.GetMethod("Print");
            object foo = assembly.CreateInstance("Foo");
            printMethod.Invoke(foo, BindingFlags.InvokeMethod, null, null, CultureInfo.CurrentCulture);
        }

Since you can only compile full assemblies with the CodeDOM, this means that:

  • You cannot execute snippets of code, you need a full class definition
  • Every time you compile some code a new assembly will be created and loaded

Using the Mono Compiler Service (MCS), you can also execute code dynamically. The big difference here (apart from the fact that you need a few other assemblies) is that MCS allows you to evaluate code without needing to define a class with methods, so instead of this:

class Foo
{
    public void Print()
    {
        System.Console.WriteLine("Hello !");
    }
}

You could just evaluate the following:

System.Console.WriteLine("Hello !");

This is of course very useful when dynamically executing pieces of code provided by a user.

Here is a minimal implementation using MCS:

private static void ExecuteMono(string fooSource)
{
    new Evaluator(new CompilerContext(new CompilerSettings(), new ConsoleReportPrinter())).Run(fooSource);
}

If you need a return value, you can use the Evaluate method instead of the Run method.

One other advantage of MCS is that it doesn’t write an assembly to disk and then loads it. So when you evaluate 1000 expressions, you don’t end up loading 1000 new assemblies.

The last and latest option to handle dynamic C# code execution is to use the .NET Compiler Platform Roslyn. Roslyn is a set of open-source compilers and code analysis APIs for C# and Visual Basic.NET.

Dynamically executing code with Roslyn is actually pretty similar to the way you would do it with the CodeDOM. You have to create an assembly, get an instance of the class and execute the method using reflection.

Here is an example how to create the assembly:

        private static Assembly CompileSourceRoslyn(string fooSource)
        {
            using (var ms = new MemoryStream())
            {
                string assemblyFileName = "gen" + Guid.NewGuid().ToString().Replace("-", "") + ".dll";

                CSharpCompilation compilation = CSharpCompilation.Create(assemblyFileName,
                    new[] {CSharpSyntaxTree.ParseText(fooSource)},
                    new[] {new MetadataFileReference(typeof (object).Assembly.Location)},
                    new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

                compilation.Emit(ms);

                Assembly assembly = Assembly.Load(ms.GetBuffer());
                return assembly;
            }
        }

Note that I do not handle any error scenarios here (or in the other pieces of code in this post) but to use it in a production system, you’d of course need to do it.

As with CodeDOM, a new assembly is created every time code is dynamically compiled. The difference is that you need to create a temporary assembly name yourself.

Executing the code from the assembly is then the same as with the CodeDOM.

Now let’s look at it from a performance perspective. Running the dynamic execution 100 times in a row with each of the three methods, I got the following results:

roslyn: assemblies loaded=114
roslyn: time=2049,2049 milliseconds

codedom: assemblies loaded=100
codedom: 7512,7512 milliseconds

mono: assemblies loaded=6
mono: time=15163,5162 milliseconds

Running it multiple times has shown that the numbers are pretty stable.

So you can see that with Roslyn and CodeDOM, the number of assemblies loaded increases with the number of runs. The 14 extra assemblies with Roslyn as well as the 6 assemblies with MCS are due to the fact that the assemblies required by the compilers themselves are not loaded when the program starts but only when they are first used.

From a number of loaded assemblies point of view, MCS is a clear winner. It doesn’t need to load any assemblies except the assemblies required by the compiler.

From a performance perspective, Roslyn is quite impressive being more than 3 times faster than the CodeDOM and more than 7 times faster than MCS.

The main disadvantage of Roslyn versus CodeDOM is that Roslyn is not yet production ready and is pretty much work in progress. Once Roslyn is more feature complete and production ready, there will be no reason to keep using CodeDOM for dynamic code execution.

Although it’s the slowest solution, MCS is the one which scales best, so if you have to execute an unbounded number of dynamic code snippets, it’s the way to go. Also, if you are just executing snippets of code and not whole classes, it makes your life easier.


Microsoft Word 2007: Bullets and numbering in unprotected sections

Let’s assume you have a template where you only want users to be able to write in a few sections of the document but not in the rest, e.g. because the rest is automatically generated. You would protect the document and make those few sections unprotected. In Word 2003, it is possible to write text in there and format it however you want, including numbered or bulleted lists.

Now if you open a document based on this template in Word 2007, you’ll still be able to write text in the unprotected sections, but when you select text and right-click to open the context menu, you’ll see that bullets and numbering are dimmed (disabled). The buttons for bullets and numbering in the toolbar are disabled as well.

What’s strange (apart from the fact that it worked fine in Word 2003 but not in Word 2007) is that typing a star and a space (or a 1 and a space) will still automatically create a bulleted (or numbered) list. So it’s not that bullets and numbering don’t work at all, it’s just that they are disabled in the context menu and toolbars.

The only solution for this is to define quick styles (the ones you see in the Styles group of the Home tab) with the appropriate formatting and use them to apply this formatting to the text in the unprotected section. So just define all the styles you want to use in a protected document as quick styles and the problem is solved! This doesn’t seem to make much sense, but it works. It looks like Word 2007 considers that applying existing styles in an unprotected section of a protected document is fine, but that using the formatting options in the context menu or toolbar actually modifies styles.

Also if all you need are bullets, you can also use the shortcut for the bulleted list: Ctrl+Shift+L.

WordPress: 2D and 3D taxonomy cloud

Some time ago, I published a WordPress plugin called WP Category Tag Cloud.

With this plugin, you can display a configurable 2D or 3D cloud of taxonomy terms (like tags, categories or any custom taxonomy) and filter them by tags or categories.

My first use case was to display a list of tags and only show tags used in posts belonging to a given category. For example, if you have two areas on your web site (like a business web page and a blog) and you separate these two by using different categories, but the tags can partly overlap, you can use this plugin to define a widget which will only show tags used in posts of the blog category.

And over time, there were a few other use cases which came from web sites I am hosting or from user requests. So this plugin provides many options to define how the cloud should be rendered and what should be contained in the cloud.

The cloud elements can be displayed in many different ways:

as a flat list separated by spaces: [showtagcloud taxonomy=category format=flat number=10 order_by=count order=DESC]

as a bulleted list: [showtagcloud taxonomy=category format=list number=10 order_by=count order=DESC]

as price tags: [showtagcloud taxonomy=category format=price number=10 order_by=count order=DESC]

as rectangular tags with rounded corners: [showtagcloud taxonomy=category format=rounded number=10 order_by=count order=DESC]

as horizontal bars: [showtagcloud taxonomy=category format=bars number=10 order_by=count order=DESC]

as a 3D HTML5 based tag cloud: [showtagcloud taxonomy=category format=array number=10 order_by=count order=DESC]

You can also see this one in action in the side bar of this blog.


Additionally, you can set many options to tweak the way the cloud is displayed:

randomly colorize terms: [showtagcloud taxonomy=category format=flat number=10 order_by=count order=DESC colorize=1]

make less used terms less opaque: [showtagcloud taxonomy=category format=flat number=10 order_by=count order=DESC opacity=1]

randomly tilt terms: [showtagcloud taxonomy=category format=flat number=10 order_by=count order=DESC tilt=1]

all of the above: [showtagcloud taxonomy=category format=flat number=10 order_by=count order=DESC tilt=1 opacity=1 colorize=1]


There are even more options available to make this widget look like the way you want it to. And new options are added in every new version.

In order to make sure the performance is good, you can have the widgets cached and define how long they should be cached. Of course, if you change any setting of a widget, it will be regenerated even if it was already in the cache.

The widget can be displayed in the sidebar using a widget, in any post or page using a shortcode or anywhere else using the provided PHP function. For more details, please check the FAQ of the plugin.

There are still a few other features that I have on my list but I feel this plugin has already evolved over the past few weeks into one of the more flexible and powerful term cloud plugins for WordPress.

To give it a try, go to the plugin page and download the plugin. The installation is as easy as searching for it in the plugin installation page of your WordPress installation (or uploading the folder wp-category-tag-could to /wp-content/plugins/), activating the plugin through the ‘Plugins’ menu in WordPress and then adding the widget and configuring it, or using the PHP function or the shortcode.

If you try this plugin, please do not forget to leave a review. And if you find issues or have ideas on how to improve the plugin, use the support feature on the plugin page.

Windows: Network connections timing out too quickly on temporary connectivity loss

If you have a rather unstable network where you tend to lose connectivity for a short time frequently, you might notice that established connections (e.g. SSH connections using putty) get lost. You can then immediately reconnect, but it’s still a pain.

The issue is not really with the software losing the connection (e.g. putty) but rather with the Windows network configuration: a single application cannot override these network settings for itself or for a specific session to prevent the problem. To solve it, you will need to tweak a few system-wide Windows network settings.

Basically tweaking these settings means increasing the TCP timeout in Windows. This can be done in the registry.

The relevant TCP/IP settings are:

  • KeepAliveTime
  • KeepAliveInterval
  • TcpMaxDataRetransmissions

These parameters are all located at the following registry location: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters.

On Windows versions which are not based on Windows NT (i.e. Windows 95, Windows 98 and Windows ME), these parameters are located under: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\VxD\MSTCP.


The KeepAliveTime parameter controls how long the TCP driver waits until a keep-alive packet is sent over an idle TCP connection. A TCP keep-alive packet is simply an ACK packet sent over the connection with the sequence number set to one less than the current sequence number for the connection. When the other end receives this packet, it sends an ACK as a response with the current sequence number. This exchange is used to make sure that the remote host at the other end of the connection is still available and to keep the connection open.

Since TCP keep-alives are disabled by default, the application opening the connection needs to specifically enable them.
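For example, in Java an application opts in per socket via setKeepAlive. This is a minimal, self-contained sketch (the local server exists only so the example can run on its own); once enabled, the keep-alive timing itself is controlled by the system-wide settings described in this article:

```java
import java.net.ServerSocket;
import java.net.Socket;

public class KeepAliveDemo {
    public static void main(String[] args) throws Exception {
        // A throwaway local server so the client socket has something to connect to
        try (ServerSocket server = new ServerSocket(0);
             Socket socket = new Socket("localhost", server.getLocalPort())) {
            // Opt in to TCP keep-alives for this connection; when the packets
            // are sent and how often they are retried is decided by the OS
            socket.setKeepAlive(true);
            System.out.println("keep-alive enabled: " + socket.getKeepAlive());
        }
    }
}
```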

The value is the number of milliseconds of inactivity before a keep-alive packet is sent. The default is 7,200,000 milliseconds (ms) i.e. 2 hours.

Note that the default of 2 hours might be too high in some cases. Having a high KeepAliveTime brings two problems:

  1. it may cause a delay before the machine at one end of the connection detects that the remote machine is no longer available
  2. many firewalls drop the session if no traffic occurs for a given amount of time

In the first case, if your application can handle a reconnect scenario, it will take a very long time until it notices the connection is dead, whereas it could have handled it properly if the connection had failed fast.

In the second case, it’s the opposite: the connection is artificially closed by a firewall in between.

If you encounter one of these cases on a regular basis, you should consider reducing the KeepAliveTime from 2 hours to 10 or 15 minutes (i.e. 600,000 or 900,000 milliseconds).

But also keep in mind that lowering the value for the KeepAliveTime:

  • increases network activity on idle connections
  • can cause active working connections to terminate because of latency issues.

Setting it to less than 10 seconds is not a good idea unless you have a network environment with very low latency.


If the remote host at the other end of the connection does not respond to the keep-alive packet, it is repeated. This is where KeepAliveInterval is used. This parameter determines how often this retry mechanism is triggered, i.e. the wait time before another keep-alive packet is sent. If at some point in time the remote host responds to the keep-alive packet, the next keep-alive packet will again be sent based on the KeepAliveTime parameter (assuming the connection is still idle).

The value is the number of milliseconds before a keep-alive packet is resent. The default is 1,000 milliseconds (ms) i.e. 1 second. If network connectivity losses sometimes last a few minutes, it makes sense to increase this parameter to 60,000 milliseconds i.e. 1 minute.


Of course this retry process cannot go on forever. If the connection is not only temporarily lost but lost for good, it needs to be closed. This is where the TcpMaxDataRetransmissions parameter is used. It defines the number of keep-alive retries to be performed before the connection is aborted.

The default value is to perform 5 TCP keep-alive retransmits. If you experience network instability and lose connections too often, you should consider increasing this value to 10 or 15.

Note that starting with Windows Vista, this parameter no longer exists and is replaced by a hard-coded value of 10. After 10 unanswered retransmissions, the connection is aborted. But you can still control the time frame during which a connection can survive a temporary connectivity loss by adapting the KeepAliveInterval parameter.
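As a rough sketch of the arithmetic (assuming the hard-coded 10 retransmissions of Windows Vista and later), the longest outage a connection can survive is approximately KeepAliveInterval × 10:

```java
public class KeepAliveWindow {
    public static void main(String[] args) {
        // Hypothetical value: KeepAliveInterval raised to 1 minute
        long keepAliveIntervalMs = 60_000;
        long retransmissions = 10; // hard-coded since Windows Vista
        // Approximate window during which a temporary connectivity
        // loss will not abort an idle connection being probed
        long survivableOutageMs = keepAliveIntervalMs * retransmissions;
        System.out.println(survivableOutageMs / 1000 + " seconds"); // 600 seconds
    }
}
```

So with the 1-minute interval suggested above, a connection would survive roughly 10 minutes of lost connectivity before being aborted.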

Also note that this parameter only exists in Windows NT based versions of Windows. On old systems running Windows 95, Windows 98 or Windows ME, the corresponding parameter is HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\VxD\MSTCP\MaxDataRetries.


By tweaking the parameters above, one can configure the Windows TCP driver so that connections survive small connectivity losses. Remember that after changing these settings, you’ll need to reboot the machine (it’s Windows after all…).

If you cannot modify TcpMaxDataRetransmissions because you have a newer version of Windows, you can still achieve the same result by increasing KeepAliveInterval instead.

Also note that issues with lost connections in unstable networks seem to especially affect Windows Vista and later. So if you move from Windows XP to, let’s say, Windows 7 and experience such issues, you should first add the KeepAliveTime and KeepAliveInterval parameters to the registry, reboot, check whether it’s better and possibly increase the value of KeepAliveInterval if required.

All parameters above should be stored in the registry as DWORD (32bit value).
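As a sketch, the settings could be combined into a .reg file like this (the values here, 15 minutes and 1 minute, are just the example values used above, not recommendations for every environment):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; 15 minutes of idle time before the first keep-alive probe (900,000 ms)
"KeepAliveTime"=dword:000dbba0
; 1 minute between keep-alive retransmissions (60,000 ms)
"KeepAliveInterval"=dword:0000ea60
; Only read on pre-Vista Windows; number of keep-alive retries
"TcpMaxDataRetransmissions"=dword:0000000a
```

Import it by double-clicking the file or with reg.exe, then reboot for the changes to take effect.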

404 Error: /eyeblaster/addineyev2.html

While looking at the content access statistics in Google Analytics, I noticed that a few visitors access a page with the URL /eyeblaster/addineyev2.html. Since I was pretty sure I didn’t have such a page, I first thought that my site might have been hacked and checked this page. It led me to the 404 error page. So this URL didn’t exist but some of my visitors were redirected to it. The traffic to this page was not so high (never more than 5 page views a day) but it added up to 25 page views since mid-February.

Next step was to find out where these visitors were coming from. The navigation summary in Google Analytics for this page looks like this:

Navigation Summary

So half of the visitors came from a previous page and half from outside. The distribution of previous pages was pretty random, so it is definitely not related to specific pages:

Previous Page Path

But the important point is that most visitors leave my blog once redirected. Googling for this issue, I found a Google support page:

If you are trafficking an EyeBlaster Flash creative as a 3rd-party tag, you will need to put the addineyev2.html file to

The code for addineyev2.html is basically just loading a JavaScript file:

<BODY style=margin:0;padding:0>
<SCRIPT src="">

If you add this HTML page, it would also make sense to prevent search engines from indexing it by adding the following to your robots.txt:

Disallow: /eyeblaster
Disallow: /addineyeV2.html

Or by adding the following metatag to the head section of this HTML page:

<meta name="robots" content="noindex" />

But since I didn’t feel comfortable adding such a webpage to my site (especially since it’s just loading some JavaScript from a third-party site), I looked for another solution. If the script were actually hosted by Google, I might have trusted it, but that’s not the case.

Looking at the referenced JavaScript file, I found out it seems to be related to the MediaMind ad network, and googling for it also showed that Eyeblaster rebranded as MediaMind. So an alternative to adding the HTML file above to your website is to block ads from this network. A network which forces you to add files to your site (assuming you can add files at all) should really be blocked by Google by default. But since they haven’t done it, we’ll have to do it manually.

To block this network, go to AdSense, click on Allow & block ads and type “mediamind” in the search box. You’ll see something like this:

allow block MediaMind ad network

Then click on the left side button for each of these networks to block them:

block MediaMind ad network

Now you shouldn’t get these ads which end up redirecting your users to this non-existent HTML page, and you’ll display ads from networks which behave better. I suspect that other networks might also be using this script, so I’ll have to monitor this and find out which other ad networks I’ll have to block.

Visual Studio 2012: mspdb110.dll is missing when executing editbin.exe

I was having some issues using an OCX in an old application I had recompiled using Visual Studio 2012. One thing I found is that it might be related to a compatibility issue with Data Execution Prevention (DEP). Since I couldn’t recompile the OCX and didn’t have direct access to the linker settings, I went for using editbin.exe to apply /NXCOMPAT:NO. But when I ran the following:

C:\Users\benohead>"c:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin\editbin.exe" /NXCOMPAT:NO myfile.exe

I got a system error from link.exe saying:

The program can’t start because mspdb110.dll is missing from your computer. Try reinstalling the program to fix this problem.

The cause of this error is that I executed the command in the wrong DOS prompt, i.e. not one where I had executed vcvars32.bat first. So I just executed vcvars32.bat:

"c:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin\vcvars32.bat"

And gave it another try. Now no error message was displayed.