Code Coverage: A misleading metric

Code coverage measures how much of your code is executed while running tests. You could of course also take manual integration tests into account when measuring it, but it usually refers to automated unit tests.

This is actually a pretty useful metric. I’ve worked on projects with close to 100% code coverage and on projects with close to 0%. Writing code in a project with high code coverage is great because you have a safety net when changing the code. It doesn’t mean you no longer need to analyze the impact of your changes, but it gives you more confidence to perform required code changes.

Test coverage is useful for finding untested code. Analyzing coverage helps you find which pieces of code aren’t being tested. And since every piece of code not covered by tests is a potential source of errors which go undetected, and it is well known that the earlier a bug is found the cheaper it is to fix, having 100% coverage really does sound great!

In a new project without tons of legacy code I do expect a high code coverage. The problem with code coverage comes from the fact that your management may not only expect high coverage but actually require a certain level of it. And whenever a certain level of coverage becomes a target, developers try to reach it. The thing is that this encourages them to bend over backwards and write poor tests just to meet the goal. It leads to writing tests for their own sake.

It is actually easy to write tests that cover code without checking its correctness: just write unit tests covering the code but not containing any asserts. Obviously, code coverage does not tell you what code was tested but only what code was run. So high coverage numbers don’t necessarily mean much. High code coverage does tell you that you have a lot of tests, but it doesn’t really tell you how good your code is. Your code can be full of bugs and you could still have 100% test coverage. Pure coverage figures don’t tell you how important the code is, how complex it is, nor how good the tests are. And high code coverage can only lead to good code if the tests you run are good. With good tests, high coverage can only be achieved with error-free code. But with poor tests even crappy code can make it to 100% coverage.
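To make this concrete, here is a minimal sketch (using a made-up PriceCalculator class and xUnit-style attributes) of a test that produces full line coverage for a method while verifying nothing:

    using Xunit;

    public static class PriceCalculator
    {
        public static decimal ApplyDiscount(decimal price, decimal percent)
        {
            return price - price * percent / 100m;
        }
    }

    public class PriceCalculatorTests
    {
        // This test executes every line of ApplyDiscount (100% coverage for that method)
        // but contains no assert, so it would still pass if the formula were wrong.
        [Fact]
        public void ApplyDiscount_Runs()
        {
            PriceCalculator.ApplyDiscount(100m, 20m);
        }
    }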

So having 100% test coverage (or anything close to it) gives a false sense of confidence in the robustness of a project. That’s why you should not make 100% test coverage the focus. Mandating a minimum code coverage below which your automated tests aren’t really helpful is probably fine, but defining a high target coverage threshold is useless.

Another issue with a forced march to high code coverage is that writing many tests to check a poorly designed code base will likely increase the cost of changing the design later on. If simple changes to the code cause disproportionately large changes to the tests, i.e. if you need to fix numerous tests for each change you make in the code, it’s either a sign that there’s a problem with the tests or that your whole design is shaky.

In such cases, having more tests is not always better; having better tests would be. You need to keep in mind that every test you create is a test that will eventually have to be maintained. So you should not only write more tests to increase code coverage but also evaluate whether the maintenance cost of these additional tests justifies writing them and whether the additional coverage really leads to more quality. Too many tests can slow you down. Not because the tests take time to run — you can always move slow tests to a later stage or only run them periodically — but because excess tests written only to satisfy a code coverage target will drive up the cost of changes, both in money and time.

On the other hand, even though low coverage does not automatically mean your code is buggy, low coverage numbers (e.g. way below 50%) are a sign of trouble. They are not proof but a smell of bad quality. In particular, a combination of low coverage and high complexity is not a good sign. Also, a loss of code coverage over time is a sign that code modifications are not properly reflected in the tests. But we should keep in mind that low automated test coverage does not imply that the software is untested. There are other ways to test your software, and automated tests alone are never enough. Product testers not only test the code on the basis of formulated requirements but also test the product looking for requirements that might not have been explicitly formulated (not having identified all your functionality is more common than you think), usability issues…

Many people focus on code coverage numbers because they expect to derive from them whether they are testing enough. But in many cases, you should not worry about code coverage but about writing good tests. Code coverage is great, but your end product will be measured by its functionality and its reliability, so functionality coverage is even better. Unit tests should anyway be meant to test functionality. They are low-level tests, but they should still test from the perspective of the required functionality. Actually, if your code and functionality are not equivalent to a large extent, then you have bigger problems than what level of code coverage you’ve reached. So the goal should not be to have 100% code coverage, but to have unit tests testing the required functionality as completely and extensively as possible.

Despite all the points above against requiring 100% (or very high) code coverage, the code coverage metric can actually be useful when used properly. The widespread negativity towards these metrics is often due to their misuse. 100% code coverage is meaningless without other habits and practices which ensure code and test quality. So high code coverage is just the beginning: it is a good starting point towards covering the actual functionality, but not a goal in itself.

Actually, you should see code coverage as a byproduct of well-designed and well-written tests, not as a metric that indicates the tests are well designed or well written. On the other hand, good code with a clear design is not only easier to read and less buggy but also easier to cover. So good quality in both code and tests will lead to higher coverage. But requiring a high code coverage will not lead to more quality in code and tests.

A way to improve the expressiveness of code coverage is to combine it with other measurements like complexity, correlate it with information about the importance of certain parts of the code, incorporate information about bugs reported after release…

 

Working with Enums in C#

Just wanted to share a helper class I’m using in some projects to make it easier to work with enums.

    using System;
    using System.Collections.Generic;
    using System.Globalization;
    using System.Linq;

    public static class EnumHelpers
    {
        private const string MustBeAnEnumeratedType = @"T must be an enumerated type";

        public static T ToEnum<T>(this string s) where T : struct, IConvertible
        {
            if (!typeof (T).IsEnum) throw new ArgumentException(MustBeAnEnumeratedType);
            T @enum;
            Enum.TryParse(s, out @enum);
            return @enum;
        }

        public static T ToEnum<T>(this int i) where T : struct, IConvertible
        {
            if (!typeof (T).IsEnum) throw new ArgumentException(MustBeAnEnumeratedType);
            return (T) Enum.ToObject(typeof (T), i);
        }

        public static T[] ToEnum<T>(this int[] value) where T : struct, IConvertible
        {
            if (!typeof (T).IsEnum) throw new ArgumentException(MustBeAnEnumeratedType);
            var result = new T[value.Length];
            for (int i = 0; i < value.Length; i++)
                result[i] = value[i].ToEnum<T>();
            return result;
        }

        public static T[] ToEnum<T>(this string[] value) where T : struct, IConvertible
        {
            if (!typeof (T).IsEnum) throw new ArgumentException(MustBeAnEnumeratedType);
            var result = new T[value.Length];
            for (int i = 0; i < value.Length; i++)
                result[i] = value[i].ToEnum<T>();
            return result;
        }

        public static IEnumerable<T> ToEnumFlags<T>(this int i) where T : struct, IConvertible
        {
            if (!typeof (T).IsEnum) throw new ArgumentException(MustBeAnEnumeratedType);
            return
                (from flagIterator in Enum.GetValues(typeof (T)).Cast<int>()
                    where (i & flagIterator) != 0
                    select ToEnum<T>(flagIterator));
        }

        public static bool CheckFlag<T>(this Enum value, T flag) where T : struct, IConvertible
        {
            if (!typeof(T).IsEnum) throw new ArgumentException(MustBeAnEnumeratedType);
            return (Convert.ToInt32(value, CultureInfo.InvariantCulture) & Convert.ToInt32(flag, CultureInfo.InvariantCulture)) != 0;
        }

        public static IDictionary<string, T> EnumToDictionary<T>() where T : struct, IConvertible
        {
            if (!typeof (T).IsEnum) throw new ArgumentException(MustBeAnEnumeratedType);
            IDictionary<string, T> list = new Dictionary<string, T>();
            Enum.GetNames(typeof (T)).ToList().ForEach(name => list.Add(name, name.ToEnum<T>()));
            return list;
        }
    }

The first ToEnum extension allows you to write something like this:

"Value1".ToEnum<MyEnum>()

It adds a ToEnum method to the string class in order to convert a string to an enum value.

The second ToEnum extension does the same but with ints e.g.:

0.ToEnum<MyEnum>()

The third ToEnum extension is similar but extends int arrays to return a collection of enum values e.g.:

new[] {1, 0}.ToEnum<MyEnum>()

The fourth ToEnum extension does the same but with string arrays e.g.:

new[] {"Value2", "Value1"}.ToEnum<MyEnum>()

The ToEnumFlags extension returns a list of enum values when working with flags e.g.:

13.ToEnumFlags<MyEnum2>().Select(e => e.ToString()).Aggregate((e, f) => (e + " " + f))

The EnumToDictionary method returns a dictionary with the enum item names as keys and the enum values as values e.g.:

EnumHelpers.EnumToDictionary<MyEnum>()["Value2"]

The CheckFlag extension checks whether an enum flag is set e.g.:

(MyEnum2.Flag1 | MyEnum2.Flag4).CheckFlag(MyEnum2.Flag4)

To test this you can use the following:

    internal enum MyEnum
    {
        Value1,
        Value2,
    }

    [Flags]
    internal enum MyEnum2
    {
        Flag1 = 1,
        Flag2 = 2,
        Flag3 = 4,
        Flag4 = 8,
    }

    public static class MainClass
    {
        public static void Main()
        {
            Console.WriteLine("1) {0}", (int) "Value1".ToEnum<MyEnum>());
            Console.WriteLine("2) {0}", 0.ToEnum<MyEnum>());
            Console.WriteLine("3) {0}", new[] {1, 0}.ToEnum<MyEnum>());
            Console.WriteLine("4) {0}", new[] {"Value2", "Value1"}.ToEnum<MyEnum>());
            Console.WriteLine("5) {0}", 13.ToEnumFlags<MyEnum2>().Select(e => e.ToString()).Aggregate((e, f) => (e + " " + f)));
            Console.WriteLine("6) {0}", (int) EnumHelpers.EnumToDictionary<MyEnum>()["Value2"]);    
            Console.WriteLine("7) {0}", (MyEnum2.Flag1 | MyEnum2.Flag4).CheckFlag(MyEnum2.Flag4));
            Console.WriteLine("8) {0}", (MyEnum2.Flag1 | MyEnum2.Flag4).CheckFlag(MyEnum2.Flag3));
        }
    }

which will output:

1) 0
2) Value1
3) Playground.MyEnum[]
4) Playground.MyEnum[]
5) Flag1 Flag3 Flag4
6) 1
7) True
8) False

I guess most methods are pretty basic but sometimes it’s easier to just have a helper class and not have to think about how to solve the problem again and again (even though it’s not a very complex problem)…

Three options to dynamically execute C# code

I’ve been working in the past few months on a project where users can write C# to query data from a model. This means that I needed to dynamically execute C# code entered by the user.

There are basically three options for compiling and executing C# code dynamically:

  • Using the CodeDOM compiler
  • Using the Mono Compiler Service
  • Using Roslyn

Using the CodeDOM (Code Document Object Model), you can dynamically compile source code into a binary assembly. The steps involved are:

  1. Create an instance of the CSharpCodeProvider
  2. Compile an assembly from source code
  3. Get the type from the assembly
  4. Instantiate this type
  5. Get a reference to the method
  6. Call the method with the appropriate parameters

Here’s an example of how to implement steps 1 and 2:

        private static Assembly CompileSourceCodeDom(string sourceCode)
        {
            CodeDomProvider cpd = new CSharpCodeProvider();
            var cp = new CompilerParameters();
            cp.ReferencedAssemblies.Add("System.dll");
            cp.GenerateExecutable = false;
            CompilerResults cr = cpd.CompileAssemblyFromSource(cp, sourceCode);

            return cr.CompiledAssembly;
        }

The other steps can then be implemented like this:

        private static void ExecuteFromAssembly(Assembly assembly)
        {
            Type fooType = assembly.GetType("Foo");
            MethodInfo printMethod = fooType.GetMethod("Print");
            object foo = assembly.CreateInstance("Foo");
            printMethod.Invoke(foo, BindingFlags.InvokeMethod, null, null, CultureInfo.CurrentCulture);
        }

Since you can only compile full assemblies with the CodeDOM, this means that:

  • You cannot execute snippets of code, you need a full class definition
  • Every time you compile some code a new assembly will be created and loaded

Using the Mono Compiler Service (MCS), you can also execute code dynamically. The big difference here (apart from the fact that you need to reference a few additional assemblies) is that MCS allows you to evaluate code without needing to define a class with methods. So instead of this:

class Foo
{
    public void Print()
    {
        System.Console.WriteLine("Hello benohead.com !");
    }
}

You could just evaluate the following:

System.Console.WriteLine("Hello benohead.com !");

This is of course very useful when dynamically executing pieces of code provided by a user.

Here is a minimal implementation using MCS:

private static void ExecuteMono(string fooSource)
{
    new Evaluator(new CompilerContext(new CompilerSettings(), new ConsoleReportPrinter())).Run(fooSource);
}

If you need a return value, you can use the Evaluate method instead of the Run method.
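For example, a minimal variant of the method above that returns the evaluated value could look like this (same Mono.CSharp types as before, still without any error handling):

private static object EvaluateMono(string expression)
{
    // Evaluate compiles and runs the expression and returns its value,
    // e.g. passing "1 + 2" returns the boxed integer 3
    var evaluator = new Evaluator(new CompilerContext(new CompilerSettings(), new ConsoleReportPrinter()));
    return evaluator.Evaluate(expression);
}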

One other advantage of MCS is that it doesn’t write an assembly to disk and then load it. So when you evaluate 1000 expressions, you don’t end up loading 1000 new assemblies.

The last and latest option to handle dynamic C# code execution is to use the .NET Compiler Platform Roslyn. Roslyn is a set of open-source compilers and code analysis APIs for C# and Visual Basic.NET.

Dynamically executing code with Roslyn is actually pretty similar to the way you would do it with the CodeDOM. You have to create an assembly, get an instance of the class and execute the method using reflection.

Here is an example of how to create the assembly:

        private static Assembly CompileSourceRoslyn(string fooSource)
        {
            using (var ms = new MemoryStream())
            {
                string assemblyFileName = "gen" + Guid.NewGuid().ToString().Replace("-", "") + ".dll";

                CSharpCompilation compilation = CSharpCompilation.Create(assemblyFileName,
                    new[] {CSharpSyntaxTree.ParseText(fooSource)},
                    new[]
                    {
                        new MetadataFileReference(typeof (object).Assembly.Location)
                    },
                    new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary)
                    );

                compilation.Emit(ms);
                Assembly assembly = Assembly.Load(ms.GetBuffer());
                return assembly;
            }
        }

Note that I do not handle any error scenarios here (or in the other pieces of code in this post), but to use this in a production system, you’d of course need to add proper error handling.

As with CodeDOM, a new assembly is created every time code is dynamically compiled. The difference is that you need to create a temporary assembly name yourself.

Executing the code from the assembly is then the same as with the CodeDOM.

Now let’s look at it from a performance perspective. Running the dynamic execution 100 times in a row with each of the three methods, I got the following results:

roslyn: assemblies loaded=114
roslyn: time=2049,2049 milliseconds

codedom: assemblies loaded=100
codedom: 7512,7512 milliseconds

mono: assemblies loaded=6
mono: time=15163,5162 milliseconds

Running it multiple times has shown that the numbers are pretty stable.

So you can see that with Roslyn and CodeDOM, the number of assemblies loaded increases with the number of runs. The 14 additional assemblies with Roslyn as well as the 6 assemblies with MCS are due to the fact that the assemblies required by the compilers themselves are not loaded when the program starts but when they are first used.

In terms of the number of loaded assemblies, MCS is a clear winner: it doesn’t need to load any assemblies except those required by the compiler itself.

From a performance perspective, Roslyn is quite impressive being more than 3 times faster than the CodeDOM and more than 7 times faster than MCS.

The main disadvantage of Roslyn versus CodeDOM is that Roslyn is not yet production ready and is still very much a work in progress. Once Roslyn is more feature-complete and production ready, there will be no reason to keep using CodeDOM for dynamic code execution.

Although it’s the slowest solution, MCS is the one which scales best, so if you have to execute an unbounded number of dynamic code snippets, it’s the way to go. Also, if you are just executing snippets of code and not whole classes, it makes your life easier.

 

Microsoft Word 2007: Bullets and numbering in unprotected sections

Let’s assume you have a template where you only want users to be able to write in a few sections of the document but not in the rest, e.g. because the rest is automatically generated. You would protect the document and leave these few sections unprotected. In Word 2003, it is possible to write text there and format it however you want, including numbered or bulleted lists.

Now if you open a document based on this template in Word 2007, you’ll still be able to write text in the unprotected sections, but when you select text and right-click to open the context menu, you’ll see that bullets and numbering are dimmed (disabled). The buttons for bullets and numbering in the toolbar are disabled as well.

What’s strange (apart from the fact that it worked fine in Word 2003 but not in Word 2007) is that typing a star and a space (or a 1 and a space) will still automatically create a bulleted (or numbered) list. So it’s not that bullets and numbering don’t work; they are just disabled in the context menu and toolbars.

The only solution for this is to define quick styles (the ones you see in the Styles group of the Home tab) with the appropriate formatting and use them to apply this formatting to the text in the unprotected section. So just define all the styles you want to use in a protected document as quick styles and the problem is solved! This doesn’t seem to make much sense but it works. It looks like Word 2007 considers applying existing styles in an unprotected section of a protected document to be fine, but treats using the formatting options in the context menu or toolbar as modifying styles.

And if all you need are bullets, you can also use the shortcut for a bulleted list: Ctrl+Shift+L.

WordPress: 2D and 3D taxonomy cloud

Some time ago, I published a WordPress plugin called WP Category Tag Cloud.

With this plugin, you can display a configurable 2D or 3D cloud of taxonomy terms (like tags, categories or any custom taxonomy) and filter them by tags or categories.

My first use case was to display a list of tags and only show tags used in posts belonging to a given category. For example, if you have two areas in your web site (like a business web page and a blog) and you separate these two by using different categories, but the tags partly overlap, you can use this plugin to define a widget which will only show tags used in posts of the blog category.

And over time, there were a few other use cases which came from web sites I am hosting or from user requests. So this plugin provides many options to define how the cloud should be rendered and what should be contained in the cloud.

The cloud elements can be displayed in many different ways:

as a flat list separated by spaces:

Linux Sybase Wordpress Java Mac CSharp SEO PHP CSS Windows

as a bulleted list:

as price tags:

as rectangular tags with rounded corners:

as horizontal bars:

as a 3D HTML5 based tag cloud:

You can also see this one in action in the side bar of this blog.

 

Additionally, you can set many options to tweak the way the cloud is displayed:

randomly colorize terms:

Linux Sybase Wordpress Java Mac CSharp SEO PHP CSS Windows

make less used terms less opaque:

Linux Sybase Wordpress Java Mac CSharp SEO PHP CSS Windows

randomly tilt terms:

Linux Sybase Wordpress Java Mac CSharp SEO PHP CSS Windows

all of the above:

Linux Sybase Wordpress Java Mac CSharp SEO PHP CSS Windows

 

There are even more options available to make this widget look the way you want it to. And new options are added in every new version.

In order to make sure the performance is good, you can have the widgets cached and define how long they should be cached. Of course, if you change any setting of a widget, it will be regenerated even if it was already in the cache.

The cloud can be displayed in the sidebar using a widget, in any post or page using a shortcode, or anywhere else using the provided PHP function. For more details, please check the FAQ of the plugin.

There are still a few other features that I have on my list but I feel this plugin has already evolved over the past few weeks into one of the more flexible and powerful term cloud plugins for WordPress.

To give it a try, go to the plugin page on WordPress.org and download the plugin. The installation is as easy as searching for it on the plugin installation page of your WordPress installation (or uploading the folder wp-category-tag-could to /wp-content/plugins/), activating the plugin through the ‘Plugins’ menu in WordPress and then adding and configuring the widget, or using the PHP function or the shortcode.

If you try this plugin, please do not forget to leave a review on WordPress.org. And if you find issues or have ideas on how to improve the plugin, use the support feature on wordpress.org.

Windows: Network connections timing out too quickly on temporary connectivity loss

If you have a rather unstable network where you frequently lose connectivity for a short time, you might notice that established connections (e.g. SSH connections using PuTTY) get lost. You can then immediately reconnect, but it’s still a pain.

The issue is not really with the software losing the connection (e.g. PuTTY) but rather with the Windows network configuration. A single application cannot override these settings just for its own connections or for a specific session, so to prevent this problem you will need to tweak a few Windows network settings.

Basically tweaking these settings means increasing the TCP timeout in Windows. This can be done in the registry.

The relevant TCP/IP settings are:

  • KeepAliveTime
  • KeepAliveInterval
  • TcpMaxDataRetransmissions

These parameters are all located at the following registry location: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters.

On Windows versions which are not based on Windows NT (i.e. Windows 95, Windows 98 and Windows ME), these parameters are located under: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\VxD\MSTCP.

KeepAliveTime

The KeepAliveTime parameter controls how long the TCP driver waits before a keep-alive packet is sent over an idle TCP connection. A TCP keep-alive packet is simply an ACK packet sent over the connection with the sequence number set to one less than the current sequence number for the connection. When the other end receives this packet, it sends an ACK as a response with the current sequence number. This exchange is used to make sure that the remote host at the other end of the connection is still available and that the connection is kept open.

Since TCP keep-alives are disabled by default, the application opening the connection needs to specifically enable them.
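For example, in .NET an application could opt into keep-alives on a socket like this (a minimal sketch with a made-up host and port):

    using System.Net.Sockets;

    class KeepAliveExample
    {
        static void Main()
        {
            var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            // Enable TCP keep-alives for this socket; the timing then follows
            // the registry parameters described in this post
            socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);
            socket.Connect("example.com", 22);
        }
    }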

The value is the number of milliseconds of inactivity before a keep-alive packet is sent. The default is 7,200,000 milliseconds (ms) i.e. 2 hours.

Note that the default of 2 hours might be too high in some cases. Having a high KeepAliveTime brings two problems:

  1. it may cause a delay before the machine at one end of the connection detects that the remote machine is no longer available
  2. many firewalls drop the session if no traffic occurs for a given amount of time

In the first case, even if your application can handle reconnect scenarios, it will take a very long time until it notices that the connection is dead, whereas it could have handled the situation properly if the connection had failed fast.

In the second case, it’s the opposite: the connection is artificially closed by the firewall in between.

If you encounter one of these cases on a regular basis, you should consider reducing the KeepAliveTime from 2 hours to 10 or 15 minutes (i.e. 600,000 or 900,000 milliseconds).

But also keep in mind that lowering the value for the KeepAliveTime:

  • increases network activity on idle connections
  • can cause active working connections to terminate because of latency issues.

Setting it to less than 10 seconds is not a good idea unless you have a network environment with very low latency.

KeepAliveInterval

If the remote host at the other end of the connection does not respond to the keep-alive packet, it is repeated. This is where KeepAliveInterval is used. This parameter determines how often this retry mechanism is triggered, i.e. the wait time before another keep-alive packet is sent. If at some point the remote host responds to the keep-alive packet, the next keep-alive packet will again be sent based on the KeepAliveTime parameter (assuming the connection is still idle).

The value is the number of milliseconds before a keep-alive packet is resent. The default is 1,000 milliseconds (ms), i.e. 1 second. If the network connectivity losses sometimes last a few minutes, it would make sense to increase this parameter to 60,000 milliseconds, i.e. 1 minute.

TcpMaxDataRetransmissions

Of course this retry process cannot go on forever. If the connection is not only temporarily lost but lost for good, then it needs to be closed. This is where the TcpMaxDataRetransmissions parameter is used. It defines the number of keep-alive retries to be performed before the connection is aborted.

The default value is to perform 5 TCP keep-alive retransmits. If you experience network instability and lose connections too often, you should consider increasing this value to 10 or 15.

Note that starting with Windows Vista, this parameter doesn’t exist anymore and is replaced by a hard-coded value of 10. After 10 unanswered retransmissions, the connection is aborted. But you can still control the time frame during which a connection can survive a temporary connectivity loss by adapting the KeepAliveInterval parameter.

Also note that this parameter only exists in Windows NT based versions of Windows. On old systems running Windows 95, Windows 98 or Windows ME, the corresponding parameter is HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\VxD\MSTCP\MaxDataRetries.

Summary

By tweaking the parameters above, you can configure the Windows TCP driver so that connections survive small connectivity losses. Remember that after changing these settings, you’ll need to reboot the machine (it’s Windows after all…).

If you cannot modify TcpMaxDataRetransmissions because you have a newer version of Windows, you can still achieve the same result by increasing KeepAliveInterval instead.

Also note that issues with lost connections in unstable networks seem to especially affect Windows Vista and later. So if you move from Windows XP to, let’s say, Windows 7 and you experience such issues, you should first add the KeepAliveTime and KeepAliveInterval parameters to the registry, reboot, check whether it’s better and possibly increase the value of KeepAliveInterval if required.

All parameters above should be stored in the registry as DWORD (32-bit) values.
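If you prefer to script the change instead of editing the registry by hand, here is a minimal C# sketch setting the example values discussed above (it must be run with administrator rights, and a reboot is still required afterwards):

    using Microsoft.Win32;

    class TcpKeepAliveSettings
    {
        static void Main()
        {
            const string key = @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters";

            // 10 minutes of idle time before the first keep-alive packet is sent
            Registry.SetValue(key, "KeepAliveTime", 600000, RegistryValueKind.DWord);

            // wait 1 minute between keep-alive retransmissions
            Registry.SetValue(key, "KeepAliveInterval", 60000, RegistryValueKind.DWord);
        }
    }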

404 Error: /eyeblaster/addineyev2.html

While looking at the content access statistics in Google Analytics, I noticed that a few visitors were accessing a page with the URL /eyeblaster/addineyev2.html. Since I was pretty sure I didn’t have such a page, I first thought that my site might have been hacked and checked this page. It led me to the 404 error page. So this URL didn’t exist but some of my visitors were redirected to it. The traffic to this page was not very high (never more than 5 page views a day) but I could see that it added up to 25 page views since mid-February.

Next step was to find out where these visitors were coming from. The navigation summary in Google Analytics for this page looks like this:

Navigation Summary

So half of the visitors came from a previous page and half from outside. The distribution of previous pages was pretty random, so it is definitely not related to specific pages:

Previous Page Path

But the important point is that most visitors leave my blog once redirected. Googling for this issue, I found a Google support page:

If you are trafficking an EyeBlaster Flash creative as a 3rd-party tag, you will need to put the addineyev2.html file to www.example.com/eyeblaster/addineyev2.html

The code for addineyev2.html can be found on adopstools.com. It’s basically just loading a JavaScript file:

<HTML>
<HEAD>
</HEAD>
<BODY style=margin:0;padding:0>
<SCRIPT src="http://ds.eyeblaster.com/BurstingScript/addineye.js">
</script>
</BODY>
</HTML>

If you add this HTML page, it would also make sense to prevent search engines from indexing it by adding the following to your robots.txt:

Disallow: /eyeblaster
Disallow: /addineyeV2.html

Or by adding the following metatag to the head section of this HTML page:

<meta name="robots" content="noindex" />

But since I didn’t feel comfortable adding such a page to my site (especially since it’s just loading some JavaScript from a third-party site), I looked for another solution. If the script were actually hosted by Google I might have trusted it, but that’s not the case.

Looking at the referenced JavaScript file, I found that it seems to be related to the MediaMind ad network, and googling also showed that Eyeblaster rebranded as MediaMind. So an alternative to adding the HTML file above to your website is to block ads from this network. A network which forces you to add files to your site (assuming you can add files at all) should arguably be blocked by Google by default. But since they haven’t done it, we’ll have to do it manually.

To block this network, you have to go to AdSense, click on Allow & block ads and type "mediamind" in the search box. You’ll see something like this:

allow block MediaMind ad network

Then click on the left side button for each of these networks to block them:

block MediaMind ad network

Now you shouldn’t get these ads which end up redirecting your users to this non-existent HTML page, and you’ll display ads from networks which behave better. I suspect that other networks might also be using this script, so I’ll have to monitor this and find out which other ad networks I’ll have to block.

Visual Studio 2012: mspdb110.dll is missing when executing editbin.exe

I was having some issues using an OCX in an old application I had recompiled using Visual Studio 2012. One thing I found is that it might be related to a compatibility issue with Data Execution Prevention (DEP). Since I couldn’t recompile the OCX and didn’t have direct access to the linker settings, I used editbin.exe to apply /NXCOMPAT:NO. But when I ran the following:

C:\Users\benohead>"c:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin\editbin.exe" /NXCOMPAT:NO myfile.exe

I got a system error from link.exe saying:

The program can’t start because mspdb110.dll is missing from your computer. Try reinstalling the program to fix this problem.

The cause of this error was that I had executed the command in the wrong command prompt, i.e. not one in which vcvars32.bat had been run. So I just executed vcvars32.bat:

"c:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin\vcvars32.bat"

Then I gave it another try, and no error message was displayed.

WordPress: Network-wide plugin settings

When developing a plugin which will be used on single-site as well as multi-site installations, there are two ways of supporting multi-site deployments:

  1. The plugin is activated per site and provides per-site settings only.
  2. The plugin can be activated for the whole network and provides network-wide settings

I’m currently working on a plugin which needs to support both scenarios. The first scenario is pretty straightforward. But the second one is a little bit trickier and requires handling the following:

  • Proper support of the network activation
  • Activation for new sites
  • Network-wide settings
  • Proper support of the network deactivation

Proper support of the network activation

To activate the plugin network-wide in a multisite environment, you’ll need to go through all the blogs and activate the plugin for each of them individually, e.g.:

register_activation_hook( __FILE__, 'my_plugin_network_activate' );

function my_plugin_network_activate($network_wide)
{
	global $wpdb;

	if (function_exists('is_multisite') && is_multisite() && $network_wide) {
		$current_blog = $wpdb->blogid;
		$blogs = $wpdb->get_col("SELECT blog_id FROM $wpdb->blogs");
		foreach ($blogs as $blog) {
			switch_to_blog($blog);
			my_plugin_activate();
		}
		switch_to_blog($current_blog);
	} else {
		my_plugin_activate();
	}
}

function my_plugin_activate()
{
	//activate the plugin for the current blog
}

If we are in a multisite environment but the plugin is only activated for a single blog, a normal activation is performed. Otherwise we fetch the list of all blogs and activate the plugin for each of them one by one before switching back to the current blog.

Note that you should not use restore_current_blog() to get back to the original blog, since it only reverts to the blog which was active before the last switch_to_blog(). So if switch_to_blog() is called multiple times, it will not revert to the blog which was active before all this started.

Activation for new sites

When a new blog is added, you’ll need to specifically activate the plugin for this blog since it was not present when the network activation occurred.

add_action( 'wpmu_new_blog', 'activate_new_blog' );

function activate_new_blog($blog_id) {
	global $wpdb;

	if (is_plugin_active_for_network('plugin_name/plugin_main_file.php')) {
		switch_to_blog($blog_id);
		my_plugin_activate();
		restore_current_blog();
	}
}

So first we check whether the plugin was activated for the whole network. If that is the case, we switch to the new blog, activate the plugin for this blog and switch back to the previous blog. Here it is fine to use restore_current_blog() since we only use one switch_to_blog().

Network-wide settings

In a single-site environment, you would use the functions get_option() and update_option() to respectively read and write your plugin settings. In a multi-site environment, you should use the get_site_option() and update_site_option() functions instead.

So instead of:

get_option('my_settings', array())
...
update_option( 'my_settings', $settings );

Use:

get_site_option('my_settings', array())
...
update_site_option( 'my_settings', $settings );

When registering your settings page with add_submenu_page(), you’ll need to use settings.php as the parent slug instead of options-general.php if your plugin has been network activated.

Also, when linking to your settings page (e.g. from the plugins list), in a single-site environment you’d do it this way:

<a href="options-general.php?page=my_settings">Settings</a>

But to link to the network settings page, you’ll need to use the following:

<a href="settings.php?page=my_settings">Settings</a>

The hook for adding the link is also different. Instead of adding a filter for ‘plugin_action_links_pluginname_pluginfile’, you’ll have to add a filter for ‘network_admin_plugin_action_links_pluginname_pluginfile’.

The hook used to call your registration function is also different: instead of adding an action to the ‘admin_menu’ hook, you should add an action to the ‘network_admin_menu’ hook in a network activation scenario.

On your settings page, in a single-site environment, you can use the Settings API and do not need to update your settings manually, by using options.php as the form action:

<form method="post" action="options.php">

For network-wide settings it’s not so easy. There is no direct equivalent to options.php for such settings. You will need to post to edit.php and provide an action name which will be linked to one of your functions, where you update the settings yourself. Here is the registration and definition of this function:

add_action('network_admin_edit_my_settings', 'update_network_setting');

function update_network_setting() {
	update_site_option( 'my_settings', $_POST[ 'my_settings' ] );
	wp_redirect( add_query_arg( array( 'page' => 'my_settings', 'updated' => 'true' ), network_admin_url( 'settings.php' ) ) );
	exit;
}

Of course, instead of my_settings, you should use the name of your settings. The name of the action hook is network_admin_edit_ followed by an action name which will be used as a parameter to edit.php in the form definition, e.g.:

<form method="post" action="edit.php?action=my_settings">

Of course, since you want to support both the normal and the network activation scenarios, you’ll have to use is_plugin_active_for_network() to figure out in which scenario you are and trigger the appropriate logic.

Proper support of the network deactivation

For the deactivation, you need to do exactly the same as you’ve already done for the activation, i.e.:

register_deactivation_hook( __FILE__, 'my_plugin_network_deactivate' );

function my_plugin_network_deactivate($network_wide)
{
	global $wpdb;

	if (function_exists('is_multisite') && is_multisite() && $network_wide) {
		$current_blog = $wpdb->blogid;
		$blogs = $wpdb->get_col("SELECT blog_id FROM $wpdb->blogs");
		foreach ($blogs as $blog) {
			switch_to_blog($blog);
			my_plugin_deactivate();
		}
		switch_to_blog($current_blog);
	} else {
		my_plugin_deactivate();
	}
}

function my_plugin_deactivate()
{
	//deactivate the plugin for the current blog
}

Of course in a network-wide deactivation scenario, you may want to add some additional cleanup after the plugin has been deactivated for all blogs.

Conclusion

So supporting all three scenarios (single site, multisite with per blog activation and multisite with network activation) is not so difficult. The only tricky part is handling network-wide settings. Unfortunately, the settings API does not help you much here. But once you’ve done it for a plugin, it’s just a matter of copy&paste.

Note that if you do not perform anything special as part of your activation, you do not need to handle the multisite activation yourself since WordPress will activate the plugin appropriately. It is only required if your plugin does some additional magic on activation, since WordPress will not call your activation code for each blog.