About WordPress Multisite

WordPress Multisite has been around for quite some time now. But since many users still have a single-site installation and it’s not so easy to install WP Multisite using alternative ports (see this post), you will notice two things. First, not all plugins behave well in a multisite environment. Second, if you have a problem and google for it, you will most likely find solutions working on single-site installations, but it’s not always easy to find out whether a solution will also work in a multisite environment and, if not, how to modify it accordingly.

That’s the reason why I’ve started writing this post. First, I am a plugin author and need to have my plugins work in a multisite environment. Second, I am considering moving from multiple single sites to a multisite installation myself.

What’s WordPress Multisite?

WordPress Multisite was introduced in WordPress 3.0. It basically allows you to host multiple sites in a single WordPress installation. This means that once your WordPress installation is converted to multisite, it becomes a network of sites, with one central site used to administer the network and a series of virtual sites making up your network.

The sites in this network are not completely independent of one another. They share some features, but they also have their own data and settings. So it is important to understand what’s shared and what isn’t.

Just before we really start: in the WordPress multi-user solution which was available before Multisite was introduced in WordPress 3.0, the wording was a little different. That’s why I sometimes tend to mix things up. I’ll try to stick to the WP 3.0 wording but can’t promise I’ll always manage to do it.

What used to be called a Blog before WP 3.0 is now a site.
What used to be called a site before WP 3.0 is now a network.

What’s different with WordPress Multisite?

There are a few differences between a multisite based network of sites and a network of individual WordPress installations. Except for the WordPress PHP scripts themselves, a WordPress installation is basically made of some database content and the contents of the wp-content subdirectory.

wp-content

Let’s start with wp-content. It contains your plugins, themes and uploads (in three different directories).

Plugins

Plugins are installed for the whole network. Since you cannot install plugins per site, there is no difference in the directory structure.

Note that this doesn’t mean that when you install a plugin it has to be active on all sites in the network. But it does mean that if a plugin is not installed globally, a site administrator cannot install this plugin just for a single site.

It also means that when a plugin is updated, it’s updated for all sites in the network. This is a good thing since you do not have to update the plugin for each site individually. On the other hand, if you know that one site has a problem with the new version of a plugin while another site needs the new version, there is no way to handle this in a WordPress Multisite installation.

But even though, for a given plugin, the installation files are the same for all sites in the network, this doesn’t mean that the settings of this plugin have to be the same for all sites. Depending on the plugin, you may want to have a site-specific configuration or a network-wide configuration. See this post to learn more about network-wide plugin settings.
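
As a minimal sketch of the difference (the option name “myplugin_color” is just a made-up example), get_option()/update_option() work on the options of the current site while get_site_option()/update_site_option() work network-wide:

// site-specific setting: stored in the options table of the current site
$color = get_option( 'myplugin_color', 'blue' );
update_option( 'myplugin_color', 'red' );

// network-wide setting: stored once for the whole network
$color = get_site_option( 'myplugin_color', 'blue' );
update_site_option( 'myplugin_color', 'red' );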

Also since the plugin files are available globally, this means that as a plugin developer, you do not need to take care of Multisite installations when enqueuing files (JavaScript or CSS files).
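
For example, a standard enqueue like the following (handle and file name are made up) works unchanged whether the plugin runs on a single site or in a network:

function myplugin_enqueue_scripts() {
    // enqueue the plugin's script the usual way; no multisite-specific code needed
    wp_enqueue_script( 'myplugin-script', plugins_url( 'js/myplugin.js', __FILE__ ), array( 'jquery' ) );
}
add_action( 'wp_enqueue_scripts', 'myplugin_enqueue_scripts' );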

Themes

Themes work the same way as plugins in this respect. Themes are installed globally, and a user administering a site in the network can activate them for that site. You can also restrict themes for individual sites and make them available only to a subset of the sites.

When working with themes and Multisite, it is important to keep in mind that whenever you update a file of the theme, you’re not modifying the file for one site but for all sites using this theme. So if you use a theme and would like to change a few things e.g. some CSS, instead of modifying the theme itself, you should do one of the following:

  • Create a child theme to use in the individual sites
  • If it is already a child theme, create a copy of the theme before modifying it. The drawback is that you will not get automatic updates from wordpress.org anymore since it’s no longer the original theme
  • Use themes which provide you with means to make the adaptations you need in their settings
  • If all you want to change are CSS styles, there are also plugins allowing you to load some additional styles

Uploads

Uploads work in a different way. Unlike plugins and themes, uploads can either be available at the network level or for a specific site.

So under the uploads directory, you will find one subfolder per year for the network-level uploads and a “sites” subdirectory. The “sites” subdirectory contains in turn one subdirectory per site. The names of these subdirectories are numbers representing the site ID. And in these site subdirectories, you will again find a subdirectory per year.
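
As an illustration, the structure could look like this for a site with ID 2 (a sketch; the year folders obviously depend on when the files were uploaded):

wp-content/uploads/2014/...          <- network-level uploads
wp-content/uploads/sites/2/2014/...  <- uploads of the site with ID 2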

The database

First of all, even in a Multisite installation, all tables, whether central tables or tables for individual sites in the network, are stored in a single MySQL database.

Assuming you chose the prefix “wp_” when installing WordPress, all central tables will be called wp_something and the site specific tables will be called wp_X_something, where X is the site ID.

The central site of your network will contain all tables you’d normally see in a single site WordPress installation. The individual sites in the network will only have a subset of these tables, containing the data which are stored per site:

  • wp_X_comments and wp_X_commentmeta contain data related to comments
  • wp_X_posts and wp_X_postmeta contain data related to posts
  • wp_X_terms, wp_X_term_relationships and wp_X_term_taxonomy contain data related to terms and taxonomies
  • wp_X_links contains site specific links
  • wp_X_options contains site-specific settings and options

wp_X_options also includes data about plugins and themes available on individual sites.

As said before, the central site also has all these tables. Additionally, it also has a few more database tables:

  • wp_blogs contains a list of all sites in the network. It’s still called “blogs” as in the old wording.
  • wp_blog_versions contains the db version of each site. I guess you usually don’t care about this table.
  • wp_users contains all users in the network
  • wp_usermeta contains data related to the users in wp_users
  • wp_registration_log contains a list of which users are registered on which site and when they registered
  • wp_signups contains a list of users which signed up on your network along with the activation key and when/whether they were activated
  • wp_site contains a list of networks and their primary domain. So you could have a WordPress installation with multiple networks, each network having multiple sites
  • wp_sitemeta contains additional information about the networks

Handling WordPress Multisite in your code

First of all, most WordPress functions you might be using are related to the current site. So when you get e.g. a list of categories using get_categories(), you will see only the categories for this one site. If you need to get e.g. the categories for all sites, it will require some extra work.

There are four functions which are relevant for such things:

  • get_current_site()
  • switch_to_blog()
  • restore_current_blog()
  • wp_get_sites()

get_current_site

This function returns a site object containing the following public variables:

  • id: this is the site ID (number)
  • domain: this is the domain name of the current site
  • path: this is the URL path of the current site
  • site_name: this is the name of the current site

You can also get the ID of the current site using get_current_blog_id().

switch_to_blog

This function takes a site ID as parameter. If the provided site ID doesn’t exist, it will return false.

Once you’re done doing whatever you need to do with this other site, you can get back to the original site using restore_current_blog().

restore_current_blog

You should call restore_current_blog() after every call to switch_to_blog(). The reason is that WordPress keeps track of site switches, and if you do not use restore_current_blog(), the global variable $GLOBALS['_wp_switched_stack'] will be left with stale entries.

An alternative is to clean up $GLOBALS['_wp_switched_stack'] yourself:

$GLOBALS['_wp_switched_stack'] = array();
$GLOBALS['switched'] = false;

wp_get_sites

This function returns an array with the list of all sites in the network. You can use it to get a list of all sites and iterate through them, switching from one site to the next using switch_to_blog().

Iterating through sites

In order to iterate through sites e.g. in order to get all categories for all sites, you can do the following:

// store the original current site
$original_site = get_current_site();

// get all sites
$sites = wp_get_sites();

// loop through the sites
foreach ( $sites as $site ) {

	// switch to this one site
	switch_to_blog( $site['blog_id'] );

	// do something here

}

// return to the original site
switch_to_blog( $original_site->id );

Note that by doing this, the global variables tracking the site switches will not be reset. So it is recommended to do the following instead:

// get all sites
$sites = wp_get_sites();

// loop through the sites
foreach ( $sites as $site ) {

	// switch to this one site
	switch_to_blog( $site['blog_id'] );

	// do something here

	// restore the original site
	restore_current_blog();
}

Or something like this:

// store the original current site
$original_site = get_current_site();

// get all sites
$sites = wp_get_sites();

// loop through the sites
foreach ( $sites as $site ) {

	// switch to this one site
	switch_to_blog( $site['blog_id'] );

	// do something here

}

// return to the original site
switch_to_blog( $original_site->id );

// manually reset the global variables tracking site switching
unset( $GLOBALS['_wp_switched_stack'] );
$GLOBALS['switched'] = false;

TO BE CONTINUED…

Sybase ASE: using CPU affinity to prevent all engines from using the same CPU

Number of engines

ASE creates an OS-level process for each engine you define. You can change the number of engines by using the following command:

sp_configure "engine", N

(replace N by the number of engines you want to configure).

Usually, if your server is almost exclusively used for ASE and you have X CPU cores available, you will want to configure X-1 engines. So assuming you have a dedicated server with 4 CPU cores, you’ll want to configure 3 engines.
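
Before changing anything, you can display the current values by calling sp_configure with just the parameter name:

sp_configure "max online engines"
sp_configure "number of engines at startup"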

You can change this setting also in the SYBASE.cfg:

[Processors]
max online engines = 3
number of engines at startup = 3

The first line defines that there will be 3 engines and the second one that all 3 engines will be started by default.

In many cases, it makes sense to set both parameters to the same value so that you automatically use all available engines. But you can also set the second one to a lower value, benchmark the system with fewer engines and then bring additional engines online one after another.

Increasing the maximum number of online engines to a number higher than the number of available logical CPUs makes no sense. So I’d always recommend setting it to the total number of logical CPUs or this number minus 1. Whether you bring them all online at startup or not depends on what else is running on the system and the specific workload you have on this server.

If you configure too many ASE engines for the underlying CPUs, you will observe a significant loss of throughput. This is due to the high number of involuntary context switches.

Hyper-Threading

Hyper-Threading creates “virtual CPUs”. An application running on a system where Hyper-Threading is enabled will think that there are twice as many CPUs as are physically available. ASE does not distinguish between virtual CPUs and physical CPUs.

Although Hyper-Threading provides the ability to run two ASE engines on one physical processor, you need to keep in mind that this is still not equivalent to running two engines on two physical processors.

In many cases, you should consider switching off Hyper-Threading. Unless you only have very few physical CPU cores available, HT will probably not bring you the expected benefits. You might run into problems because ASE doesn’t see that two logical CPUs are actually backed by the same physical CPU: it would be better to distribute the load between physical CPUs instead of sending two queries to the two logical CPUs sharing one physical CPU. ASE could also schedule queries running at the same time on the virtual CPUs only, when it would be better to run them on separate physical CPUs (although theoretically, there should be no difference in performance between a virtual CPU and a real one).

But keep in mind that whether HT will bring performance benefits or, on the contrary, make your system slower really depends on the system itself. It highly depends on your actual hardware and workload. So benchmarking it on the specific system is still a good idea.

ASE 15.7 comes with a threaded kernel. It takes advantage of threaded CPU architectures. It can thus reduce context switching to threads instead of processes, which brings a performance boost. But this is not related to Hyper-Threading.

Using the default kernel for ASE 15.7, each engine is a thread which lives in a thread pool instead of being an OS process (which was already the case for ASE on Windows even before ASE 15.7).

CPU Affinity

The processes for the different ASE engines have by default no affinity to the physical or virtual processors. Usually, it is not required to force any CPU affinity as ASE will handle it properly.

Here’s an example with 3 engines running and 3 heavy queries running in parallel:

ASE parallel queries good case

Here you see that there are 4 CPUs and the 3 engines are running on CPU0, CPU1 and CPU3. You can also press “f”, “j” and Return in top to display an additional column which explicitly shows which engine is using which CPU:

ASE parallel queries good case with CPU number

The column “p” shows that the 3 dataserver processes use CPUs 0, 1 and 3.

In some cases (I am not sure when or why this happens), you will see that all dataserver processes are using the same CPU even though they are processing different queries. Since multiple tasks have to be handled by the same CPU, this makes each task slower and also causes a lot of overhead due to task switching.

If this happens, you can use the “dbcc tune” command to configure a CPU affinity. This can be done by using the following command:

dbcc tune(cpuaffinity, -1, "on")

The second parameter (-1 here) is the start CPU, and this CPU is always skipped. So setting it to -1 means that:

  • The first engine will be bound to CPU0
  • The second one to CPU1
  • The third one to CPU2

If you want to keep CPU0 for other processes, you’d use:

dbcc tune(cpuaffinity, 0, "on")

This will do the following:

  • The first engine will be bound to CPU1
  • The second one to CPU2
  • The third one to CPU3

After that you should see that all dataserver processes are using different CPUs.

Note: The setting will be active only until the ASE server is restarted. So the dbcc tune command must be reissued each time ASE is restarted.

Also note that some operating systems do not support CPU affinity. In this case, the dbcc tune command will be silently ignored.

.NET as Open Source

Microsoft is open sourcing some .NET components. This follows steps taken by Microsoft in this direction in the past few years (some may say baby steps with heavy marketing).

I still remember a time when you had Open Source projects on one side and Microsoft on the other side considering Open Source a “cancer that attaches itself in an intellectual property sense to everything it touches”.

Having worked for years with Open Source software on different platforms, I didn’t immediately embrace the .NET platform, which basically belonged to the Evil Empire. But at some point in my career, I was forced to start working with C#. Even though there were many aspects I liked, the lack of openness was always an issue for me.

A little bit more than a year ago, I started working on a project where I use the Mono Compiler Service, NRefactory and the Orchard CMS. Even though I had been programming in C# for a few years already, this is about the time when I started writing more articles related to .NET.

So hearing that more .NET components are being released under the MIT license is good news.

Of course, it doesn’t mean that .NET is now a completely open platform. First, not all of it is open source. Second, this is (still) a one-way street where Microsoft releases something and it then lives a life of its own, without contributions being taken back into the pieces of software distributed by Microsoft. So saying that Microsoft is open sourcing .NET is not completely true, and saying that .NET is now open source is kind of misleading, but it doesn’t mean that this news has to be discarded as a pure publicity stunt from Redmond.

On the other hand, Microsoft has slowly started understanding that we’re in a new kind of market where developer communities play a major role. Going it alone the way they did in the past will fail in the long term. The only way to attract more developers and make sure you will not become obsolete 10 years from now is to open up and create an expanding community based on your products. So of course, every small step in opening .NET will be publicized as a huge step and as Microsoft going Open Source, but even if you get rid of all the marketing fog, I feel it’s still good news. It shows that the Open Source community has reached an important milestone in altering the way our industry works.

For Xamarin and the Mono project, it’s for sure a very welcome evolution. Just like the availability of Roslyn opens new perspectives for NRefactory and projects using NRefactory, the free availability of this code will hopefully allow many improvements in Mono and also help make its development even faster. And if it one day makes Mono obsolete, as long as there is a multi-platform open alternative, I don’t see why that would be a bad thing.

So I’ll definitely have a look at the .NET Core CLR and the open source ASP.NET components on my Mac and on my Linux machines. And I sure do hope that this open sourcing of some .NET components will be followed by more and will lead to some great new projects. Of course Microsoft has an agenda of its own and open sourcing .NET components is not done just to make the world a better place. But does it really matter? I’m always ready to complain when Microsoft uses its market strength to push crappy software to the world, but whenever a change comes from their side and goes in the right direction, I feel we need to give them a chance.

Of course it’s great to do the right thing because it’s the right thing to do, but doing the right thing because you expect to earn money out of it or just get some positive publicity is still better than not doing it or doing something wrong. So instead of complaining, as some immediately did, let’s have a look at it and try to find out what new possibilities this opens.

JavaScript: Asynchronous abortable long running tasks

First, each browser window has a single thread it uses to render the page, process events and execute JavaScript code. So when programming in JavaScript, you’re pretty much stuck in a single-threaded environment.

This might sound surprising considering that everybody is using AJAX nowadays, which is short for Asynchronous JavaScript + XML. And we all know that pages using AJAX let you keep working on the page while the AJAX calls are executed.

What happens is that the server calls run in the background, but as soon as you process the results, you’re back in the single per-window thread. This means that you will only be able to process the results of the AJAX call if this thread is not currently busy doing something else. And it also means that even though the callback is executed asynchronously, it will block the thread, and if what you process in there takes too long, the UI will become unresponsive.

unresponsive javascript

This all works with a queue. Every piece of JavaScript code is executed in the single thread. If other pieces of code have to be executed during this time, they are queued and executed once the thread is free. This affects AJAX callbacks, event listeners and code scheduled for execution using setTimeout.
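
Here is a minimal sketch of this behavior: even with a delay of 0 milliseconds, the scheduled function is only executed once the currently running code has finished:

console.log('first');
setTimeout(function() {
    // queued: runs only once the thread is free again
    console.log('third');
}, 0);
console.log('second');
// console output: first, second, third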

So what do we do if we need to process large amounts of data or perform some CPU-intensive processing and still do not want the UI to freeze? There are basically 3 types of solutions.

Perform heavy processing on the server

Since you can call the server and do something else while the browser waits for a response, the obvious choice is to move as much of the heavy processing to the server. The programming language you use for your server-side logic is probably able to perform this processing just as well as JavaScript would anyway. This is the solution you should use in all scenarios where it is possible. Unfortunately there are a few classes of problems which cannot be solved by moving processing from the client to the server:

  1. You are in an environment where the server has limited resources and you want to take advantage of client resources.
  2. Moving data to the server for processing and back to the client is prohibitively expensive.
  3. The server cannot perform this kind of processing because you need access to something which is only available on the client.

The use case I just had is related to point 3. My heavy processing involved DOM manipulation, cloning many HTML elements on the fly and setting many listeners. This is typically something you cannot do on the server because it has no access to the DOM on the client.

Using Web Workers

Web Workers allow you to run scripts in background threads. They run in an isolated OS-level thread. Isolated means here that they can only communicate with our single thread by passing messages containing serialized objects.

In order to avoid concurrency problems, the Web Workers have no access to non-thread safe components or the DOM.

I’ll only show a simple example of how to work with dedicated Web Workers. For more information, just ask Google.

Creating a web worker is very easy:

var worker = new Worker('myscript.js');

Normally once the worker is created, you will start it by posting a message to it:

worker.postMessage('start');

postMessage() is the mechanism used to communicate from your code to the worker. You can use it to send data to your worker:

worker.postMessage('How are you?');

For the communication from the worker back to your code, you have to register a listener (callback) which will be called when the worker posts a message:

worker.addEventListener('message', function(e) {
    console.log('Got a message from the worker: ', e.data);
}, false);

From the worker side, it works the same way. The worker can trigger your listener by using postMessage and receive the messages you’ve sent by registering a listener.

Of course, in most cases the data posted in the message will not be simple strings like “How are you?” but structured JavaScript objects (using a Command pattern).
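
As a sketch of what this could look like (the command names are made up), the worker side in myscript.js could be:

// myscript.js: this code runs in the worker's background thread
self.addEventListener('message', function(e) {
    var command = e.data;
    if (command.name === 'computeSum') {
        var sum = 0;
        for (var i = 0; i < command.values.length; i++) {
            sum += command.values[i];
        }
        // send the result back to the main thread
        self.postMessage({ name: 'sumComputed', result: sum });
    }
}, false);

The main thread would then trigger it with something like worker.postMessage({ name: 'computeSum', values: [1, 2, 3] });.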

So this is all great, but why did I mention that there is a third possibility if Web Workers provide all I need? Because in my scenario, it was not possible to use them. So when does that happen?

  1. First, Web Workers are relatively new and not supported by all browsers out there: Internet Explorer 8 and 9 do not support them. Unfortunately, there are still a few companies out there which haven’t moved to newer and better browsers. So in some intranet scenarios, using Web Workers might not be an option.
  2. Second, and this is the reason why I couldn’t use them, Web Workers work in an isolated way and cannot perform any DOM manipulation. If your heavy processing is exactly that, then they won’t help you.

Splitting your processing in batches

In cases where you can neither move the processing to the server nor use Web Workers, you’ll have to split your processing into small batches and push each of these batches to the processing queue. Each batch should be fast enough that it doesn’t block the single JavaScript thread for a noticeable time. This means that having a batch take 300 milliseconds or more is not an option. Actually, I try to keep the processing time for a batch around 50 milliseconds.

In my scenario, I had two processing steps which were both long running. Gathering a list for the first step didn’t take long, so this could be done immediately and the list for the second step was determined by the first processing step.

So these are the steps to be performed:

  1. Fetch data with an AJAX call to the server
  2. Process the data in a first step and gather a list for the second step
  3. Process the list in a second step

So the code for the first step looks like this:


function startProcessing() {
    if (window.firstStepAjax != null) {
        window.firstStepAjax.abort();
    }
    window.firstStepAjax = $.ajax({
        type: "POST",
        url: "xxx",
    }).done(function(data) {
        window.firstStepAjax = null;
        window.firstStepInput = data.GetObjectResult;
        if (window.firstStepTimer != null) {
            clearTimeout(window.firstStepTimer);
        }
        window.firstStepTimer = setTimeout(processFirstStep, 0);
    });
    return false;
}

First we abort any existing AJAX call. If there is already a call going on, it means that the user changed some parameters before the call finished and we can discard the previous one.
Then we make the AJAX call and register a callback. In the callback, we reset the reference to the AJAX call and get the data. The next step is to trigger the first processing step asynchronously, after canceling any step which might still be ongoing. We use a timeout of 0 milliseconds to start the processing as soon as this queue entry is reached. processFirstStep is the function performing the first processing step:

function processFirstStep() {
    for (var i = 0; i < 5; i++) {
        var data = window.firstStepInput.shift();
        if (data != null) {
            //process this piece of data
            //and gather a list of data to be processed in window.secondStepInput
        }
    }
    if (window.firstStepInput.length == 0) {
        if (window.secondStepTimer != -1) {
            clearTimeout(window.secondStepTimer);
        }
        window.secondStepTimer = setTimeout(processSecondStep, 0);
        window.firstStepTimer = null;
    } else {
        window.firstStepTimer = setTimeout(processFirstStep, 0);
    }
    return false;
}

Since I know that it takes around 7 milliseconds to process one entry in window.firstStepInput, I process 5 entries at a time, so that I only block the single JavaScript thread for about 35 milliseconds per step. Once the 5 entries are processed (if fewer than 5 entries are available, data will be null and nothing will happen), I check whether I’m done. If not, the next batch is queued using setTimeout. Otherwise, I stop a possible second step processing still running and trigger the second step processing.

The second step processing basically works the same way:

function processSecondStep() {
    for (var i = 0; i < 5; i++) {
        var data = window.secondStepInput.pop();
        if (data != null) {
            //do some processing
        }
    }
    if (window.secondStepInput.length == 0) {
        window.secondStepTimer = -1;
    } else {
        window.secondStepTimer = setTimeout(processSecondStep, 0);
    }
}

So using this mechanism, you can perform multiple steps of long running processing and abort each of them by using abort for AJAX calls and clearTimeout for functions.

Of course, you can only use this technique if you are able to split your processing into batches small enough that the JavaScript thread is only blocked for a short time.

Conclusion

So depending on what kind of scenario you are handling and what’s required for your processing, you’ll have to choose between the three methods shown above:

  1. Move the processing to the server if possible
  2. Use Web Workers if you do not need to support Internet Explorer 9 or lower and your processing does not require DOM manipulation.
  3. Use batches triggered by setTimeout if you can split your processing into small batches.

If none of them apply, well I guess you’re doing something wrong ;-)

Analyze your code using NDepend – Part 1

Whether you have to get on board an existing .NET project or you want to get an overview of your own project and use a more holistic approach to code improvement, you will quickly reach the limits of the tools provided by default in Visual Studio. Fortunately, Visual Studio (like most major development environments) does provide a good way to extend it, and over time quite a few third-party tool vendors have started providing additional functionality for your favorite IDE.

One such tool is NDepend. The goal of NDepend is first to provide you with means to browse and understand your code using a better visualization than what’s available by default in Visual Studio. But NDepend allows you to go further than that. Let’s have a look at how you can improve your code and manage code improvement using this tool.

Disclaimer: I was asked by the creator of NDepend whether I’d be interested in test driving it and sharing my experience on my blog. But the reason why I accepted writing this article is that NDepend has helped me improve the code quality in one of my projects and I feel this is a tool with a lot of potential. I do not work for NDepend, nor do I get any financial compensation from them (except for a free license for the tool).

Download and Installation

You can download a trial version of NDepend on their web site. The installation process is pretty straightforward. You download a ZIP file, extract it to some directory and start the installation executable. If you have a license, you just need to save the license file to the same directory where you’ve extracted the ZIP file.

During the installation process, the installer will identify which Visual Studio versions are installed (it supports VS 2008 to VS 2013) and allow you to install NDepend as an add-in. One thing I really liked was that I was able to install and activate it (over the menu Tools | Add-in Manager) without needing to restart Visual Studio.

Once you’ve activated NDepend, a new menu item will be available in your menu bar. A good place to start is the Dashboard.

Dashboard

The Dashboard shows you an overview of all relevant metrics related to your project as well as their evolution over time:

NDepend Dashboard

It shows some general project metrics like:

  • Number of lines of code
  • Number of types
  • Percentage of lines of comment
  • Max and average Method Complexity
  • Code Coverage
  • Usage of assemblies

But it also shows some metrics related to coding rules. These code rule violations are clustered into critical and non-critical rules. All these metrics are available either as a view of the current status or as a graph showing you the evolution over time. Very often, especially when using NDepend on a project which has already been around for some time, it is not possible to fix all violations at once, and it’s important to be able to see whether we’re slowly reducing the number of violations or whether we have a negative dynamic and the number of violations is actually increasing.

All diagrams can be exported to HTML. This makes it easy to incorporate them in external reports about your project. If, like me, you’re stuck with an older version of Internet Explorer, you might get an almost empty display when looking at the exported web page. You then just have to open it in another browser. It’d be nice if it also worked in IE8 but let’s face it, web technologies keep evolving at a high pace and you can’t really expect everything to work in a browser which is already more than 5 years old…

Code Rules

One of the most valuable features in NDepend is that all code rules are based on queries on a model. This means that you can adapt them as needed, e.g. to change a threshold or consider additional criteria. So you can adapt existing rules (the ones provided with NDepend) but also add your own rule groups and queries. Of course, that’s something you will only be able to use once you’ve invested enough time in learning how NDepend works. But modifying an existing rule is very easy.

Just as an example: there is a rule called “Methods too big”. It basically scans for methods with more than 30 lines of code. Let’s say you decide that in your project, it’s fine to have methods of up to 40 lines of code. You can just click on one of the links in the “Code Rules” widget of the Dashboard:

NDepend Dashboard Code Rules

It will open the Queries and Rules Explorer. On the left hand side, you’ll see all rule groups:

NDepend Dashboard Code Rules Groups

There you can also create some groups or delete them. You also immediately see whether any of the rules in a group returned a warning or not. When you click on one of the groups, you’ll see all related queries on the right hand side:

NDepend Dashboard Code Rules queries

Queries can be activated and deactivated. And you can open the queries and rule editor by clicking on one of the queries.

The editor is a split pane with the query at the top. In the case of “Methods too big”, you will see the following code:

// <Name>Methods too big</Name>
warnif count > 0 from m in JustMyCode.Methods where 
   m.NbLinesOfCode > 30
   // We've commented # IL Instructions, because with LINQ syntax, a few lines of code can compile to hundreds of IL instructions.
   // || m.NbILInstructions > 200
   orderby m.NbLinesOfCode descending,
           m.NbILInstructions descending
select new { m, m.NbLinesOfCode, m.NbILInstructions }

// Methods where NbLinesOfCode > 30 or NbILInstructions > 200
// are extremely complex and should be split in smaller methods.
// See the definition of the NbLinesOfCode metric here 
// http://www.ndepend.com/Metrics.aspx#NbLinesOfCode

And below you will see where a match was found. You will notice that when the violation occurs in an auto-property setter or getter, it is not possible to jump to this line of code by clicking on the link. When asked, the NDepend support answered that the problem is that the PDB file doesn’t provide this information, so NDepend doesn’t know about the location. Though I hope they find a solution for this in the future, you can work around it by clicking on the type name and navigating to the appropriate location. Since none of us should have types with thousands of lines of code, it is no big deal, right? ;-)

All queries I’ve looked into seemed to be very well commented. This made it easy to understand what the rule is about and how to modify it to comply with one’s coding rules.

So in our example, in order to change the threshold for the “Methods too big” query, all you need to do is replace 30 by e.g. 40 and save. Additionally, you can press the “Critical” button to mark this rule as a deal breaker, i.e. when checked during a build, it will return an error code so that you can refuse to build the software if critical violations are detected (of course I doubt you’ll use it with a “Methods too big” violation).
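
The modified rule would then look like this (same query as above, only the threshold changed):

// <Name>Methods too big</Name>
warnif count > 0 from m in JustMyCode.Methods where 
   m.NbLinesOfCode > 40
   orderby m.NbLinesOfCode descending,
           m.NbILInstructions descending
select new { m, m.NbLinesOfCode, m.NbILInstructions }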

The NDepend online documentation provides a lot of useful information about the query language. Since it’s based on the C# LINQ syntax, you’ll need to be comfortable with LINQ in order to start working with custom rules. But I guess most developers and architects working with C# are familiar with LINQ anyway…

Trend charts and Baselines

Besides the ability to customize and extend the NDepend rule set, another thing I found very useful is the ability to create and work with trend charts. The ability to create baselines for comparison is also a nice related feature.

Whenever you start working with static code analysis on a project which has already been around for quite some time, you end up getting a huge list of findings. It’s rarely the case that you can fix them all in the short term. What’s really important is to fix the critical violations, make sure that you do not introduce more violations as your code evolves, and with every new version try to reduce the number of violations (starting with the most important ones).

Trend Charts

In order to create a trend chart, click on the “Create Trend Chart” button at the top of the Dashboard. The following dialog will appear:

NDepend Create Trend Chart

You can give your new trend chart a name, choose which series will be displayed and define how they should look. Once you save, your new trend chart will be displayed in the Dashboard.

A few useful trend charts are already displayed by default in the Dashboard:

  • Lines of Code
  • Rules Violated
  • Rules Violations
  • Percentage Coverage by Tests
  • Maximum complexity, lines of code, number of methods for a type, nesting depth…
  • Average complexity, lines of code, number of methods for a type, nesting depth…
  • Third-Party Usage

Using these trend charts, it’s dead easy to see whether you’re going in the right direction or whether you keep making your software more complex and error-prone.

More about how I use NDepend to analyze and improve my code will come in a follow-up article…


Sybase ASE: Get one line for each value of a column

Let’s assume you have such a table:

CREATE TABLE benohead(SP1 int, SP2 int, SP3 int)

Column SP1 has non-unique values and you want to keep only one row per unique SP1 value.

Assuming we have inserted the following values in the table:

INSERT INTO benohead VALUES(1,2,3)
INSERT INTO benohead VALUES(1,4,5)
INSERT INTO benohead VALUES(1,6,7)
INSERT INTO benohead VALUES(2,3,2)
INSERT INTO benohead VALUES(3,4,6)
INSERT INTO benohead VALUES(3,7,8)
INSERT INTO benohead VALUES(4,1,7)

It’d look like this:

SP1         SP2         SP3
----------- ----------- -----------
          1           2           3
          1           4           5
          1           6           7
          2           3           2
          3           4           6
          3           7           8
          4           1           7

Since SP2 and SP3 can have any value and you could also have rows where all 3 fields have the same value, it’s not so trivial to get a list looking like this:

SP1         SP2         SP3
----------- ----------- -----------
          1           6           7
          2           3           2
          3           7           8
          4           1           7

Even if the table is sorted, iterating through the rows and keeping track of the last SP1 you’ve seen will not help you, since you cannot delete the second row: you do not have anything to identify it (like ROWID in Oracle).

One way to handle it is getting a list of unique SP1 values and their row count:

SELECT SP1, count(*) as rcount FROM benohead GROUP BY SP1

This will return something like this:

SP1         rcount
----------- -----------
          1           3
          2           1
          3           2
          4           1

You can then iterate through this list and, for each value of SP1, set rowcount to rcount-1 and delete the entries with that SP1 value. In the end, you’ll have one row per SP1 value. Of course, if you just need the data and do not want to actually clean up the table, you’ll have to do it on a copy of the table.
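
Here is a minimal sketch of this approach in Transact-SQL (note that in ASE the cursor declaration usually has to be in its own batch; this assumes you work on a copy of the table, since rows are actually deleted):

-- get each SP1 value along with the number of extra rows to delete
DECLARE dup_cursor CURSOR FOR
    SELECT SP1, count(*) - 1 FROM benohead GROUP BY SP1 HAVING count(*) > 1

DECLARE @sp1 int, @extra int
OPEN dup_cursor
FETCH dup_cursor INTO @sp1, @extra
WHILE @@sqlstatus = 0
BEGIN
    -- limit the delete to all but one of the rows with this SP1 value
    SET ROWCOUNT @extra
    DELETE FROM benohead WHERE SP1 = @sp1
    SET ROWCOUNT 0
    FETCH dup_cursor INTO @sp1, @extra
END
CLOSE dup_cursor
DEALLOCATE CURSOR dup_cursor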

Instead of deleting, you can also iterate through the values of SP1 and fetch the top 1 row for this value:

SELECT TOP 1 SP1, SP2, SP3 FROM benohead WHERE SP1=1

If you had only one additional column (e.g. SP2), it’d be even easier, since you could just use MAX and GROUP BY:

SELECT SP1, MAX(SP2) AS SP2 FROM benohead GROUP BY SP1

which returns:

SP1         SP2
----------- -----------
          1           6
          2           3
          3           7
          4           1

Unfortunately, this doesn’t scale to multiple columns. If you also have SP3, you cannot use MAX twice, since you would then get combinations which didn’t exist in the original table. Let’s insert an additional row:

INSERT INTO benohead VALUES(1,1,9)

The following statement:

SELECT SP1, MAX(SP2) AS SP2, MAX(SP3) AS SP3 FROM benohead GROUP BY SP1

will return:

 SP1         SP2         SP3
 ----------- ----------- -----------
           1           6           9
           2           3           2
           3           7           8
           4           1           7

Although we had no row with SP1=1, SP2=6 and SP3=9.

So if you don’t like the solution of iterating and deleting with rowcount, you’ll need to introduce a way to uniquely identify each row: an identity column.

You can add an identity column to the table:

ALTER TABLE benohead ADD ID int identity

And then select the required rows like this:

SELECT * from benohead b WHERE b.ID = (SELECT MAX(ID) FROM benohead b2 WHERE b.SP1=b2.SP1)

This will fetch for each value of SP1 the row with the highest ID.

Or you can create a temporary table with an identity column:

SELECT ID=identity(1), SP1, SP2, SP3 INTO #benohead FROM benohead

And then use a similar statement on the temporary table.
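
For reference, the statement on the temporary table would look like this:

SELECT SP1, SP2, SP3 FROM #benohead b WHERE b.ID = (SELECT MAX(ID) FROM #benohead b2 WHERE b.SP1=b2.SP1)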


JavaScript: Tracing performance to the console

I wanted to trace the performance of Ajax calls in a project I’m working on. The obvious place to store this kind of information is the developer console. If your browser supports debugging, you can write to the console object and view the output in the browser. You should be able to open the console in most browsers by pressing F12 or Ctrl-Shift-I.

Elapsed time

You’ve probably already used console.log or console.debug but the console global JavaScript object does provide many more functions. Two of them which are useful in our use case are:

  1. console.time()
  2. console.timeEnd()

They both take a timer name. When time() is called, a timer with this name is started. When timeEnd() is called, the specified timer is stopped and the elapsed time since its start, in milliseconds, is logged to the console at the Info level. Here is what the output looks like in Firefox:

console time

Your code would look like this:

console.time("refreshGrid");
// refresh the grid
console.timeEnd("refreshGrid");
console.time("fillGrid");
// fill the grid
console.timeEnd("fillGrid");

You can also have an outer timer logging the total time an operation lasts and some inner timers logging the time for individual steps:

console.time("total");

console.time("refreshGrid");
// refresh the grid
console.timeEnd("refreshGrid");

console.time("fillGrid");
// fill the grid
console.timeEnd("fillGrid");

console.timeEnd("total");

Timestamps

Another function of the console global JavaScript object which you could use is console.timeStamp(), e.g.:

console.timeStamp("started");
// fill the grid
console.timeStamp("ended");

This will output the following to the console:

console timeStamp

If you are using Firefox, please note that the Firefox Web Console doesn’t support console.timeStamp(), but Firebug does. So if you want to use it, you’ll need to install Firebug, which is an awesome plugin anyway.

Check for existence

Another thing you have to keep in mind is that although these console functions are pretty well supported by most popular browsers, they do not seem to be part of any specification and you might find some browsers (e.g. mobile browsers) which do not support them. Calling these functions in such a browser will cause your JavaScript code to fail. So you should either make sure that you only use this in your development environment and do not push it to your production site, or at least check whether the console object and its functions exist:

if (window.console && console.time) { console.time("test"); }
if (window.console && console.timeEnd) { console.timeEnd("test"); }
if (window.console && console.timeStamp) { console.timeStamp("test"); }

Internet Explorer 8 doesn’t support the console global JavaScript object. Internet Explorer 11 does. It looks like Internet Explorer 10 supports it too, but I couldn’t check it.

Network calls

If all you’re after is to check how long your Ajax calls to the server last, you can just use the network call analyzer available in the developer tools of your favorite browser.

In Chrome it looks like this:

Network tab Chrome

In Firebug (Firefox plugin):

Network tab Firebug

Note that in Firefox you can get the same information without Firebug, in the Web Console:

Network tab Firefox

It’s still a good idea to use Firebug which is a great plugin. But you don’t need it if all you’re after is this type of information.


Generate icons for checkboxes

Instead of using the standard square check boxes, you can also set your own images using the three following methods of JCheckBox:

  • setIcon to set the default icon.
  • setSelectedIcon to set the icon displayed when the box is checked.
  • setDisabledIcon to set the icon used when the box is disabled.

Now I mostly need the same image as the default image for the selected and disabled icons, with either an outline when the box is selected or a grayed-out version when it is disabled.
So that I don’t have to set those three icons manually each time, I’ve subclassed JCheckBox so that setIcon sets all three icons:

public void setIcon(Icon defaultIcon) {
	super.setIcon(defaultIcon);
	// New icon should have the same size
	int height = defaultIcon.getIconHeight();
	int width = defaultIcon.getIconWidth();
	// Get an image with the icon
	BufferedImage image = new BufferedImage(width, height,
			BufferedImage.TYPE_INT_ARGB);
	Graphics2D g2 = image.createGraphics();
	// First paint the icon
	defaultIcon.paintIcon(this, g2, 0, 0);
	// Buffer for the new image
	BufferedImage selected = new BufferedImage(width, height,
			BufferedImage.TYPE_INT_ARGB);
	g2 = selected.createGraphics();
	// Draw the original icon
	g2.drawImage(image, null, 0, 0);
	// Create the stroke for the outline
	g2.setColor(UIManager.getColor("CheckBox.outlineColor"));
	int strokeSize = (int) (.15 * width);
	g2.setStroke(new BasicStroke(strokeSize, BasicStroke.CAP_ROUND,
			BasicStroke.JOIN_ROUND));
	// Then draw the outline
	g2.drawRoundRect(0, 0, width, height, height, height);
	// And create an ImageIcon to use as selected icon
	setSelectedIcon(new ImageIcon(selected));
	// For the disabled icon we just apply a gray filter
	ImageFilter filter = new GrayFilter(false, 0);
	// Apply the filter to the original image
	Image disabled = createImage(new FilteredImageSource(image.getSource(),
			filter));
	// And create an ImageIcon to use as disabled icon
	setDisabledIcon(new ImageIcon(disabled));
}

Now I just need to set one of them and the others are created automatically. The result looks like this:

checkboxes

Of course the outline shouldn’t be so thick and the icon should already provide space for the outline.
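
As a usage sketch, assuming the subclass overriding setIcon is called IconCheckBox (a made-up name) and the icon file exists:

// setting the default icon automatically derives the selected and disabled icons
IconCheckBox box = new IconCheckBox();
box.setText("Enable feature");
box.setIcon(new ImageIcon("feature.png"));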

Java: Vertical Label in Swing

This is an article I published over 8 years ago on jroller. I’ve now republished it on my current blog because I was asked to clarify the license terms (see next paragraph) and I found it better to do it here than to update a post on a blog I’m not maintaining anymore.

License clarification: You can use the code of any article on this blog as you wish. I’ve just published this code in the hope to be helpful and do not expect anything in return. So from my point of view you can consider it being released under the WTFPL license terms.

Today, let’s see how to implement a vertical label in Swing. This component should extend JLabel and provide the possibility to rotate 90° to the right or to the left.

Like a normal JLabel, it should work whether it contains an icon, a text or both.

Here’s a screenshot of how it should look:

vertical label

The label with a rotation to the left, no rotation and a rotation to the right.

So that the label is painted according to the current look and feel, I didn’t want to paint it myself but to delegate the painting and rotate the result.

In order to do this, I had to rotate and translate the graphics object. The remaining problem is that the UI delegate uses some methods of the label to get the size of the component and the insets. So we have to trick the UI delegate into thinking that the component is horizontal and not vertical:

  public Insets getInsets(Insets insets) {
        insets = super.getInsets(insets);
        if (painting) {
            if (rotation == ROTATE_LEFT) {
                int temp = insets.bottom;
                insets.bottom = insets.left;
                insets.left = insets.top;
                insets.top = insets.right;
                insets.right = temp;
            }
            else if (rotation == ROTATE_RIGHT) {
                int temp = insets.bottom;
                insets.bottom = insets.right;
                insets.right = insets.top;
                insets.top = insets.left;
                insets.left = temp;
            }
        }
        return insets;
    }
    public Insets getInsets() {
        Insets insets = super.getInsets();
        if (painting) {
            if (rotation == ROTATE_LEFT) {
                int temp = insets.bottom;
                insets.bottom = insets.left;
                insets.left = insets.top;
                insets.top = insets.right;
                insets.right = temp;
            }
            else if (rotation == ROTATE_RIGHT) {
                int temp = insets.bottom;
                insets.bottom = insets.right;
                insets.right = insets.top;
                insets.top = insets.left;
                insets.left = temp;
            }
        }
        return insets;
    }
    public int getWidth() {
        if ((painting) && (isRotated()))
            return super.getHeight();
        return super.getWidth();
    }
    public int getHeight() {
        if ((painting) && (isRotated()))
            return super.getWidth();
        return super.getHeight();
    }

The painting variable is set in the paintComponent method just before calling the method of the super class:

protected void paintComponent(Graphics g) {
	Graphics2D g2d = (Graphics2D) g;

	if (isRotated())
		g2d.rotate(Math.toRadians(90 * rotation));
	if (rotation == ROTATE_RIGHT)
		g2d.translate(0, -this.getWidth());
	else if (rotation == ROTATE_LEFT)
		g2d.translate(-this.getHeight(), 0);
	painting = true;

	super.paintComponent(g2d);

	painting = false;
	if (isRotated())
		g2d.rotate(-Math.toRadians(90 * rotation));
	if (rotation == ROTATE_RIGHT)
		g2d.translate(-this.getWidth(), 0);
	else if (rotation == ROTATE_LEFT)
		g2d.translate(0, -this.getHeight());
}

Now one remaining problem is that a normal label uses the icon and the text to compute its preferred, minimum and maximum sizes, assuming it’s laid out horizontally. The layout managers use these methods to lay out the components, so we need to return sizes based on a vertical layout. Since we do not want to compute the sizes ourselves, we let the super class compute them and switch height and width (when there’s a rotation):

public Dimension getPreferredSize() {
	Dimension d = super.getPreferredSize();
	if (isRotated()) {
		int width = d.width;
		d.width = d.height;
		d.height = width;
	}
	return d;
}

public Dimension getMinimumSize() {
	Dimension d = super.getMinimumSize();
	if (isRotated()) {
		int width = d.width;
		d.width = d.height;
		d.height = width;
	}
	return d;
}

public Dimension getMaximumSize() {
	Dimension d = super.getMaximumSize();
	if (isRotated()) {
		int width = d.width;
		d.width = d.height + 10;
		d.height = width + 10;
	}
	return d;
}

That’s it, now our component behaves exactly like a JLabel except that it can be rotated to the left or to the right. Here’s the whole source code for your reference:

package org.jroller.henribenoit.swing;

import java.awt.Dimension;
import java.awt.FlowLayout;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.Insets;

import javax.swing.Icon;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.UIManager;

public class VerticalLabel extends JLabel {
    public final static int ROTATE_RIGHT = 1;

    public final static int DONT_ROTATE = 0;

    public final static int ROTATE_LEFT = -1;

    private int rotation = DONT_ROTATE;

    private boolean painting = false;

    public VerticalLabel() {
        super();
    }

    public VerticalLabel(Icon image, int horizontalAlignment) {
        super(image, horizontalAlignment);
    }

    public VerticalLabel(Icon image) {
        super(image);
    }

    public VerticalLabel(String text, Icon icon, int horizontalAlignment) {
        super(text, icon, horizontalAlignment);
    }

    public VerticalLabel(String text, int horizontalAlignment) {
        super(text, horizontalAlignment);
    }

    public VerticalLabel(String text) {
        super(text);
    }

    public int getRotation() {
        return rotation;
    }

    public void setRotation(int rotation) {
        this.rotation = rotation;
    }

    public boolean isRotated() {
        return rotation != DONT_ROTATE;
    }

    protected void paintComponent(Graphics g) {
        Graphics2D g2d = (Graphics2D) g;

        if (isRotated())
            g2d.rotate(Math.toRadians(90 * rotation));
        if (rotation == ROTATE_RIGHT)
            g2d.translate(0, -this.getWidth());
        else if (rotation == ROTATE_LEFT)
            g2d.translate(-this.getHeight(), 0);
        painting = true;

        super.paintComponent(g2d);

        painting = false;
        if (isRotated())
            g2d.rotate(-Math.toRadians(90 * rotation));
        if (rotation == ROTATE_RIGHT)
            g2d.translate(-this.getWidth(), 0);
        else if (rotation == ROTATE_LEFT)
            g2d.translate(0, -this.getHeight());
    }

    public Insets getInsets(Insets insets) {
        insets = super.getInsets(insets);
        if (painting) {
            if (rotation == ROTATE_LEFT) {
                int temp = insets.bottom;
                insets.bottom = insets.left;
                insets.left = insets.top;
                insets.top = insets.right;
                insets.right = temp;
            }
            else if (rotation == ROTATE_RIGHT) {
                int temp = insets.bottom;
                insets.bottom = insets.right;
                insets.right = insets.top;
                insets.top = insets.left;
                insets.left = temp;
            }
        }
        return insets;
    }

    public Insets getInsets() {
        Insets insets = super.getInsets();
        if (painting) {
            if (rotation == ROTATE_LEFT) {
                int temp = insets.bottom;
                insets.bottom = insets.left;
                insets.left = insets.top;
                insets.top = insets.right;
                insets.right = temp;
            }
            else if (rotation == ROTATE_RIGHT) {
                int temp = insets.bottom;
                insets.bottom = insets.right;
                insets.right = insets.top;
                insets.top = insets.left;
                insets.left = temp;
            }
        }
        return insets;
    }

    public int getWidth() {
        if ((painting) && (isRotated()))
            return super.getHeight();
        return super.getWidth();
    }

    public int getHeight() {
        if ((painting) && (isRotated()))
            return super.getWidth();
        return super.getHeight();
    }

    public Dimension getPreferredSize() {
        Dimension d = super.getPreferredSize();
        if (isRotated()) {
            int width = d.width;
            d.width = d.height;
            d.height = width;
        }
        return d;
    }

    public Dimension getMinimumSize() {
        Dimension d = super.getMinimumSize();
        if (isRotated()) {
            int width = d.width;
            d.width = d.height;
            d.height = width;
        }
        return d;
    }

    public Dimension getMaximumSize() {
        Dimension d = super.getMaximumSize();
        if (isRotated()) {
            int width = d.width;
            d.width = d.height + 10;
            d.height = width + 10;
        }
        return d;
    }

    /**
     * @param args
     */
    public static void main(String[] args) {
        try {
            UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
        }
        catch (Exception e) {
            e.printStackTrace();
        }
        final JFrame frame = new JFrame();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.getContentPane().setLayout(new FlowLayout());
        VerticalLabel label = new VerticalLabel("Testing something");
        VerticalLabel label2 = new VerticalLabel("Testing something");
        VerticalLabel label3 = new VerticalLabel("Testing something");
        label.setIcon(new ImageIcon("shortcut.png"));
        label2.setIcon(new ImageIcon("shortcut.png"));
        label3.setIcon(new ImageIcon("shortcut.png"));
        label.setRotation(VerticalLabel.ROTATE_LEFT);
        label2.setRotation(VerticalLabel.DONT_ROTATE);
        label3.setRotation(VerticalLabel.ROTATE_RIGHT);
        frame.getContentPane().add(label);
        frame.getContentPane().add(label2);
        frame.getContentPane().add(label3);
        frame.pack();
        frame.setVisible(true);
    }
}


Sybase ASE Cookbook Update

I wrote this cookbook about 10 months ago. It basically contains all the information about Sybase ASE I’ve documented on my blog over the years. I use it when I need offline access to some tricks (since I’m not getting younger, it’s sometimes useful to have a kind of brain dump somewhere). I also compiled it and published it here in the hope that someone else might find it useful.

I’ve just updated the cookbook with a few new things. But it’s really a small update.

This cookbook is still available for free. And since I am still no professional writer and still cannot afford to pay someone to proofread it, if you notice any mistakes, explanations which cannot be understood or anything like that, please leave a comment here or contact me at henri.benoit@gmail.com. I can’t guarantee how fast I can fix mistakes but I’ll do my best to do it in a timely manner.

Benohead Sybase ASE Cookbook