PowerShell: Check whether a file is locked

When opening a StreamReader for a file in PowerShell, you will get an exception if the file is locked:

The process cannot access the file ‘xxx’ because it is being used by another process.

You can use this behavior to check whether a file is locked. You just need to set up a trap which reports that the file is locked if an exception occurs, and then open the file:

trap {
	Write-Output "$filename is locked"
	continue
}
$stream = New-Object System.IO.StreamReader $filename
if ($stream) {$stream.Close()}

Of course, you might also get an exception if the file doesn’t exist, so non-existing files will be reported as locked. So before checking the file, we need to make sure the file exists:

$file = gi (Resolve-Path $filename) -Force
if ($file -is [IO.FileInfo]) {
	trap {
		Write-Output "$filename is locked"
		continue
	}
	$stream = New-Object System.IO.StreamReader $file
	if ($stream) {$stream.Close()}
}

So if the file doesn’t exist, you will get an exception before the trap is defined, and the trap will only be activated if the file cannot be opened. Here’s the output when the file doesn’t exist:

Resolve-Path : Cannot find path 'D:\Temp\AuditAnalysis\test.xlsx2' because it does not exist.
At D:\Temp\AuditAnalysis\test_lock.ps1:13 char:25
+ $file = gi (Resolve-Path <<<< $filename) -Force
+ CategoryInfo : ObjectNotFound: (D:\Temp\AuditAnalysis\test.xlsx2:String) [Resolve-Path], ItemNotFoundException
+ FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.ResolvePathCommand

Get-Item : Cannot bind argument to parameter ‘Path’ because it is null.
At D:\Temp\AuditAnalysis\test_lock.ps1:13 char:11
+ $file = gi <<<< (Resolve-Path $filename) -Force
+ CategoryInfo : InvalidData: (:) [Get-Item], ParameterBindingValidationException
+ FullyQualifiedErrorId : ParameterArgumentValidationErrorNullNotAllowed,Microsoft.PowerShell.Commands.GetItemCommand

Here’s the full code with parameter handling:

############################################################
#                      Test file lock
#                      by Henri Benoit
############################################################

Param(
    [parameter(Mandatory=$true)]
    $filename
)

Write-Output "Checking lock on file: $filename"

$file = gi (Resolve-Path $filename) -Force
if ($file -is [IO.FileInfo]) {
	trap {
		Write-Output "$filename is locked"
		continue
	}
	$stream = New-Object System.IO.StreamReader $file
	if ($stream) {$stream.Close()}
}
exit

If the file is locked it will display the following:

Checking lock on file: test.xlsx
test.xlsx is locked

If it is not locked, only the first line will be displayed:

Checking lock on file: test.xlsx

If you want to check whether a file is locked from within your PowerShell code, you can wrap this logic in a function and, instead of writing something to the output, use Set-Variable in the trap to set a variable to true and return that variable.
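For example, a function version could look like this (a sketch; Test-FileLock is a hypothetical name, and the Set-Variable call assumes the trap's parent scope is the function scope):

```powershell
function Test-FileLock {
	Param(
		[parameter(Mandatory=$true)]
		$filename
	)
	$locked = $false
	$file = Get-Item (Resolve-Path $filename) -Force
	if ($file -is [IO.FileInfo]) {
		trap {
			# Set the variable in the enclosing scope instead of writing output
			Set-Variable -Name locked -Value $true -Scope 1
			continue
		}
		$stream = New-Object System.IO.StreamReader $file
		if ($stream) { $stream.Close() }
	}
	return $locked
}
```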

If you actually do not care whether the file is locked but just want to read its contents no matter what, you could use this check and, if the file is locked, copy the file and read the copy instead.

Of course, you might also want to define an additional function that checks whether the file exists, by trapping the error raised by Resolve-Path.

PHP: composer install or update causes a PHP Fatal error

When running composer install or update I was getting a PHP fatal error because PHP was using more than 512MB:

$ composer install
Loading composer repositories with package information
Installing dependencies (including require-dev)
PHP Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 72 bytes) in phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/src/Composer/DependencyResolver/RuleSetGenerator.php on line 123
PHP Stack trace:
PHP 1. {main}() /usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar:0
PHP 2. require() /usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar:14
PHP 3. Composer\Console\Application->run() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/bin/composer:43
PHP 4. Symfony\Component\Console\Application->run() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/src/Composer/Console/Application.php:83
PHP 5. Composer\Console\Application->doRun() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/vendor/symfony/console/Symfony/Component/Console/Application.php:121
PHP 6. Symfony\Component\Console\Application->doRun() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/src/Composer/Console/Application.php:117
PHP 7. Symfony\Component\Console\Application->doRunCommand() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/vendor/symfony/console/Symfony/Component/Console/Application.php:191
PHP 8. Symfony\Component\Console\Command\Command->run() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/vendor/symfony/console/Symfony/Component/Console/Application.php:881
PHP 9. Composer\Command\InstallCommand->execute() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/vendor/symfony/console/Symfony/Component/Console/Command/Command.php:241
PHP 10. Composer\Installer->run() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/src/Composer/Command/InstallCommand.php:110
PHP 11. Composer\Installer->doInstall() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/src/Composer/Installer.php:210
PHP 12. Composer\DependencyResolver\Solver->solve() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/src/Composer/Installer.php:449
PHP 13. Composer\DependencyResolver\RuleSetGenerator->getRulesFor() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/src/Composer/DependencyResolver/Solver.php:166
PHP 14. Composer\DependencyResolver\RuleSetGenerator->addRulesForUpdatePackages() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/src/Composer/DependencyResolver/RuleSetGenerator.php:275
PHP 15. Composer\DependencyResolver\RuleSetGenerator->addRulesForPackage() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/src/Composer/DependencyResolver/RuleSetGenerator.php:235
PHP 16. Composer\DependencyResolver\RuleSetGenerator->createConflictRule() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/src/Composer/DependencyResolver/RuleSetGenerator.php:204

Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 72 bytes) in phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/src/Composer/DependencyResolver/RuleSetGenerator.php on line 123

Call Stack:
0.0100 385808 1. {main}() /usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar:0
0.0103 387560 2. require(‘phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/bin/composer’) /usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar:14
0.0308 2886688 3. Composer\Console\Application->run() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/bin/composer:43
0.0334 3185464 4. Symfony\Component\Console\Application->run() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/src/Composer/Console/Application.php:83
0.0345 3310400 5. Composer\Console\Application->doRun() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/vendor/symfony/console/Symfony/Component/Console/Application.php:121
0.0354 3389704 6. Symfony\Component\Console\Application->doRun() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/src/Composer/Console/Application.php:117
0.0355 3390560 7. Symfony\Component\Console\Application->doRunCommand() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/vendor/symfony/console/Symfony/Component/Console/Application.php:191
0.0356 3390896 8. Symfony\Component\Console\Command\Command->run() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/vendor/symfony/console/Symfony/Component/Console/Application.php:881
0.0360 3394928 9. Composer\Command\InstallCommand->execute() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/vendor/symfony/console/Symfony/Component/Console/Command/Command.php:241
0.1072 6425464 10. Composer\Installer->run() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/src/Composer/Command/InstallCommand.php:110
0.1081 6525856 11. Composer\Installer->doInstall() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/src/Composer/Installer.php:210
3.8979 41690568 12. Composer\DependencyResolver\Solver->solve() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/src/Composer/Installer.php:449
3.8985 41756784 13. Composer\DependencyResolver\RuleSetGenerator->getRulesFor() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/src/Composer/DependencyResolver/Solver.php:166
3.9001 41901248 14. Composer\DependencyResolver\RuleSetGenerator->addRulesForUpdatePackages() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/src/Composer/DependencyResolver/RuleSetGenerator.php:275
3.9001 41903408 15. Composer\DependencyResolver\RuleSetGenerator->addRulesForPackage() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/src/Composer/DependencyResolver/RuleSetGenerator.php:235
104.6757 536552128 16. Composer\DependencyResolver\RuleSetGenerator->createConflictRule() phar:///usr/local/Cellar/composer/1.0.0-alpha8/libexec/composer.phar/src/Composer/DependencyResolver/RuleSetGenerator.php:204

This wasn’t caused by an old version of PHP as I was using PHP 5.5.19:

$ php --version
PHP 5.5.19 (cli) (built: Nov 23 2014 20:46:27)
Copyright (c) 1997-2014 The PHP Group
Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies
with Xdebug v2.2.6, Copyright (c) 2002-2014, by Derick Rethans

So increasing the maximum memory limit in php.ini would probably have solved it, but more than 512MB just for a composer install seemed a little excessive. Since my PHP installation was current, I checked composer next:

$ composer --version
Composer version 1.0.0-alpha8 2014-01-06 18:39:59

OK, it wasn’t that old, but I checked anyway whether there was a newer version and found one:

$ composer selfupdate
Updating to version 029f7093009f621bd20aef98c7bfc61631f18cf1.
Downloading: 100%
Use composer self-update --rollback to return to version 1.0.0-alpha8

After updating to alpha9, I was able to run composer install! Sometimes fixing your problem is as easy as making sure you use the latest version of all the software involved!

It looks like the memory usage of composer was improved in the latest update. But since composer still uses a lot of memory (depending on the packages you have configured), you should set a high memory limit in php.ini on your development machine, and when deploying to your production server it is best to use a composer.lock file, especially if you’re on shared hosting.

So you should allocate a lot of memory to PHP on your development machine, run composer update only there, and run only composer install with a composer.lock file on your deployment machine.

Also, I haven’t tried it myself, but it looks like disabling Xdebug helped some people.

JavaScript: variables in asynchronous callback functions

What’s the problem?

First let’s assume you have declared a variable called myVar:

var myVar;

If you immediately log the contents of the variable, it will return undefined.

Now if you define any of the following:

setTimeout(function() { myVar = "some value"; }, 0);

or:

$.ajax({
	url: '...',
	success: function(response) {
		myVar = response;
	}
});

or:

var script = document.createElement("script");
script.type = "text/javascript";
script.src = "myscript.js";
script.onload = function(){
	myVar = "some value";
};
document.body.appendChild(script);

and immediately afterwards display the contents of myVar, you will also get undefined as result.

Why is the variable not set?

In all the examples above, the functions defined are what’s called asynchronous callbacks. This means that they are created immediately but not executed immediately: setTimeout pushes the callback into the event queue, the AJAX call will execute it once the response arrives, and onload will be executed when the DOM element has loaded.

In all three cases, the function does not execute right away but only when it is triggered asynchronously. So when you log or display the contents of the variable, the function has not yet run and the value has not yet been set.

So even though JavaScript works with a single-threaded model, you’ve landed in the asynchronous problem zone.
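You can make this visible with a tiny sketch (runnable in a browser console or Node.js): the callback passed to setTimeout only runs once the current script has finished, even with a delay of 0.

```javascript
var order = [];

order.push("first");

setTimeout(function () {
	// This runs only after the current script has finished,
	// even though the delay is 0 milliseconds
	order.push("third");
}, 0);

order.push("second");

// At this point, order is ["first", "second"];
// "third" is only appended once the event queue is processed
```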

Synchronous vs. Asynchronous

Basically, the difference between a synchronous function call and an asynchronous one can be shown with these two pieces of code:

var myVar;

function doSomething() {
	myVar = "some value";
}

//synchronous call
doSomething();

console.log(myVar);

And:

var myVar;

function doSomething() {
	myVar = "some value";
}

console.log(myVar);

//asynchronous call
doSomething();

In the asynchronous case, our function is also defined before the variable is logged, but it is only called some time later.

How to make it work?

You will need to rewrite your code to use callbacks which are invoked when the processing (in this case, setting the value of myVar) is done.

First example: Instead of using the following:

var myVar;

setTimeout(function() { 
    myVar = "some value"; 
}, 0);

alert(myVar);

You should rather do the following:

var myVar;

function callback() {
    alert(myVar);
}

setTimeout(function() { 
    myVar = "some value"; 
    callback();
}, 0);

Instead of this:

var myVar;

var script = document.createElement("script");
script.type = "text/javascript";
script.src = "d3.min.js";
script.onload = function(){
	myVar = "some value";
};
document.body.appendChild(script);

alert(myVar);

Use this:

function callback() {
    alert("loaded");
}

var script = document.createElement("script");
script.type = "text/javascript";
script.src = "d3.min.js";
script.onload = callback;
document.body.appendChild(script);

And instead of doing the following:

var myVar;

$.ajax({
	url: '...',
	success: function(response) {
		myVar = response;
	}
});

alert(myVar);

Do the following:

function callback(response) {
    alert(response);
}

$.ajax({
	url: '...',
	success: callback
});

Of course, here another solution is to make the AJAX call synchronous by adding the following to the options:

async: false

Also if you use $.get or $.post, you’ll need to rewrite it to use $.ajax instead.

But this is not a good idea. First, if the call takes longer than expected, you will block the UI, making the browser window unresponsive. At some point the browser will even ask the user whether to stop the unresponsive script. So even though programming everything asynchronously with callbacks is not always trivial, doing everything synchronously will cause more sporadic and difficult-to-handle issues.

So to summarize: you should either use a callback or directly call a function after processing, and not rely solely on setting a variable in the asynchronous part.
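A common way to generalize this advice is to let the asynchronous function accept the callback as a parameter (a sketch; fetchValue is a hypothetical name standing in for your setTimeout, AJAX or onload logic):

```javascript
function fetchValue(callback) {
	// Stand-in for setTimeout, an AJAX call or an onload handler
	setTimeout(function () {
		callback("some value");
	}, 0);
}

var myVar;

fetchValue(function (value) {
	myVar = value;
	// Only use myVar here, once the asynchronous work is done
	console.log(myVar);
});
```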

Debian/Ubuntu: The following packages have been kept back

Note that for all commands below, you might need to add a leading sudo depending on your system.

When updating your packages using apt-get, you might get the following message:

# apt-get upgrade
Building Dependency Tree… Done
The following packages have been kept back:
xxxxxx xxxxxx xxxxxx xxxxxx
0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded.

This means that there are packages which could be updated but were not. So what are the reasons why some packages wouldn’t be updated?

First let’s look at what apt-get upgrade does:

apt-get upgrade installs the newest versions of all packages which are already installed on the system from the configured sources. What apt-get upgrade doesn’t do is install additional packages or remove existing ones. The fact that it doesn’t install new packages is the reason why your packages are being kept back. These packages have an updated list of dependencies and apt-get would need to install new packages in order to install the new version of the software.

So what can you do about it? There are basically three ways to handle it.

dist-upgrade

The first one is to use dist-upgrade. apt-get dist-upgrade does basically the same as upgrade, but additionally it removes and installs packages in order to accommodate the updated dependency lists of the newer versions of the installed packages.

When using dist-upgrade, you will be presented with a list of changes to be applied: a list of packages being additionally installed and a list of packages which will be removed.

Pay special attention to the list of packages to be removed. It contains not only packages removed because they are no longer needed, but also packages removed because they are not compatible with the newer versions of the packages you are updating.

On my system, this is very often the case for Plesk, which tends to force me to keep outdated packages a little longer. Forcing an update of these packages would result in Plesk being uninstalled, and recovering from that is not easy… Also, if you install beta versions of some software, you might end up with a long list of packages to be removed because they would no longer be compatible.

Also, you might in some cases have packages required by others but not cleanly listed as a dependency (once had this problem with subversion).

install

If you want to have the required packages installed but do not want any packages removed, you can use apt-get install:

apt-get install <list of packages>

This will first resolve the kept-back dependencies and will offer to install additionally required packages.

aptitude

Start aptitude, select the list of upgradable packages, press “g” twice to install.

Then answer the questions and follow the instructions.

Still problems?

If all of this doesn’t work, you may want to give some of the following a try. These are various things I’ve needed in some cases to get rid of conflicts using apt-get.

Reinstalling the packages kept back:

apt-get install --reinstall <list of packages>

Checking for a given package, the installed and the versions available from the configured sources:

apt-cache policy <package name>

Use aptitude instead of apt-get:

aptitude update && aptitude upgrade

Use aptitude safe-upgrade to upgrade currently installed packages as well as install new packages to resolve new dependencies, but without removing installed packages:

aptitude safe-upgrade


JavaScript: Workaround for Object.defineProperty in IE8

Even though Internet Explorer 8 is now very old and outdated, there are unfortunately still many corporate users stuck with this old browser. Many things are broken in IE8; one of them is the implementation of Object.defineProperty.

I noticed it when using the polyfill for String.startsWith that you can find on MDN. It looks like this:

if (!String.prototype.startsWith) {
  Object.defineProperty(String.prototype, 'startsWith', {
    enumerable: false,
    configurable: false,
    writable: false,
    value: function(searchString, position) {
      position = position || 0;
      return this.lastIndexOf(searchString, position) === position;
    }
  });
}

While Object.defineProperty is fully supported in IE9, it is only partially supported in IE8: Internet Explorer 8 only supports defineProperty for DOM objects. If you use it on anything else, IE will complain as if Object.defineProperty didn’t exist at all.

This is especially bad since calling defineProperty will cause an error which will prevent the rest of your JavaScript code from running.

So what can we do about it?

The first thing I tried was:

if (defineProperty){
    Object.defineProperty(String.prototype, 'startsWith', {
        enumerable: false,
        configurable: false,
        writable: false,
        value: function (searchString, position) {
            position = position || 0;
            return this.indexOf(searchString, position) === position;
        }
    });
}

Unfortunately, I then got an error in the if condition itself, since defineProperty is not a defined global variable. So the easiest solution turned out to be to try to execute defineProperty and ignore exceptions:

if (!String.prototype.startsWith) {
    try {
        if (Object.defineProperty) {
            Object.defineProperty(String.prototype, 'startsWith', {
                enumerable: false,
                configurable: false,
                writable: false,
                value: function (searchString, position) {
                    position = position || 0;
                    return this.indexOf(searchString, position) === position;
                }
            });
        }
    } catch (e) { }
}

After that, you can check again whether startsWith exists and, if it doesn’t, add it as a plain function:

if (!String.prototype.startsWith) {
    String.prototype.startsWith = function(searchString, position) {
        position = position || 0;
        return this.indexOf(searchString, position) === position;
    };
}
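The two snippets can also be combined into a small helper (defineMethod is a hypothetical name, not part of any library): try Object.defineProperty first so the property is non-enumerable where supported, and fall back to a plain assignment on browsers like IE8 where it throws for non-DOM objects.

```javascript
function defineMethod(obj, name, fn) {
	try {
		Object.defineProperty(obj, name, {
			enumerable: false,
			configurable: false,
			writable: false,
			value: fn
		});
	} catch (e) {
		// IE8 path: the property will be enumerable, but functional
		obj[name] = fn;
	}
}

if (!String.prototype.startsWith) {
	defineMethod(String.prototype, 'startsWith', function (searchString, position) {
		position = position || 0;
		return this.indexOf(searchString, position) === position;
	});
}
```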


‘console’ is undefined

As shown in this post, logging to the console can be very useful when writing JavaScript code. Unfortunately, the JavaScript console is not evenly supported across browsers. First, you have browsers which don’t know it at all (like IE8) and will throw the error I used as the title of this post. And even browsers which do have a console object do not all implement the same functions.

Here’s a short overview of what’s supported by which browser (note that if you do not have the latest version of a specific browser, this particular version may not support all listed functions):

Function Firefox Chrome Internet Explorer Safari Opera
assert X  X  X
clear X  X
count X X  X  X  X
debug X X  X  X  X
dir X X  X  X
dirxml X  X  X  X
error X X  X  X
_exception X
group X X  X  X  X
groupCollapsed X X  X  X  X
groupEnd X X  X  X  X
info X X  X  X
log X X  X  X
markTimeline  X
profile X X  X
profileEnd X X  X
table X
time X X  X  X  X
timeEnd X X  X  X  X
timeline X
timelineEnd X
timeStamp X
trace X  X  X  X
warn X X  X  X

So even though there are quite a few functions supported by all browsers, you should still check whether the console exists and whether the particular function you want to use exists. But adding a check for the console and for the function before each call quickly becomes a pain.

A way to work around it is to:

  1. Create a dummy console object if console is not defined
  2. Create dummy functions for all functions not supported by the console object in this particular browser

The dummy console object is created by this code:

if (!window.console) {
	window.console = {};
}

Simple: if the global console variable doesn’t exist, create one as an empty object. Now that we’ve got either a real console object or a dummy one, we are rid of the error message saying that console is undefined. But we’ll still get errors when our code calls undefined functions. So let’s go through all functions and create a dummy for each missing one:

(function() {
	var funcList = ["assert", "clear", "count", 
		"debug", "dir", "dirxml", "error", 
		"_exception", "group", "groupCollapsed", 
		"groupEnd", "info", "log", "markTimeline", 
		"profile", "profileEnd", "table", "time", 
		"timeEnd", "timeline", "timelineEnd", 
		"timeStamp", "trace", "warn"];

	var funcName;

	for (var i = 0; i < funcList.length; i++) {
		funcName = funcList[i];
		if (!window.console[funcName]) {
			window.console[funcName] = function() {};
		}
	}
})();

Here are a few explanations of how this works. First, wrapping everything in (function() { … })() scopes the variables we define inside it, so that we do not pollute the global scope.

Then we define an array containing all known console functions and iterate over it. For each function, we check whether the console object (the real one or the dummy one) has such a function defined. If not, we assign an empty function. This works because functions of an object are just special properties of the object, and object properties can be accessed by name using bracket notation.

So, using this, you will get rid of all errors related to the console being undefined or console functions being undefined. But of course, since we add empty implementations, calling them will simply have no effect.

Of course, instead of using empty implementations you could log the calls to console functions somewhere (e.g. an array) so that you can access it. If I ever need it, I might actually write some code for it and update this post.
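As a sketch of that idea (consoleShim here is a stand-in for window.console so the snippet is self-contained), each missing function could record its name and arguments in an array instead of doing nothing:

```javascript
var consoleShim = { buffer: [] };

var funcList = ["log", "info", "warn", "error", "debug"];

// A factory is needed so each generated function remembers its own name
function makeBuffered(funcName) {
	return function () {
		consoleShim.buffer.push([funcName].concat(Array.prototype.slice.call(arguments)));
	};
}

for (var i = 0; i < funcList.length; i++) {
	if (!consoleShim[funcList[i]]) {
		consoleShim[funcList[i]] = makeBuffered(funcList[i]);
	}
}
```

Calling consoleShim.log("hello", 42) then pushes ["log", "hello", 42] into consoleShim.buffer, where it can be inspected later.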

Chrome: Resource interpreted as Font but transferred with MIME type font/x-woff

When loading one of my website in Chrome, I noticed the following error message in the JavaScript console:

Resource interpreted as Font but transferred with MIME type font/x-woff: “https://xxx.xxx/xxx.woff”.

Actually, it looks like it took too long to define an official MIME type for WOFF fonts, and a few different MIME types were used until the official one was defined:

  • font/x-woff
  • application/x-font-woff
  • application/font-woff – This is actually the official MIME type for WOFF fonts
  • font/woff

By default IIS7 will not know what to do with the WOFF font file, so you will get a 404 error when fetching it.

In my case I was using some third party files which defined the following in order to solve the 404 error:

  <system.webServer>
    <staticContent>
      <remove fileExtension=".woff" />
      <mimeMap fileExtension=".woff" mimeType="font/x-woff" />
    </staticContent>
  </system.webServer>

But this MIME type is no longer in use, so I tried the following:

  <system.webServer>
    <staticContent>
      <remove fileExtension=".woff" />
      <mimeMap fileExtension=".woff" mimeType="application/font-woff" />
    </staticContent>
  </system.webServer>

And the error in the console was gone. Actually using application/x-font-woff instead of application/font-woff also works. This is probably because for a very long time, Chrome didn’t support the new MIME type and really expected application/x-font-woff.

If you’re using Apache as a web server, you will need to add the following directive to your .htaccess file:

AddType application/font-woff .woff


The exception listener on JMS connections is never called

Since we were having a few issues with memory leaks while handling JMS connections, we wanted to set up an ExceptionListener (using the setExceptionListener method of the connection) to handle a connection failure and reconnect. Especially if you consume messages asynchronously, this seems like a good way to learn that your connection has failed so you can reconnect.

Unfortunately, we were never notified, although we could clearly see that the connection had failed.

The problem is that we were running in a JBoss container, and the Java EE specification says:

This method must not be used in a Java EE web or EJB application. Doing so may cause a JMSException to be thrown though this is not guaranteed.

This is actually true not only for setExceptionListener but for all the following methods of the JMS Connection class:

  • createConnectionConsumer
  • createSharedConnectionConsumer
  • createDurableConnectionConsumer
  • createSharedDurableConnectionConsumer
  • setClientID
  • setExceptionListener
  • stop

The first four of them are not allowed because of restrictions on the use of threads in the container; the other ones because they may interfere with the connection management functions of the container.

In the past this was not always enforced by container vendors, but in order to pass the compatibility test suites, they have started really enforcing this policy. So with many containers, you will get an IllegalStateException or a JMSException when invoking setExceptionListener, stating that you’re not allowed to call it.

About WordPress Multisite

WordPress Multisite has been around for quite some time now. But since many users still have a single-site installation and it’s not so easy to install WP Multisite using alternative ports (see this post), you will find, first, that not all plugins behave well in a multisite environment. Second, if you have a problem and google for it, you will most likely find solutions for single-site installations, and it’s not always easy to find out whether they will work in a multisite environment and, if not, how to adapt them.

That’s the reason why I started writing this post. First, I am a plugin author and need my plugins to work in a multisite environment. Second, I am considering moving from multiple single sites to a multisite installation myself.

What’s WordPress Multisite?

WordPress Multisite was introduced in WordPress 3.0. It basically allows you to host multiple sites in a single WordPress installation. This means that once your WordPress installation is converted to multisite, it becomes a network of sites, with one central site used to administrate the network and a series of virtual sites building up your network.

The sites in this network are not completely independent of one another. They share some features, but they also have their own data and settings. So it is important to understand what is shared and what isn’t.

Just before we really start: in the WordPress multi-user solution which was available before Multisite was introduced in WordPress 3.0, the wording was a little different. That’s why I sometimes tend to mix it all up. I’ll try to stick to the WP 3.0 wording but can’t promise I’ll always manage to do it.

What used to be called a Blog before WP 3.0 is now a site.
What used to be called a site before WP 3.0 is now a network.

What’s different with WordPress Multisite?

There are a few differences between a multisite based network of sites and a network of individual WordPress installations. Except for the WordPress PHP scripts themselves, a WordPress installation is basically made of some database content and the contents of the wp-content subdirectory.

wp-content

Let’s start with wp-content. It contains your plugins, themes and uploads (in three different directories).

Plugins

The plugins are installed for the whole network. Since you cannot install plugins per site, there is no difference in the directory structure.

Note that this doesn’t mean that when you install a plugin it has to be active on all sites in the network. But it does mean that if a plugin is not installed globally, a site administrator cannot install this plugin just for a single site.

It also means that when a plugin is updated, it’s updated for all sites in the network. This is a good thing, since you do not have to update each plugin for each site individually. On the other hand, if you know that one site has a problem with the new version of a plugin while another site needs the new version, there is no way to handle this in a WordPress Multisite installation.

But even though the installation files of a given plugin are the same for all sites in the network, this doesn’t mean that the settings of this plugin have to be the same for all sites. Depending on the plugin, you may want a site-specific configuration or a network-wide configuration. See this post to learn more about network-wide plugin settings.

Also since the plugin files are available globally, this means that as a plugin developer, you do not need to take care of Multisite installations when enqueuing files (JavaScript or CSS files).

Themes

Themes work the same way as plugins here. Themes are installed globally, and a user administering a site in the network can activate them for his site. You can also restrict themes for individual sites and make them available only to a subset of the sites.

When working with themes and Multisite, it is important to keep in mind that whenever you update a file of the theme, you’re not modifying the file for one site but for all sites using this theme. So if you use a theme and would like to change a few things e.g. some CSS, instead of modifying the theme itself, you should do one of the following:

  • Create a child theme to use in the individual sites
  • If it is already a child theme, create a copy of the theme before modifying it. The drawback is that you will no longer get automatic updates from wordpress.org since it’s not the original theme anymore
  • Use themes which provide you with means to make the adaptations you need in their settings
  • If all you want to change are CSS styles, there are also plugins allowing you to load some additional styles

Uploads

The uploads work differently. Unlike plugins and themes, uploads can be stored either at the network level or for a specific site.

So under the uploads directory, you will find one subfolder per year for the network-level uploads and a “sites” subdirectory. The “sites” subdirectory contains in turn one subdirectory per site. The names of these subdirectories are numbers representing the site ID. And in these site subdirectories, you will again find one subdirectory per year.
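The layout described above can be sketched as a small path-building helper (a hypothetical function for illustration only, not part of WordPress; WordPress itself resolves these paths via wp_upload_dir()):

```php
<?php
// Hypothetical helper illustrating the Multisite uploads layout:
// network-level uploads go directly under wp-content/uploads/<year>/<month>,
// while uploads of the individual sites go under
// wp-content/uploads/sites/<site ID>/<year>/<month>.
function multisite_upload_dir(int $site_id, int $year, int $month): string
{
    $base = 'wp-content/uploads';
    // The main site of the network (ID 1) has no "sites" subfolder.
    if ($site_id > 1) {
        $base .= "/sites/{$site_id}";
    }
    return sprintf('%s/%04d/%02d', $base, $year, $month);
}

echo multisite_upload_dir(1, 2014, 6), "\n"; // wp-content/uploads/2014/06
echo multisite_upload_dir(3, 2014, 6), "\n"; // wp-content/uploads/sites/3/2014/06
```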

The database

First of all, even in a Multisite installation, all tables, whether central tables or tables for individual sites in the network, are stored in a single MySQL database.

Assuming you chose the prefix “wp_” when installing WordPress, all central tables will be called wp_something and the site specific tables will be called wp_X_something, where X is the site ID.
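This naming scheme can be sketched as follows (a hypothetical helper for illustration, assuming the default “wp_” prefix; WordPress itself derives these names via $wpdb):

```php
<?php
// Hypothetical helper illustrating the Multisite table naming scheme:
// the central/main site (ID 1) uses the plain prefix (wp_posts, ...),
// every other site gets the site ID inserted (wp_3_posts, ...).
function multisite_table_name(string $prefix, int $site_id, string $table): string
{
    if ($site_id === 1) {
        return $prefix . $table;
    }
    return $prefix . $site_id . '_' . $table;
}

echo multisite_table_name('wp_', 1, 'posts'), "\n";   // wp_posts
echo multisite_table_name('wp_', 3, 'options'), "\n"; // wp_3_options
```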

The central site of your network will contain all tables you’d normally see in a single site WordPress installation. The individual sites in the network will only have a subset of these tables, containing the data which are stored per site:

  • wp_X_comments and wp_X_commentmeta contain data related to comments
  • wp_X_posts and wp_X_postmeta contain data related to posts
  • wp_X_terms, wp_X_term_relationships and wp_X_term_taxonomy contain data related to terms and taxonomies
  • wp_X_links contains site specific links
  • wp_X_options contains site-specific settings and options

wp_X_options also includes data about plugins and themes available on individual sites.

As said before, the central site also has all these tables. Additionally, it also has a few more database tables:

  • wp_blogs contains a list of all sites in the network. It’s still called “blogs” as in the old wording.
  • wp_blog_versions contains the db version of each site. I guess you usually don’t care about this table.
  • wp_users contains all users in the network
  • wp_usermeta contains data related to the users in wp_users
  • wp_registration_log contains a list of which users are registered on which site and when they registered
  • wp_signups contains a list of users which signed up on your network along with the activation key and when/whether they were activated
  • wp_site contains a list of networks and their primary domain. So you could have a WordPress installation with multiple networks, each network having multiple sites
  • wp_sitemeta contains additional information about the networks

Handling WordPress Multisite in your code

First of all, most WordPress functions you might be using are related to the current site. So when you get e.g. a list of categories using get_categories(), you will see only the categories for this one site. If you need to get e.g. the categories for all sites, it will require some extra work.

There are 5 functions which are relevant for such tasks:

  • get_current_site()
  • get_current_blog_id()
  • switch_to_blog()
  • restore_current_blog()
  • wp_get_sites()

get_current_site

This function returns a site object containing the following public variables:

  • id: this is the site ID (number)
  • domain: this is the domain name of the current site
  • path: this is the URL path of the current site
  • site_name: this is the name of the current site

You can also get the ID of the current site using get_current_blog_id().

switch_to_blog

This function takes a site ID as parameter. If the provided site ID doesn’t exist, it will return false.

Once you’re done doing whatever you need to do with this other site, you can get back to the original site using restore_current_blog().

restore_current_blog

You should call restore_current_blog() after every call to switch_to_blog(). The reason is that WordPress keeps track of site switches, and if you do not use restore_current_blog(), the global variable $GLOBALS['_wp_switched_stack'] will be left with stale entries.

An alternative is to clean up $GLOBALS['_wp_switched_stack'] yourself:

$GLOBALS['_wp_switched_stack'] = array();
$GLOBALS['switched'] = false;

wp_get_sites

This function returns an array with the list of all sites in the network. You can use it to get all sites and iterate through them, switching from one site to the next with switch_to_blog().

Iterating through sites

To iterate through the sites, e.g. to get all categories for all sites, you can do the following:

// store the ID of the original current site
$original_site_id = get_current_blog_id();

// get all sites
$sites = wp_get_sites();

// loop through the sites
foreach ( $sites as $site ) {

	// switch to this one site
	switch_to_blog( $site['blog_id'] );

	// do something here

}

// return to the original site
switch_to_blog( $original_site_id );

Note that by doing this, the global variables tracking the site switches will not be reset. So it is recommended to do the following instead:

// get all sites
$sites = wp_get_sites();

// loop through the sites
foreach ( $sites as $site ) {

	// switch to this one site
	switch_to_blog( $site['blog_id'] );

	// do something here

	// restore the original site
	restore_current_blog();
}

Or something like this:

// store the ID of the original current site
$original_site_id = get_current_blog_id();

// get all sites
$sites = wp_get_sites();

// loop through the sites
foreach ( $sites as $site ) {

	// switch to this one site
	switch_to_blog( $site['blog_id'] );

	// do something here

}

// return to the original site
switch_to_blog( $original_site_id );

// manually reset the global variables tracking site switching
$GLOBALS['_wp_switched_stack'] = array();
$GLOBALS['switched'] = false;

 TO BE CONTINUED…

Sybase ASE: using CPU affinity to prevent all engines from using the same CPU

Number of engines

ASE creates an OS-level process for each engine you define. You can change the number of engines by using the following command:

sp_configure "engine", N

(replace N by the number of engines you want to configure).

Usually, if your server is almost exclusively used for ASE and you have X CPU cores available, you will want to configure X-1 engines. So assuming you have a dedicated server with 4 CPU cores, you’ll want to configure 3 engines.
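On Linux, a starting point for this X-1 rule can be derived from the CPU count (a sketch; note that nproc reports logical CPUs, so with Hyper-Threading enabled you may want to base this on physical cores instead):

```shell
# Sketch: suggest an engine count of "logical CPUs - 1" on Linux.
cores=$(nproc)
engines=$((cores - 1))
echo "CPUs: $cores -> suggested engines: $engines"
```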

You can change this setting also in the SYBASE.cfg:

[Processors]
max online engines = 3
number of engines at startup = 3

The first line defines that there will be 3 engines and the second one that all 3 engines will be started by default.

In many cases, it makes sense to set both parameters to the same value so that you automatically use all available engines. But you can also set the second one to a lower value, benchmark the system with fewer engines, and then bring additional engines online one by one.

Increasing the maximum number of online engines beyond the number of available logical CPUs makes no sense. So I’d always recommend setting it to the total number of logical CPUs, or this number minus 1. Whether you bring them all online at startup or not depends on what else is running on the system and the specific workload of this server.

If you configure too many ASE engines for the underlying CPUs, you will observe a significant loss of throughput, due to the high number of involuntary context switches.
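On Linux you can inspect the context-switch counters of a dataserver process in /proc (a sketch; /proc/self is used here as a stand-in, replace "self" with an actual dataserver PID):

```shell
# Sketch: show voluntary and involuntary (nonvoluntary) context switches
# for a process. Replace "self" with the PID of a dataserver process.
pid=self
grep ctxt_switches /proc/$pid/status
```

A rapidly growing nonvoluntary_ctxt_switches counter on the engine processes is a hint that too many engines are competing for too few CPUs.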

Hyper-Threading

Hyper-Threading creates “virtual CPUs”. So an application running on a system where Hyper-Threading is enabled will think there are twice as many CPUs as are physically available. ASE does not distinguish between virtual CPUs and real CPUs.

Although Hyper-Threading provides the ability to run two ASE engines on one physical processor, keep in mind that this is still not equivalent to running two engines on two physical processors.

In many cases, you should consider switching off Hyper-Threading. Unless you only have very few physical CPU cores available, HT will probably not bring the expected benefits. You might run into problems because ASE doesn’t see that two logical CPUs are backed by the same physical CPU: instead of distributing the load across physical CPUs, it may schedule two concurrent queries on the two logical CPUs sharing one physical CPU (although theoretically, there should be no difference in performance between a virtual CPU and a real one).

But keep in mind that whether HT brings performance benefits or, on the contrary, makes your system slower really depends on the system itself: on your actual hardware and workload. So benchmarking it on the specific system is still a good idea.

ASE 15.7 comes with a threaded kernel. It takes advantage of threaded CPU architectures. It can thus reduce context switching to threads instead of processes, which brings a performance boost. But this is not related to Hyper-Threading.

Using the default kernel for ASE 15.7, each engine is a thread which lives in a thread pool instead of being an OS process (which was already the case for ASE on Windows even before ASE 15.7).

CPU Affinity

The processes for the different ASE engines have by default no affinity to the physical or virtual processors. Usually, it is not required to force any CPU affinity as ASE will handle it properly.

Here’s an example with 3 engines running and 3 heavy queries running in parallel:

[Screenshot: top output, ASE parallel queries, good case]

Here you see that there are 4 CPUs and 3 engines running, using CPU0, CPU1 and CPU3. You can also press “f”, then “j” and Enter in top to display an additional column which explicitly shows which CPU each engine is using:

[Screenshot: top output, ASE parallel queries, good case, with CPU number column]

The column “P” shows that the 3 dataserver processes use CPUs 0, 1 and 3.
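Instead of top, you can also query the last-used CPU of each process with ps (a sketch; the PSR column is the processor the process last ran on, and “dataserver” is the engine process name used above):

```shell
# Sketch: list PID, last-used CPU (PSR) and command name,
# filtered for the ASE engine processes.
ps -eo pid,psr,comm | awk '/dataserver/ { print }'
```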

In some cases (I’m not sure when or why this happens), you will see all dataserver processes using the same CPU even though they are processing different queries. Since multiple tasks have to be handled by the same CPU, each task becomes slower and there is a lot of overhead due to task switching.

If this happens, you can use the “dbcc tune” command to configure a CPU affinity:

dbcc tune(cpuaffinity, -1, "on")

The -1 parameter is the start CPU, which is always skipped: binding starts at the following CPU. So setting it to -1 means that:

  • The first engine will be bound to CPU0
  • The second one to CPU1
  • The third one to CPU2

If you want to keep CPU0 for other processes, you’d use:

dbcc tune(cpuaffinity, 0, "on")

This will do the following:

  • The first engine will be bound to CPU1
  • The second one to CPU2
  • The third one to CPU3

After that you should see that all dataserver processes are using different CPUs.

Note: The setting will be active only until the ASE server is restarted. So the dbcc tune command must be reissued each time ASE is restarted.

Also note that some operating systems do not support CPU affinity. In this case, the dbcc tune command will be silently ignored.