Angular: Using a service to listen to DOM events

In an Angular Component or a Directive, you can use the @HostListener decorator to listen to DOM events e.g.:

@HostListener('document:click', ['$event'])
private onClick(event: Event) {
    ...
}

Using the decorator is very convenient. Unfortunately, it doesn’t work in Angular services. In a service, you have two options:

  1. Using addEventListener directly (on document or window)
  2. Using the RxJS fromEvent function

The first option looks like this:

document.addEventListener('click', () => {
    ...
});

With the second option, things look a bit more Angular-like:

import { fromEvent } from 'rxjs';

...

fromEvent(document, 'click').subscribe(() => {
  ...
});
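
One thing to keep in mind with the fromEvent approach is cleaning up the subscription once the service is no longer needed. A minimal sketch (assuming you trigger the cleanup yourself, e.g. from the service's ngOnDestroy):

import { fromEvent } from 'rxjs';

// Keep a reference to the subscription so the listener can be removed later
const clickSubscription = fromEvent(document, 'click').subscribe(() => {
  // handle the click
});

// Later, when the service is destroyed:
clickSubscription.unsubscribe();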

In order to test this (no matter which option you’ve chosen above), you just need to do the following:

document.dispatchEvent(new MouseEvent('click'));

 

R: libgfortran – Library not loaded

If you get the following error message after installing R on your Mac:

$ R
dyld: Library not loaded: /usr/local/lib/gcc/5/libgfortran.3.dylib
  Referenced from: /usr/local/Cellar/r/3.2.0_1/R.framework/Versions/3.2/Resources/lib/libR.dylib
  Reason: image not found
Trace/BPT trap: 5

You’ll need to reinstall gcc with the --with-fortran option:

brew reinstall gcc5 --with-fortran

 

AngularJS and Kendo UI: Watchers for Grid and Tree List

When you use the Kendo UI Grid or Tree List widgets in an AngularJS application, you will probably notice that with long grids/lists or with many columns, you end up with quite a few watchers (basically one watcher per cell). Unfortunately, it is not (yet) possible to use one-time binding. The AngularJS documentation recommends keeping the number of watchers under 2000 in order not to hurt the application performance (a higher number creates a high load during digest cycles).

The reason why so many watchers are created is that the Kendo UI directives compile (using $compile) all cells so that you can use AngularJS expressions and directives in your column templates.

Disclaimer: All the instructions below only make sense if you do not need two-way binding in your grid or tree list rows. If you do, then you actually need these watchers.

Currently, the only way to prevent this is to initialize the Kendo Grid (or Tree List) widgets in your controller instead of using the Kendo directives, i.e. replacing this:

<kendo-treelist
	id="treelist"
	k-options="treelistKendoOptions"
	k-scope-field="treelistScope"
	k-columns="vm.treeListColumns"
	k-auto-bind="false">
</kendo-treelist>

with a simple div:

<div id="treelist"></div>

And creating the tree list (or the grid) in your controller:

$("#treelist").kendoTreeList($scope.treelistKendoOptions);

Additionally, you’ll have to replace the attributes you had in your HTML code when using the directive with options or additional code. In my case, I had to move k-auto-bind to the autoBind property in the options:

$scope.treelistKendoOptions = {
	...
	autoBind: false,
	...
};

Another attribute we were using is k-scope-field. This attribute defines a scope variable to which the Grid or Tree List should be bound. You can then call methods of the widget in your controller. The same can also be achieved when instantiating the widget from your controller:

$("#treelist").kendoTreeList($scope.treelistKendoOptions);
$scope.treelistScope = $("#treelist").data("kendoTreeList");

Of course, if you use a Grid and not a Tree List, you’d use kendoGrid instead of kendoTreeList.

Once you’ve done this, you’ll see that the number of watchers has been greatly reduced. But you might also see that the contents of some columns are broken. This basically happens whenever you use an AngularJS expression (e.g. calling some method on the scope) in your column template, e.g.:

template: "<span>{{ versionFormat(dataItem.Version) }}</span>"

Since we’re not in the Angular world anymore, the templates are not compiled anymore (after all, that’s exactly what we wanted to prevent). So you’ll need to move the logic you had in the template to the code preparing the data for the data source. In my example above, I’d call versionFormat for every row and replace dataItem.Version with the output value.
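
A minimal sketch of this pre-processing could look like the following (rawRows is just a placeholder for however you load your data, it is not part of the Kendo API):

$scope.treelistKendoOptions = {
	dataSource: {
		data: rawRows.map(function (row) {
			// Apply the formatting up front since the template is no longer compiled by Angular
			row.FormattedVersion = $scope.versionFormat(row.Version);
			return row;
		})
	},
	columns: [
		// The column now just displays the pre-computed value
		{ field: "FormattedVersion", title: "Version" }
	],
	autoBind: false
};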

 

IDX10638: Cannot created the SignatureProvider, ‘key.HasPrivateKey’ is false, cannot create signatures. Key: Microsoft.IdentityModel.Tokens.RsaSecurityKey.

After updating the Microsoft.IdentityModel.Tokens library we were getting the following error message when creating JWT tokens:

System.InvalidOperationException
HResult=0x80131509
Message=IDX10638: Cannot created the SignatureProvider, ‘key.HasPrivateKey’ is false, cannot create signatures. Key: Microsoft.IdentityModel.Tokens.RsaSecurityKey.
Source=Microsoft.IdentityModel.Tokens
StackTrace:
at Microsoft.IdentityModel.Tokens.AsymmetricSignatureProvider..ctor(SecurityKey key, String algorithm, Boolean willCreateSignatures)
at Microsoft.IdentityModel.Tokens.CryptoProviderFactory.CreateSignatureProvider(SecurityKey key, String algorithm, Boolean willCreateSignatures)
at Microsoft.IdentityModel.Tokens.CryptoProviderFactory.CreateForSigning(SecurityKey key, String algorithm)
at System.IdentityModel.Tokens.Jwt.JwtSecurityTokenHandler.CreateEncodedSignature(String input, SigningCredentials signingCredentials)
at System.IdentityModel.Tokens.Jwt.JwtSecurityTokenHandler.WriteToken(SecurityToken token)

The code where this happened looked like this and was working fine before the update:

var buffer = Convert.FromBase64String(Base64Cert);
var signingCertificate = new X509Certificate2(buffer, CertificatePassword);

var identity = new ClaimsIdentity(Claims);
var data = new AuthenticationTicket(identity, null);

if (signingCertificate.PrivateKey is RSACryptoServiceProvider rsaProvider)
{
	var key = new RsaSecurityKey(rsaProvider);
	var signingCredentials = new SigningCredentials(key, SecurityAlgorithms.RsaSha256Signature);

	var token = new JwtSecurityToken(
		issuer: TokenIssuer,
		audience: TokenAudience,
		claims: data.Identity.Claims,
		notBefore: DateTime.UtcNow,
		expires: DateTime.UtcNow.AddMinutes(TokenValidityInMinutes),
		signingCredentials: signingCredentials
	);

	var tokenString = new JwtSecurityTokenHandler().WriteToken(token);
	Console.WriteLine(tokenString);
}
else
{
	Console.Error.WriteLine("signingCertificate.PrivateKey is not an RSACryptoServiceProvider");
}

Debugging the code, I saw that signingCertificate.HasPrivateKey was true but key.HasPrivateKey was false.

In order to solve it, two small changes were required:

  1. Add a keyStorageFlags parameter to the X509Certificate2 constructor so that the imported keys are marked as exportable
  2. Use the ExportParameters method to retrieve the raw RSA key in the form of an RSAParameters structure including private parameters.

So using the following code worked without exception:

var buffer = Convert.FromBase64String(Base64Cert);
var signingCertificate = new X509Certificate2(buffer, CertificatePassword, X509KeyStorageFlags.Exportable);

var identity = new ClaimsIdentity(Claims);
var data = new AuthenticationTicket(identity, null);

if (signingCertificate.PrivateKey is RSACryptoServiceProvider rsaProvider)
{
	var key = new RsaSecurityKey(rsaProvider.ExportParameters(true));
	var signingCredentials = new SigningCredentials(key, SecurityAlgorithms.RsaSha256Signature);

	var token = new JwtSecurityToken(
		issuer: TokenIssuer,
		audience: TokenAudience,
		claims: data.Identity.Claims,
		notBefore: DateTime.UtcNow,
		expires: DateTime.UtcNow.AddMinutes(TokenValidityInMinutes),
		signingCredentials: signingCredentials
	);

	var tokenString = new JwtSecurityTokenHandler().WriteToken(token);
	Console.WriteLine(tokenString);
}
else
{
	Console.Error.WriteLine("signingCertificate.PrivateKey is not an RSACryptoServiceProvider");
}

 

Cross domain and cross browser web workers

What are web workers?

A web worker is a script that runs in the background in an isolated way. Workers run in a separate thread and can perform tasks without interfering with the user interface. Since the scripts on a page are executed in a single thread of execution, a long-running script will make the page unresponsive. Web workers allow you to hide this from the user and let the browser continue with normal operation while the script runs in the background.

Limitations of web workers

Web workers are great because they can perform computationally expensive tasks without interrupting the user interface. But they also bring quite a few limitations:

  1. Web workers do not have access to the DOM
  2. They do not have access to the document object
  3. These workers do not have access to the window object
  4. Web workers do not have direct access to the parent page.
  5. They will not work if the web page is being served from a file:// URL
  6. You are limited by the same origin policy i.e. the worker script must be served from the same domain (including the protocol) as the script that is creating the worker

The first four limitations mean that you cannot move all your JavaScript logic to web workers. The fifth one means that even when developing, you will need to serve your page through a web server (which can be on localhost).

The purpose of this article is to see how to work around the last limitation (same origin policy). But first let’s briefly see how to use a worker.

How to use web workers

Creating a worker is first of all pretty straightforward:

//Creating the worker
var worker = new Worker(workerUrl);

//Registering a callback to process messages from the worker
worker.onmessage = function(event) { ... };

//Sending a message to the worker
worker.postMessage("Hey there!");

//Terminating the worker
worker.terminate();
worker = undefined;

In the worker things then work in a similar way:

//Registering a callback to process messages from the parent
self.onmessage = function (event) { ... };

//Sending a message to the parent
self.postMessage("Hey there!");
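
Putting both sides together, a complete (hypothetical) worker.js could look like this:

//worker.js
self.onmessage = function (event) {
	//Do some (potentially expensive) work with the received data
	var result = event.data + " (processed by the worker)";

	//Send the result back to the parent page
	self.postMessage(result);
};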

Now the problem I had was that I needed to run a worker provided by a partner and therefore served from a different domain. So new Worker(...); will fail with an error similar to this:
Uncaught SecurityError: Failed to construct 'Worker': Script at 'xxx' cannot be accessed from origin 'xxx'

Cross domain workers

So the browser will not allow you to create a worker with a URL pointing to a different domain. But it will allow you to create a blob URL which can be used to initialize your worker.

Blob URLs

A blob is in general just a chunk of data which doesn’t necessarily need to be JavaScript code, but it can be. You can then have the browser internally generate a URL for it. This URL uses a pseudo protocol called “blob”, so you get a URL in this form: blob:origin/UID. The origin is the origin of the page where you create the blob URL and the UID is a generated unique ID, e.g. blob:https://mydomain/8126d58c-edbc-ee14-94a6-108b8f215304.

A blob can be created this way:

var blob = new Blob(["some JavaScript code;"], { "type": 'application/javascript' });

The following browser versions seem to support the Blob constructor: IE 10, Edge 12, Firefox 13, Chrome 20, Safari 6, Opera 12.1, iOS Safari 6.1, Android browser/Chrome 53. So if you want to support older versions, you will need to fall back to the BlobBuilder interface, which has been deprecated in favor of the newly introduced Blob constructor in newer browsers:

var blobBuilder = new (window.BlobBuilder || window.WebKitBlobBuilder || window.MozBlobBuilder)();
blobBuilder.append("some JavaScript code;");
var blob = blobBuilder.getBlob('application/javascript');

In order to support old and new browsers, you will want to try using the Blob constructor and revert to the BlobBuilder in case you get an exception:

var blob;
try {
	blob = new Blob(["some JavaScript code;"], { "type": 'application/javascript' });
} catch (e) {
	var blobBuilder = new (window.BlobBuilder || window.WebKitBlobBuilder || window.MozBlobBuilder)();
	blobBuilder.append("some JavaScript code;");
	blob = blobBuilder.getBlob('application/javascript');
}

You can then generate a URL object from the blob:

var url = window.URL || window.webkitURL;
var blobUrl = url.createObjectURL(blob);

Finally, you can create your web worker using this URL object:

var worker = new Worker(blobUrl);

Now, the piece of JavaScript you want to have in your blob would be this one liner which will load the worker file:

importScripts('https://mydomain.com/worker.js');

So a method to load, create and return the worker both in case we are in a same-domain scenario or in a cross-domain scenario would look like this:

function createWorker (workerUrl) {
	var worker = null;
	try {
		worker = new Worker(workerUrl);
	} catch (e) {
		try {
			var blob;
			try {
				blob = new Blob(["importScripts('" + workerUrl + "');"], { "type": 'application/javascript' });
			} catch (e1) {
				var blobBuilder = new (window.BlobBuilder || window.WebKitBlobBuilder || window.MozBlobBuilder)();
				blobBuilder.append("importScripts('" + workerUrl + "');");
				blob = blobBuilder.getBlob('application/javascript');
			}
			var url = window.URL || window.webkitURL;
			var blobUrl = url.createObjectURL(blob);
			worker = new Worker(blobUrl);
		} catch (e2) {
			//if it still fails, there is nothing much we can do
		}
	}
	return worker;
}

Cross-browser support

Unfortunately, we still have another problem to handle: in some browsers, the failed creation of a web worker in the cross-domain scenario will not result in an exception but in an unusable worker. In this case, an error is raised as an event on the worker. So you need to treat this as feedback that the creation of the worker failed and that the fallback with the blob URL should be used.

In order to do this, you should probably first extract the fallback into its own function:

function createWorkerFallback (workerUrl) {
	var worker = null;
	try {
		var blob;
		try {
			blob = new Blob(["importScripts('" + workerUrl + "');"], { "type": 'application/javascript' });
		} catch (e) {
			var blobBuilder = new (window.BlobBuilder || window.WebKitBlobBuilder || window.MozBlobBuilder)();
			blobBuilder.append("importScripts('" + workerUrl + "');");
			blob = blobBuilder.getBlob('application/javascript');
		}
		var url = window.URL || window.webkitURL;
		var blobUrl = url.createObjectURL(blob);
		worker = new Worker(blobUrl);
	} catch (e1) {
		//if it still fails, there is nothing much we can do
	}
	return worker;
}

Now we can implement the logic to handle the different cases:

var worker = null;
try {
	worker = new Worker(workerUrl);
	worker.onerror = function (event) {
		event.preventDefault();
		worker = createWorkerFallback(workerUrl);
	};
} catch (e) {
	worker = createWorkerFallback(workerUrl);
}

Of course, you could save yourself this try/catch/onerror logic and just directly use the fallback which should also work in all browsers.

Another option I’ve been using is to still try to initialize the worker with this logic, but only in same-domain scenarios.

In order to do this, you’d first need to implement a check whether we are in a same-domain or a cross-domain scenario, e.g.:

function testSameOrigin (url) {
	var loc = window.location;
	var a = document.createElement('a');
	a.href = url;
	return a.hostname === loc.hostname && a.port === loc.port && a.protocol === loc.protocol;
}

It just creates an anchor tag (which will not be attached to the DOM), sets the URL and then checks the different parts of the URL relevant for identifying the origin (protocol, hostname and port).

With this function, you can then update the logic in this way:

var worker = null;
try {
	if (testSameOrigin(workerUrl)) {
		worker = new Worker(workerUrl);
		worker.onerror = function (event) {
			event.preventDefault();
			worker = createWorkerFallback(workerUrl);
		};
	} else {
		worker = createWorkerFallback(workerUrl);
	}
} catch (e) {
	worker = createWorkerFallback(workerUrl);
}

This may all sound overly complex just to end up using a web worker, but unfortunately, because of cross-domain restrictions and implementation inconsistencies between browsers, you very often need to have such things in your code.

 

 

AngularJS: Sharing data between controllers

Even though you should try to keep things decoupled and your directives, controllers and services should rather be self-contained, you sometimes do need to share data between controllers. There are basically two main scenarios:

  1. Sharing data between a parent and a child controller
  2. Sharing data between two mostly unrelated controllers e.g. two siblings

Sharing data between a parent and a child controller

Let’s assume you have two controllers, one controller being the parent controller and a child controller:

<div ng-controller="ParentCtrl">
  <div ng-controller="ChildCtrl as vm">
  </div>
</div>

Where the controllers are defined as:

var app = angular.module('sharing', []);

app.controller('ParentCtrl', function($scope) {
});

app.controller('ChildCtrl', function($scope) {
});

Let’s now define a user name in the parent controller:

app.controller('ParentCtrl', function($scope) {
  $scope.user = {
    name: "Henri"
  };
});

Note that you shouldn’t define your variable as a primitive on the scope since this will cause a shadow property to appear in the child scope, hiding the original property on the parent scope (caused by JavaScript prototype inheritance). So you should define an object in the parent scope and define the shared variable as a property of this object.

There are now three ways to access this shared variable:

  1. using $parent in HTML code
  2. using $parent in child controller
  3. using controller inheritance

Using $parent in HTML code

You can directly access variables in the parent scope by using $parent:

Hello {{$parent.user.name}}

Using $parent in child controller

Of course, you could also reference the shared variable using $parent in the controller and expose it in the scope of the child controller:

app.controller('ChildCtrl', function($scope) {
  $scope.parentUser = $scope.$parent.user;
});

Then, you can use this scope variable in your HTML code:

Hello {{parentUser.name}}

Using controller inheritance

But since the scope of a child controller inherits from the scope of the parent controller, the easiest way to access the shared variable is actually not to do anything. All you need to do is:

Hello {{user.name}}

Sharing data between two mostly unrelated controllers e.g. two siblings

If you need to share data between two controllers which do not have a parent-child relationship, you can neither use $parent nor rely on prototype inheritance. In order to still be able to share data between such controllers, you have three possibilities:

  1. Holding the shared data in a factory or service
  2. Holding the shared data in the root scope
  3. Using events to notify other controllers about changes to the data

Holding the shared data in a factory or service

AngularJS factories (and services) can contain both methods (business logic) and properties (data) and can be injected in other components (e.g. your controllers). This allows you to define a shared variable in a factory, inject it in both controllers and thus bind scope variables in both controllers to this factory data.

The first step is to define a factory holding a shared value:

app.factory('Holder', function() {
  return {
    value: 0
  };
});

Then you inject this factory in your two controllers:

app.controller('ChildCtrl', function($scope, Holder) {
  $scope.Holder = Holder;
  $scope.increment = function() {
    $scope.Holder.value++;
  };
});

app.controller('ChildCtrl2', function($scope, Holder) {
  $scope.Holder = Holder;
  $scope.increment = function() {
    $scope.Holder.value++;
  };
});

In both controllers, we bind the Holder factory to a scope variable and define a function which can be called from the UI and updates the value of the shared variable:

<div ng-controller="ChildCtrl">
  <h2>First controller</h2>
  <button ng-click="increment()">+</button>{{Holder.value}}
</div>
<div ng-controller="ChildCtrl2">
  <h2>Second controller</h2>
  <button ng-click="increment()">+</button>{{Holder.value}}
</div>

No matter which “+” button you press, both values will be incremented (or rather the shared value will be incremented and reflected in both scopes).

Holding the shared data in the root scope

Of course, instead of using a factory or a service, you can also directly hold the shared data in the root scope and reference it from any controller. Although this actually works fine, it has a few disadvantages:

  1. Whatever is present in the root scope is inherited by all scopes
  2. You need to use some naming conventions to prevent multiple modules or libraries from overwriting each other’s data

In general, it’s much cleaner to encapsulate the shared data in dedicated factories or services which are injected into the components requiring access to this shared data than to make the data global variables in the root scope.
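
For completeness, a minimal sketch of the root scope approach (a standalone variant of the counter example, not meant to be combined with the factory above) could look like this:

app.run(function($rootScope) {
  $rootScope.shared = { value: 0 };
});

app.controller('ChildCtrl', function($scope, $rootScope) {
  $scope.increment = function() {
    $rootScope.shared.value++;
  };
});

Since every scope inherits from the root scope, {{shared.value}} can then be displayed directly from any template.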

Using events to notify other controllers about changes to the data

In case you do not want to bind both scopes through factory data (e.g. because you only want to propagate changes from one scope to another on some condition), you can also rely on event notifications between the controllers to sync the data. There are three functions provided by AngularJS to handle events:

  • $emit is used to trigger an event and propagate it to the current scope and recursively to all parent scopes
  • $broadcast is used to trigger an event and propagate it to the current scope and recursively to all child scopes
  • $on is used to listen to event notifications on the scope

Using $emit

Since $emit is propagating events up in the scope hierarchy, there are two use cases for it:

  • Propagating events to parent controllers
  • Efficiently propagating events to unrelated controllers through the root scope

In the first scenario, you emit an event on the child controller scope:

$scope.$emit("namechanged", $scope.name);

And listen to this event on the parent controller scope:

$scope.$on("namechanged", function(event, name) {
  $scope.name = name;
});

In the second scenario, you emit an event on the root scope:

$rootScope.$emit("namechanged", $scope.name);

And listen to this event on the root scope as well:

$rootScope.$on("namechanged", function(event, name) {
  $scope.name = name;
});

In this case there is effectively no further propagation of the event since the root scope has no parent scope. It is thus the preferred way to propagate events to unrelated scopes (and should be preferred to $broadcast in such scenarios).

There is one thing you need to consider when registering for events on the root scope: in order to avoid leaks when controllers are created and destroyed multiple times, you need to unregister the event listeners. $on returns a deregistration function for this purpose. You just need to register this function as a handler for the $destroy event in your controller, replacing the code above with:

var destroyHandler = $rootScope.$on("namechanged", function(event, name) {
  $scope.name = name;
});

$scope.$on('$destroy', destroyHandler);

Using $broadcast

Theoretically, you could also use $broadcast to cover two scenarios:

  • Propagating events to child controllers
  • Propagating events to unrelated controllers through the root scope

Effectively, the second use case doesn’t make much sense since you would basically trigger an event on the root scope and propagate it to all child scopes which is much less efficient than propagating and listening to events on the root scope only.

In the first scenario, you broadcast an event on the parent controller scope:

$scope.$broadcast("namechanged", $scope.name);

And listen to this event on the child controller scope:

$scope.$on("namechanged", function(event, name) {
  $scope.name = name;
});

Similarities and differences between $emit and $broadcast

Both $emit and $broadcast dispatch an event through the scope hierarchy notifying the registered listeners. In both cases, the event life cycle starts at the scope on which the function was called. Both functions will pass all exceptions thrown by the listeners to $exceptionHandler.
An obvious difference is that $emit propagates upwards and $broadcast downwards (in the scope hierarchy). Another difference is that when you use $emit, the event will stop propagating if one of the listeners cancels it while the event cannot be canceled when propagated with $broadcast.


When to use $timeout

$timeout is the Angular equivalent of setTimeout in JavaScript. It is basically a wrapper around window.setTimeout. So the basic functionality provided by $timeout is to have a piece of code executed asynchronously. As JavaScript doesn’t support thread spawning, asynchronously here means that the execution of the function is delayed.

Differences to setTimeout

There are basically two differences between using $timeout and using setTimeout directly:

  1. $timeout will by default execute $apply on the root scope after the function is executed. This will cause model dirty checking to be run.
  2. $timeout wraps the call of the provided function in a try/catch block and forwards any thrown exception to the global exception handler (the $exceptionHandler service).

Parameters

All parameters of $timeout are optional (although it of course doesn’t make much sense to call it without any parameters).

The first parameter is a function whose execution is to be delayed.

The second parameter is a delay in milliseconds. You should not rely on the delay being respected exactly though. The minimum delay is 4 milliseconds in all modern browsers (you can set a smaller value but will probably not see a difference).

The third parameter (invokeApply) is a flag (i.e. a boolean, true or false) which, when set to false, will cause $timeout not to execute $apply once the function is executed.

All additional parameters will be handled as parameters for the provided function and will be passed to the called function.

Delayed function execution

So two of the scenarios where you would want to use $timeout (or setTimeout in this case) are:

either when you want to execute a function later on:

var executeInTenSeconds = function() {
    //Code executed 10 seconds later
};

$timeout(executeInTenSeconds, 10000);

or when you want to execute a function when the execution of the current block is finished:

var executeLater = function() {
    //Code executed once we're done with the current execution
};

$timeout(executeLater);

Additional parameters

$timeout passes all parameters after the third one to the function being called. You can thus pass parameters to the function like this:

var executeInTenSeconds = function(increment) {
    $scope.myValue += increment;
};

$timeout(executeInTenSeconds, 10000, true, 10);

This will basically execute executeInTenSeconds(10); after 10 seconds, forward any unhandled exception to the global exception handler and run a digest cycle afterwards.

Model dirty checking

A scenario where you’d rather use $timeout than settimeout is when you are modifying the model and need a digest cycle (dirty check) to run after the provided function is executed e.g.:

var executeInTenSeconds = function() {
    //Code executed 10 seconds later
	$scope.myScopeVar = "hello";
};

$timeout(executeInTenSeconds, 10000);

After 10 seconds our function will be called, it will change the value of a scope variable and after that a digest cycle will be triggered which will update the UI.

But there are cases when you actually do not need model dirty checking (e.g. when you call the server but do not need to reflect the results of this call in your application). In such cases, you should use the third parameter of $timeout (invokeApply), e.g.:

var executeInTenSeconds = function() {
    //Code executed 10 seconds later and which doesn't require a digest cycle	
};

$timeout(executeInTenSeconds, 10000, false);

When the third parameter of $timeout is set to false, $timeout will skip the $rootScope.$apply() which is usually executed after the provided function.

Global exception handler

Since $timeout also wraps the provided function in a try/catch block and forwards all unhandled exceptions to the global exception handler, you may be tempted to use it for this purpose in other cases too. I personally feel this is just a solution for lazy developers, as you should rather wrap your function code in a try/catch block yourself, e.g.:

function($exceptionHandler) {
	try {
	  // Put here your function code
	} catch (e) {
	  // Put here some additional error handling call
	  $exceptionHandler(e); // Trigger the global exception handler
	}
	finally {
	  // Put here some cleanup logic
	}
}

And instead of duplicating this code everywhere you should probably consider writing your own reusable provider e.g.:

'use strict';

function $ExceptionWrapperProvider() {
  this.$get = ['$exceptionHandler', function($exceptionHandler) {

    function exceptionWrapper(fn) {
        try {
          // Pass any additional arguments (everything after fn itself) on to the wrapped function
          return fn.apply(null, Array.prototype.slice.call(arguments, 1));
        } catch (e) {
          $exceptionHandler(e);
        }
        return null;
    }

    return exceptionWrapper;
  }];
}
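
You would then register this provider with your module and inject the resulting service wherever you need it, e.g. (assuming a module named app):

app.provider('exceptionWrapper', $ExceptionWrapperProvider);

app.controller('SomeCtrl', function($scope, exceptionWrapper) {
	exceptionWrapper(function() {
		// Put here your function code
	});
});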

$timeout without function

All parameters of $timeout are optional, even the function. So why would you need to call $timeout without a function? Basically, if you call $timeout without a function, there is no function execution, thus no exceptions, and all that remains is the delay and the digest cycle. So calling $timeout without a function, e.g.:

$timeout(10000);

This basically just triggers a digest cycle after the provided delay. In the example above, it would cause a digest cycle to be run after 10 seconds. But of course doing this is probably a sign that something’s wrong in your application (you shouldn’t need to run delayed digest cycles but should rather run them after some logic has been executed). But if something runs asynchronously outside of your code, you do not get called back when it’s done, and this external code somehow modifies something in the Angular scopes, you might need to handle it this way. In this case, using $timeout is not really different from using:

setTimeout(function() {
	$rootScope.$apply();
}, 10000);

It’s just fewer lines of code…

Failed to initialize the PowerShell host while installing packages

While trying to install the NEST NuGet package, I got the following error when the JSON.NET post-install PowerShell script was executed:

Failed to initialize the PowerShell host. If your PowerShell execution policy setting is set to AllSigned, open the Package Manager Console to initialize the host first.

I then tried to update the execution policy by executing the following in a PowerShell opened as Administrator:

start-job { Set-ExecutionPolicy Unrestricted } -RunAs32 | wait-job | Receive-Job

Unfortunately, this failed with the following error message:

Windows PowerShell updated your execution policy successfully, but the setting is overridden by a policy defined at a
more specific scope. Due to the override, your shell will retain its current effective execution policy of
RemoteSigned. Type “Get-ExecutionPolicy -List” to view your execution policy settings. For more information please see
“Get-Help Set-ExecutionPolicy”.
+ CategoryInfo : PermissionDenied: (:) [Set-ExecutionPolicy], SecurityException
+ FullyQualifiedErrorId : ExecutionPolicyOverride,Microsoft.PowerShell.Commands.SetExecutionPolicyCommand

I then tried to set it to RemoteSigned instead of Unrestricted:

start-job { Set-ExecutionPolicy RemoteSigned } -RunAs32 | wait-job | Receive-Job

This didn’t cause any error but even after restarting Visual Studio I was not able to install JSON.NET.

The only thing that worked was reinstalling the NuGet Package Manager for Visual Studio:

  1. In Tools -> Extensions and Updates, uninstall NuGet Package Manager for Visual Studio
  2. Restart Visual Studio
  3. In Tools -> Extensions and Updates, install NuGet Package Manager for Visual Studio
  4. Restart Visual Studio

And after that, I could install NEST including JSON.NET!

 

Cross-document communication with iframes

Using iframes (inline frames) is often considered bad practice since it can hurt you from an SEO point of view (contents of the iframes will not be indexed by search engines). But whenever you have an application which doesn’t require indexing of contents (e.g. because the content is only visible after the user has been authenticated and authorized) or you need to embed content from other web sites/apps, iframes provide a nice mechanism to include content in your app and ensure that this doesn’t cause any major security issues.

Please refer to the MDN which contains a good description of iframes and a few examples.

Accessing an iframe and its content

The first step when using iframes is of course to define an iframe tag in your HTML code, which defines where in the DOM the external resource will be inserted:

<iframe id="iframe1"></iframe>

Now that you have added this tag to your HTML code, you will most probably want to access it with JavaScript to set a URL to be loaded, define how the iframe contents should be displayed (e.g. the width and height of the iframe) and maybe access some of the DOM elements in the iframe. This section will show you how this can be done.

Please keep in mind that things are relatively easy when working with iframes whose contents are loaded from the same host/domain. If you work with contents from other hosts/domains, you’ll need to have a look at the next sections as well.

Setting the URL and styles of an iframe

Setting the URL whose contents need to be loaded in the iframe just means setting the src property of the iframe object. And styling it can be done by using the style property. Here’s a short example:

var iframe1 = document.getElementById('iframe1');
iframe1.style.height = '200px';
iframe1.style.width = '400px';
iframe1.src = 'iframe.html';

In this case the source for the iframe contents is an HTML page on the same host/domain but you could also define a complete URL pointing to another location.

Detecting when the iframe’s content has been loaded

Before you can access the contents of the iframe, you will have to wait for the iframe contents to be loaded (just like you should wait for the contents of your page to be fully loaded before accessing and manipulating them). This is done by defining an onload callback on the iframe:

iframe1.onload = function() {
    // your code here
}

Accessing the contents of the iframe

Once you’ve made sure that the iframe contents have been loaded, you can access its document using either the contentDocument property of the iframe object or the document property of the contentWindow property of the iframe. Of course, it’s just easier to use contentDocument. Unfortunately, it’s not supported by older versions of Internet Explorer, so to make sure that it works in all browsers, you should check whether the contentDocument property exists and, if not, fall back to contentWindow.document:

var frameDocument = iframe1.contentDocument ? iframe1.contentDocument : iframe1.contentWindow.document;
var title = frameDocument.getElementsByTagName("h1")[0];
alert(title.textContent);

Interactions between iframe and parent

Now that you can load content in the iframe, define how it should be displayed and access its content, you might also need to go one step further and access the parent document (or the iframe’s properties) from the iframe itself.

Accessing the parent document

Just like we accessed the contents of the iframe from a script in the parent page, we can do the opposite (currently ignoring cross-domain issues) by using the document property of the parent object:

var title = parent.document.getElementsByTagName("h1")[0];
alert(title.textContent);

Accessing the iframe properties from the iframe

If you have some logic based on the styles of the iframe tag in the parent page (e.g. its width or height), you can use window.frameElement which will point you to the containing iframe object:

var iframe = window.frameElement;
var width = iframe.style.width;
alert(width);

Calling a JavaScript function defined in the iframe

You can call JavaScript functions defined in the iframe (and bound to its window) by using the contentWindow property of the iframe object e.g.:

iframe1.contentWindow.showDialog();

Calling a JavaScript function defined in the parent

Similarly, you can call a JavaScript function defined in the parent window by using the window property of the parent object e.g.:

parent.window.showDialog2();

Same Origin Policy

The Same Origin Policy is an important concept when using JavaScript to interact with iframes. This is basically a security policy enforced by your browser and preventing documents originating from different domains to access each other’s properties and methods.

What’s the same origin?

Two documents have the same origin, if they have the same URI scheme/protocol (e.g. http, https…), the same host/domain (e.g. google.com) and the same port number (e.g. 80 or 443).

So documents loaded from:

  • http://google.com and https://google.com do not have the same origin since they have different URI schemes (http vs https)
  • https://benohead.com and https://benohead.com:8080 do not have the same origin since they have different port numbers (443 vs 8080)
  • https://benohead.com and https://www.benohead.com do not have the same origin since they have different hostnames (even if the document loaded from www.benohead.com would be the same if loaded from benohead.com)
  • http://kanban.benohead.com and https://benohead.com do not have the same origin since sub-domains also count as different domains/hosts

But documents loaded from URIs where other parts of the URI are different share the same origin e.g.:

  • https://benohead.com and https://benohead.com/path: folders are not part of the tuple identifying origins
  • https://benohead.com and https://user:password@benohead.com: username and password are not part of the tuple identifying origins
  • https://benohead.com and https://benohead.com/path?query: query parameters are not part of the tuple identifying origins
  • https://benohead.com and https://benohead.com/path#fragment: fragments are not part of the tuple identifying origins

Note that depending on your browser https://benohead.com and https://benohead.com:443 (explicitly stating the default port number) might or might not be considered the same origin.

Limitations when working with different origins

A page inside an iframe is not allowed to access or modify the DOM of its parent and vice versa unless both have the same origin. To put it a different way: a document or script loaded from one origin is prevented from getting or setting properties of a document from another origin.

Interacting cross-domain

Of course, in most cases using iframes makes sense when you want to include contents from other domains and not only when you want to include contents from the same domain. Fortunately, there are a few options for handling this depending on the exact level of cross-domain interaction which is required.

URL fragment hack

What you would have done 5 to 10 years ago is work around the limitation by using the fact that any window/iframe can set the URL of another one and that if you only change the fragment part of a URL (i.e. what’s after the hash sign #), the page doesn’t reload. So basically, this hack involves sending some data to another iframe/window by getting a reference to this iframe/window (which is always possible) and adding or changing a fragment in order to pass some data (effectively using the fragment as a data container and the URL change as a trigger event).

Using this hack comes with a few limitations:

  • This hack doesn’t seem to work anymore in some browsers (e.g. Safari and Opera) which will not allow child frame to change a parent frame’s location.
  • You’re limited to the possible size of fragment identifiers which depends on the browser limitation and on the size of the URL without fragment. So sending multiple kilobytes of data between iframes using this technique might prove difficult.
  • It may cause issues with the back button. But this is only a problem if you send a message to your parent window. If the communication only goes from your parent window to iframes or between iframes, then the URL changes won’t be visible and bookmarking and the back button will not be a problem.

So I won’t go into more details as how to implement this hack since there are much better ways to handle it nowadays.

window.name

window.name is another hack often used in the past in order to pass data from an iframe to the parent. Why window.name? Because window.name persists across page reloads and pages in other domains can read or change it.

Another advantage of window.name is that it’s very easy to use for storage:

window.name = '{ "id": 1, "name": "My name" }';

In order to use it for communicating with the parent window, you need to introduce some polling mechanism in the parent e.g.:

var oldName = iframe1.contentWindow.name;
var checkName = function() {
  if(iframe1.contentWindow.name != oldName) {
    alert("window name changed to "+iframe1.contentWindow.name);
    oldName = iframe1.contentWindow.name;
  }
}
setInterval(checkName, 1000);

This code will check every second whether the window.name on the iframe has changed and display it when it has.

You seem to be able to store up to 2 MB of data in window.name. But keep in mind that window.name was never actually designed for storing or exchanging data, so browser support could be dropped at any time.

Server side proxy

Since the same-origin policy is enforced by the browser, a natural solution to work around it is to access the remote site from your server and not from the browser. In order to implement it, you’ll need a proxy service on your site which forwards requests to the remote site. Of course, you’ll have to limit the use of the server-side proxy in order not to introduce an exploitable security hole.

A cheap implementation of such a mechanism could be to use the mod_rewrite or mod_proxy modules of the Apache web server to pass requests from your server to some other server.

document.domain

If all you’re trying to do is have documents coming from different subdomains interact, you can set the domain which will be used by the browser to check the origin in both documents using the following JavaScript code:

document.domain = "benohead.com";

You can only set the domain property of your documents to a suffix (i.e. parent domain) of the actual domain. So if you loaded your document from “kanban.benohead.com” you can set it to “benohead.com” but not to “google.com” or “hello.benohead.com” (although you wouldn’t need to set it to “hello.benohead.com” since you can set the domain to “benohead.com” for both windows/frames loaded from “kanban.benohead.com” and “hello.benohead.com”).
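
As a small illustration, assuming a parent page on kanban.benohead.com embedding an iframe served from hello.benohead.com, both documents would execute the same line and could then access each other again:

// In both the parent page and in the iframe document:
document.domain = "benohead.com";

// Afterwards, the parent can e.g. access the iframe's DOM again:
var frameDocument = iframe1.contentDocument ? iframe1.contentDocument : iframe1.contentWindow.document;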

JSON with Padding (JSONP)

Although not directly related to inter-domain and inter-frame communication, JSONP allows you to call a remote server and have it execute some JavaScript function defined on your side.

The basic idea behind JSONP is that the script tag bypasses the same-origin policy. So you can call a server using JSONP and provide a callback method, and the server will perform some logic and return a script which will call this callback method with some parameters. So basically, this doesn’t allow you to implement a push mechanism from the iframe (loaded from a different domain) but allows you to implement a pull mechanism with callbacks.
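
To illustrate the mechanism, here is a minimal sketch of a JSONP call (the endpoint URL and the name of the callback parameter are assumptions, they depend on the remote server):

// Define the callback which the returned script will call
function handleData(data) {
	alert("Received: " + JSON.stringify(data));
}

// "Call" the remote server by injecting a script tag containing the callback name
var script = document.createElement('script');
script.src = 'https://remote.example.com/api/data?callback=handleData';
document.head.appendChild(script);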

One of the main restrictions when using JSONP is that you are restricted to using GET requests.

Cross-Origin Resource Sharing (CORS)

CORS is a mechanism implemented as an extension of HTTP using additional headers in the HTTP requests and responses.

Except for simple scenarios where no extra step is required, in most cases enabling CORS means that an extra HTTP request is sent from the browser to the server:

  1. A preflight request is sent to query the CORS restrictions imposed by the server. The preflight request is required unless the request matches the following:
    • the request method is a simple method (i.e. GET, HEAD, or POST)
    • the only headers manually set are headers set automatically by the user agent (e.g. Connection and User-Agent) or one of the following: Accept, Accept-Language, Content-Language, Content-Type.
    • the Content-Type header is application/x-www-form-urlencoded, multipart/form-data or text/plain.
  2. The actual request is sent.

The preflight request is an OPTIONS request with an Origin HTTP header set to the domain that served the parent page. The response from the server is either an error page or an HTTP response containing an Access-Control-Allow-Origin header. The value of this header is either indicating which origin sites are allowed or a wildcard (i.e. “*”) that allows all domains.

Additional Request and Response Headers

The CORS specification defines 3 additional request headers and 6 additional response headers.

Request headers:

  • Origin defines where the CORS request comes from
  • Access-Control-Request-Method defines in the preflight request which request method will later be used in the actual request
  • Access-Control-Request-Headers defines in the preflight request which request headers will later be used in the actual request

Response headers:

  • Access-Control-Allow-Origin
  • Access-Control-Allow-Credentials
  • Access-Control-Expose-Headers
  • Access-Control-Max-Age
  • Access-Control-Allow-Methods
  • Access-Control-Allow-Headers

How does it work?

The basic CORS workflow with preflight requests looks like this:

The browser sends an HTTP OPTIONS request to the remote server with the origin of the page and the request method to be used. The remote server responds with the allowed origin and allowed methods headers. The browser then proceeds with the actual HTTP request. If you want to use some additional headers, an Access-Control-Request-Headers header will also be sent in the OPTIONS request and an Access-Control-Allow-Headers header will be returned in the response. You can then use these additional headers in the actual request.

CORS vs. JSONP

Although CORS is supported by most modern web browsers, JSONP works better with older browsers. JSONP only supports the GET request method, while CORS also supports other types of HTTP requests. CORS makes it easier to create a secure cross-domain environment (e.g. by allowing parsing of responses) while using JSONP can cause cross-site scripting (XSS) issues, in case the remote site is compromised. And using CORS makes it easier to provide good error handling on top of XMLHttpRequest.

Setting up CORS on the server

In order to allow CORS requests, you only have to configure the server to add the following header to its response:

Access-Control-Allow-Origin: *

Of course, instead of a star, you can also return a single origin (e.g. benohead.com) or use a wildcard in the origin (e.g. *.benohead.com). This header can also contain a space-separated list of origins. In practice, maintaining an exhaustive list of all allowed origins might be difficult, so in most cases you’ll either have a star, a single origin or a single origin and an origin containing a star, e.g. benohead.com *.benohead.com. If you want to support a specific list of origins, you’ll have to have the web server check whether the provided origin is in a given list of allowed origins and return this one origin in the response to the HTTP call.

And if the requests to the web servers will also contain credentials, you need to configure the web server to also return the following header:

Access-Control-Allow-Credentials: true

If you are expecting not only simple requests but also preflight requests (HTTP OPTIONS), you will also need to set the Access-Control-Allow-Methods header in the response to the browser. It only needs to contain the method requested in the Access-Control-Request-Method header of the request. But usually, the complete list of allowed methods is sent back, e.g.:

Access-Control-Allow-Methods: POST, GET, OPTIONS

Security and CORS

CORS in itself does not provide the means to secure your site. It just helps you define how browsers should handle access to cross-domain resources (i.e. cross-domain access). But since it relies on having the browser enforce the CORS policies, you need an additional security layer taking care of authentication and authorization.

In order to work with credentials, you have to set the withCredentials property to true on your XMLHttpRequest and the server needs to put an additional header in the response:

Access-Control-Allow-Credentials: true
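
On the client side, this means creating the XMLHttpRequest like this (the URL is just a placeholder):

var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://api.example.com/data', true);
xhr.withCredentials = true; // send cookies and HTTP authentication information
xhr.onload = function () {
	console.log(xhr.responseText);
};
xhr.send();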

HTML5 postMessage

Nowadays, the best solution for direct communication between a parent page and an iframe is using the postMessage method available with HTML5. Using postMessage, you can send a message from one side to the other. The message contains some data and an origin. The receiver can then implement different behaviors based on the origin (note that the browser will also check that the provided origin makes sense).

parent to iframe

In order to send from the parent to the iframe, the parent only has to call the postMessage function on the contentWindow of the iframe object:

iframe1.contentWindow.postMessage("hello", "http://127.0.0.1");

On the iframe side, you have a little bit more work. You need to define a handler function which will receive the message and register it as an event listener on the window object e.g.:

function displayMessage (evt) {
	alert("I got " + evt.data + " from " + evt.origin);
}

if (window.addEventListener) {
	window.addEventListener("message", displayMessage, false);
}
else {
	window.attachEvent("onmessage", displayMessage);
}

The parameter to the handler function is an event containing both the origin of the call and the data. Typically, you’d check whether you’re expecting a message from this origin and log or display an error if not.
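
For example, the handler above could be extended with such a check (the trusted origin is of course specific to your setup):

function displayMessage (evt) {
	if (evt.origin !== "http://127.0.0.1") {
		//Ignore (or log) messages coming from unexpected origins
		return;
	}
	alert("I got " + evt.data + " from " + evt.origin);
}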

iframe to parent

Sending messages in the other direction works in the same way. The only difference is that you call postMessage on parent.window:

parent.window.postMessage("hello", "http://127.0.0.1");

JavaScript: Detect click outside of an element

If you want to implement some logic when a user clicks outside of an element (e.g. closing a dropdown), the most used solution is to add a click listener on both the document and the element and prevent propagation in the click handler of the element, e.g.:

<html>
<head>
	<script src="https://code.jquery.com/jquery-2.1.4.min.js"></script>
	<script>
	$(document).on('click', function(event) {
		alert("outside");
	});
	$(document).ready(function() {
		$('#div3').on('click', function() {
			return false;
		});
	});
	</script>
</head>
<body>
	<div style="background-color:blue;width:100px;height:100px;" id="div1"></div>
	<div style="background-color:red;width:100px;height:100px;" id="div2"></div>
	<div style="background-color:green;width:100px;height:100px;" id="div3"></div>
	<div style="background-color:yellow;width:100px;height:100px;" id="div4"></div>
	<div style="background-color:grey;width:100px;height:100px;" id="div5"></div>
</body>
</html>

If you use other libraries which add their own click handlers, stopping the event propagation might break some of their functionality (see this article for more info). In order to implement this functionality in a way which doesn’t mess with the event propagation, you only need a click handler on the document which checks whether the click target is the element or one of its children, e.g. using the jQuery function closest:

<html>
<head>
	<script src="https://code.jquery.com/jquery-2.1.4.min.js"></script>
	<script>
	$(document).on('click', function(event) {
		if (!$(event.target).closest('#div3').length) {
			alert("outside");
		}
	});
	</script>
</head>
<body>
	<div style="background-color:blue;width:100px;height:100px;" id="div1"></div>
	<div style="background-color:red;width:100px;height:100px;" id="div2"></div>
	<div style="background-color:green;width:100px;height:100px;" id="div3"></div>
	<div style="background-color:yellow;width:100px;height:100px;" id="div4"></div>
	<div style="background-color:grey;width:100px;height:100px;" id="div5"></div>
</body>
</html>

If you want to avoid jQuery and implement it in a pure JavaScript way with no additional dependencies, you can use addEventListener to add an event handler on the document and implement your own function to replace closest, e.g.:

<html>
<head>
	<script>
	function findClosest (element, fn) {
		if (!element) return undefined;
		return fn(element) ? element : findClosest(element.parentElement, fn);
	}
	document.addEventListener("click", function(event) {
		var target = findClosest(event.target, function(el) {
			return el.id == 'div3'
		});
		if (!target) {
			alert("outside");
		}
	}, false);
	</script>
</head>
<body>
	<div style="background-color:blue;width:100px;height:100px;" id="div1"></div>
	<div style="background-color:red;width:100px;height:100px;" id="div2"></div>
	<div style="background-color:green;width:100px;height:100px;" id="div3">
		<div style="background-color:pink;width:50px;height:50px;" id="div6"></div>
	</div>
	<div style="background-color:yellow;width:100px;height:100px;" id="div4"></div>
	<div style="background-color:grey;width:100px;height:100px;" id="div5"></div>
</body>
</html>

The findClosest function just checks whether the provided function returns true when applied to the element and, if not, recursively calls itself with the parent of the element as parameter until there is no parent left.

If instead of using the element id, you want to apply this to all elements having a given class, you can use this function as second argument when calling findClosest:

function(el) {
	return (" " + el.className + " ").replace(/[\n\t\r]/g, " ").indexOf(" someClass ") > -1
}
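
Putting it together, the click handler would then look like this (someClass being whatever class your element uses):

document.addEventListener("click", function(event) {
	var target = findClosest(event.target, function(el) {
		return (" " + el.className + " ").replace(/[\n\t\r]/g, " ").indexOf(" someClass ") > -1;
	});
	if (!target) {
		alert("outside");
	}
}, false);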