Failed to initialize the PowerShell host while installing packages

While trying to install the NEST NuGet package, I got the following error when the JSON.NET post-install PowerShell script was executed:

Failed to initialize the PowerShell host. If your PowerShell execution policy setting is set to AllSigned, open the Package Manager Console to initialize the host first.

I then tried to update the execution policy by executing the following in a PowerShell opened as Administrator:

start-job { Set-ExecutionPolicy Unrestricted } -RunAs32 | wait-job | Receive-Job

Unfortunately, this failed with the following error message:

Windows PowerShell updated your execution policy successfully, but the setting is overridden by a policy defined at a
more specific scope. Due to the override, your shell will retain its current effective execution policy of
RemoteSigned. Type “Get-ExecutionPolicy -List” to view your execution policy settings. For more information please see
“Get-Help Set-ExecutionPolicy”.
+ CategoryInfo : PermissionDenied: (:) [Set-ExecutionPolicy], SecurityException
+ FullyQualifiedErrorId : ExecutionPolicyOverride,Microsoft.PowerShell.Commands.SetExecutionPolicyCommand

I then tried to set it to RemoteSigned instead of Unrestricted:

start-job { Set-ExecutionPolicy RemoteSigned } -RunAs32 | wait-job | Receive-Job

This didn’t cause any error but even after restarting Visual Studio I was not able to install JSON.NET.

The only thing that worked was reinstalling the NuGet Package Manager for Visual Studio:

  1. In Tools -> Extensions and Updates, uninstall NuGet Package Manager for Visual Studio
  2. Restart Visual Studio
  3. In Tools -> Extensions and Updates, install NuGet Package Manager for Visual Studio
  4. Restart Visual Studio

After that, I could install NEST including JSON.NET!

 

WP Advertize It 1.0

One of my first plugins was WP Advertize It, which allows you to define HTML/JavaScript code blocks and embed them at different locations in your site. The plugin itself is not limited to displaying ads, but the reason why I created it was to embed ads in my site (so that it can pay for its maintenance costs by itself). I’ve been using this plugin on many of my sites over the years.

We’ve now released a new version of the plugin which, among other new functionality, comes with an AngularJS-based settings page. This makes the settings UI more responsive and also made it easier to build a more modern UI. Of course, one of the goals of using AngularJS was to check how well it can integrate in a WordPress context, but it also comes with some tangible benefits.

If you’re interested in checking how this works, you can download the plugin from the WordPress plugin repository and have a look at the code!

 

Cross-document communication with iframes

Using iframes (inline frames) is often considered bad practice since it can hurt you from an SEO point of view (the contents of iframes will not be indexed by search engines). But whenever you have an application which doesn’t require indexing of contents (e.g. because the content is only visible after the user has been authenticated and authorized), or you need to embed content from other web sites/apps, iframes provide a nice mechanism to include content in your app while ensuring that this doesn’t cause any major security issues.

Please refer to the MDN documentation, which contains a good description of iframes and a few examples.

Accessing an iframe and its content

The first step when building with iframes is of course to define an iframe tag in your HTML code, which defines where in the DOM the external resource will be displayed:

<iframe id="iframe1"></iframe>

Now that you have added this tag to your HTML code, you will most probably want to access it with JavaScript to set a URL to be loaded, define how the iframe contents should be displayed (e.g. the width and height of the iframe) and maybe access some of the DOM elements in the iframe. This section shows how this can be done.

Please keep in mind that things are relatively easy when working with iframes whose contents are loaded from the same host/domain. If you work with contents from other hosts/domains, you’ll need to have a look at the next sections as well.

Setting the URL and styles of an iframe

To set which contents should be loaded in the iframe, you just set the src property of the iframe object. Styling it can be done using the style property. Here’s a short example:

var iframe1 = document.getElementById('iframe1');
iframe1.style.height = '200px';
iframe1.style.width = '400px';
iframe1.src = 'iframe.html';

In this case the source for the iframe contents is an HTML page on the same host/domain but you could also define a complete URL pointing to another location.

Detecting when the iframe’s content has been loaded

Before you can access the contents of the iframe, you will have to wait for the iframe contents to be loaded (just like you should wait for the contents of your page to be fully loaded before accessing and manipulating them). This is done by defining an onload callback on the iframe:

iframe1.onload = function() {
    // your code here
}

Accessing the contents of the iframe

Once you’ve made sure that the iframe contents have been loaded, you can access its document using either the contentDocument property of the iframe object or the document property of the iframe’s contentWindow. Of course, it’s just easier to use contentDocument. Unfortunately, it’s not supported by older versions of Internet Explorer, so to make sure that your code works in all browsers, you should check whether the contentDocument property exists and, if not, fall back to contentWindow.document:

var frameDocument = iframe1.contentDocument ? iframe1.contentDocument : iframe1.contentWindow.document;
var title = frameDocument.getElementsByTagName("h1")[0];
alert(title.textContent);

Interactions between iframe and parent

Now that you can load content in the iframe, define how it should be displayed and access its content, you might also need to go one step further and access the parent document (or the iframe’s properties) from the iframe itself.

Accessing the parent document

Just like we accessed the contents of the iframe from a script in the parent page, we can do the opposite (currently ignoring cross-domain issues) by using the document property of the parent object:

var title = parent.document.getElementsByTagName("h1")[0];
alert(title.textContent);

Accessing the iframe properties from the iframe

If you have some logic based on the styles of the iframe tag in the parent page (e.g. its width or height), you can use window.frameElement which will point you to the containing iframe object:

var iframe = window.frameElement;
var width = iframe.style.width;
alert(width);

Calling a JavaScript function defined in the iframe

You can call JavaScript functions defined in the iframe (and bound to its window) by using the contentWindow property of the iframe object e.g.:

iframe1.contentWindow.showDialog();

Calling a JavaScript function defined in the parent

Similarly, you can call a JavaScript function defined in the parent window by using the window property of the parent object e.g.:

parent.window.showDialog2();

Same Origin Policy

The Same Origin Policy is an important concept when using JavaScript to interact with iframes. It is basically a security policy enforced by your browser, preventing documents originating from different domains from accessing each other’s properties and methods.

What’s the same origin?

Two documents have the same origin if they have the same URI scheme/protocol (e.g. http, https…), the same host/domain (e.g. google.com) and the same port number (e.g. 80 or 443).

So documents loaded from:

  • http://google.com and https://google.com do not have the same origin since they have different URI schemes (http vs https)
  • https://benohead.com and https://benohead.com:8080 do not have the same origin since they have different port numbers (443 vs 8080)
  • https://benohead.com and https://www.benohead.com do not have the same origin since they have different hostnames (even if the document loaded from www.benohead.com would be the same if loaded from benohead.com)
  • http://kanban.benohead.com and https://benohead.com do not have the same origin since sub-domains also count as different domains/hosts

But documents loaded from URIs where other parts of the URI are different share the same origin e.g.:

  • https://benohead.com and https://benohead.com/path: folders are not part of the tuple identifying origins
  • https://benohead.com and https://user:password@benohead.com: username and password are not part of the tuple identifying origins
  • https://benohead.com and https://benohead.com/path?query: query parameters are not part of the tuple identifying origins
  • https://benohead.com and https://benohead.com/path#fragment: fragments are not part of the tuple identifying origins

Note that depending on your browser https://benohead.com and https://benohead.com:80 (explicitly stating the port number) might or might not be considered the same origin.
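The rules above can be expressed as a small helper which builds the (scheme, host, port) tuple and compares it. This is a sketch for illustration (origin and sameOrigin are not browser APIs), relying on the WHATWG URL class available in modern browsers and Node.js:

```javascript
// Build a normalized "scheme//host:port" string for a URL; u.port is empty
// for default ports, so we fill it in explicitly before comparing.
function origin(urlString) {
    var u = new URL(urlString);
    var port = u.port || (u.protocol === "https:" ? "443" : "80");
    return u.protocol + "//" + u.hostname + ":" + port;
}

function sameOrigin(a, b) {
    return origin(a) === origin(b);
}
```

For example, sameOrigin("https://benohead.com", "https://benohead.com/path?query#fragment") is true, while sameOrigin("https://benohead.com", "https://benohead.com:8080") is false.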

Limitations when working with different origins

A page inside an iframe is not allowed to access or modify the DOM of its parent and vice-versa unless both have the same origin. So putting it in a different way: document or script loaded from one origin is prevented from getting or setting properties of a document from another origin.

Interacting cross-domain

Of course, in most cases using iframes makes sense when you want to include contents from other domains and not only when you want to include contents from the same domain. Fortunately, there are a few options for handling this depending on the exact level of cross-domain interaction which is required.

URL fragment hack

What you would have done 5 to 10 years ago is workaround the limitation by using the fact that any window/iframe can set the URL of another one and that if you only change the fragment part of a URL (e.g. what’s after the hash sign #), the page doesn’t reload. So basically, this hack involves sending some data to another iframe/window, by getting a reference to this iframe/window (which is always possible), adding a fragment (or changing it) in order to pass some data (effectively using the fragment as a data container and setting the URL as a trigger event).

Using this hack comes with a few main limitations:

  • This hack doesn’t seem to work anymore in some browsers (e.g. Safari and Opera) which will not allow a child frame to change a parent frame’s location.
  • You’re limited to the possible size of fragment identifiers, which depends on browser limitations and on the size of the URL without fragment. So sending multiple kilobytes of data between iframes using this technique might prove difficult.
  • It may cause issues with the back button and bookmarking. But this is only a problem if you send messages to your parent window; if the communication only goes from the parent window to iframes or between iframes, the URL displayed to the user doesn’t change, so bookmarking and the back button are not a problem.

So I won’t go into more detail on how to implement this hack since there are much better ways to handle this nowadays.

window.name

window.name is another hack that was often used in the past to pass data from an iframe to the parent. Why window.name? Because window.name persists across page reloads, and pages from other domains can read or change it.

Another advantage of window.name is that it’s very easy to use for storage:

window.name = '{ "id": 1, "name": "My name" }';

In order to use it for communicating with the parent window, you need to introduce some polling mechanism in the parent e.g.:

var oldName = iframe1.contentWindow.name;
var checkName = function() {
  if(iframe1.contentWindow.name != oldName) {
    alert("window name changed to "+iframe1.contentWindow.name);
    oldName = iframe1.contentWindow.name;
  }
}
setInterval(checkName, 1000);

This code will check every second whether the window.name on the iframe has changed and display it when it has.
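Since window.name only holds a string, structured data has to be serialized, e.g. as JSON. Here is a minimal sketch of such a round trip, using a plain object (fakeWindow, made up for this example) as a stand-in for the real window object:

```javascript
// window.name can only store strings, so structured data is serialized as JSON.
// fakeWindow stands in for window / iframe1.contentWindow in this sketch.
var fakeWindow = { name: "" };

function storeInName(win, data) {
    win.name = JSON.stringify(data);
}

function readFromName(win) {
    return JSON.parse(win.name);
}
```

The polling loop above would then call readFromName instead of comparing raw strings.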

You seem to be able to store up to 2MB of data in window.name. But keep in mind that window.name was never actually designed for storing or exchanging data, so browser support for this behavior could be dropped at any time.

Server side proxy

Since the same-origin policy is enforced by the browser, a natural solution to work around it is to access the remote site from your server instead of from the browser. In order to implement this, you’ll need a proxy service on your site which forwards requests to the remote site. Of course, you’ll have to limit the use of the server-side proxy so as not to introduce an exploitable security hole.

A cheap implementation of such a mechanism could be to use the mod_rewrite or mod_proxy modules of the Apache web server to pass requests from your server to some other server.
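As a sketch of that approach, a hypothetical Apache configuration fragment (assuming mod_proxy and mod_proxy_http are enabled; the path and remote host are made-up examples) could look like this:

```apache
# Forward /remote/ on our server to the remote site so that, from the
# browser's point of view, everything comes from a single origin.
ProxyPass        /remote/ https://remote.example.com/
ProxyPassReverse /remote/ https://remote.example.com/
```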

document.domain

If all you’re trying to do is have documents coming from different subdomains interact, you can set the domain which will be used by the browser to check the origin in both documents using the following JavaScript code:

document.domain = "benohead.com";

You can only set the domain property of your documents to a suffix (i.e. parent domain) of the actual domain. So if you loaded your document from “kanban.benohead.com” you can set it to “benohead.com” but not to “google.com” or “hello.benohead.com” (although you wouldn’t need to set it to “hello.benohead.com” since you can set the domain to “benohead.com” for both windows/frames loaded from “kanban.benohead.com” and “hello.benohead.com”).
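The suffix rule described above can be sketched as a small helper (canSetDomain is a hypothetical function for illustration, not a browser API):

```javascript
// A page loaded from currentHost may set document.domain either to
// currentHost itself or to a parent domain of it (a dot-separated suffix).
function canSetDomain(currentHost, newDomain) {
    if (currentHost === newDomain) return true;
    return currentHost.length > newDomain.length &&
        currentHost.charAt(currentHost.length - newDomain.length - 1) === "." &&
        currentHost.slice(-newDomain.length) === newDomain;
}
```

So canSetDomain("kanban.benohead.com", "benohead.com") is true, while canSetDomain("kanban.benohead.com", "hello.benohead.com") is false.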

JSON with Padding (JSONP)

Although not directly related to inter-domain and inter-frame communication, JSONP allows you to call a remote server and have it execute a JavaScript function defined on your side.

The basic idea behind JSONP is that the script tag bypasses the same-origin policy. So you can call a server using JSONP and provide a callback method, and the server will perform some logic and return a script which calls this callback method with some parameters. So basically, this doesn’t allow you to implement a push mechanism from the iframe (loaded from a different domain) but it does allow you to implement a pull mechanism with callbacks.

One of the main restrictions when using JSONP is that you are restricted to using GET requests.
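To illustrate the “padding” part, here is a sketch of what a JSONP-enabled endpoint returns (jsonpResponse and the callback name handleResult are made up for this example): instead of plain JSON, the payload is wrapped in a call to the callback name supplied by the client, e.g. via a ?callback=handleResult query parameter:

```javascript
// The server wraps ("pads") the JSON payload in a call to the
// client-supplied callback, producing a script the browser executes.
function jsonpResponse(callbackName, payload) {
    return callbackName + "(" + JSON.stringify(payload) + ");";
}
```

The browser then executes the returned script, which invokes the locally defined handleResult function with the data.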

Cross-Origin Resource Sharing (CORS)

CORS is a mechanism implemented as an extension of HTTP using additional headers in the HTTP requests and responses.

Except for simple scenarios where no extra step is required, in most cases enabling CORS means that an extra HTTP request is sent from the browser to the server:

  1. A preflight request is sent to query the CORS restrictions imposed by the server. The preflight request is required unless the request matches all of the following:
    • the request method is a simple method (i.e. GET, HEAD, or POST)
    • apart from headers set automatically by the user agent (e.g. Connection and User-Agent), the only manually set headers are among the following: Accept, Accept-Language, Content-Language, Content-Type
    • the Content-Type header is application/x-www-form-urlencoded, multipart/form-data or text/plain
  2. The actual request is sent.

The preflight request is an OPTIONS request with an Origin HTTP header set to the domain that served the parent page. The response from the server is either an error page or an HTTP response containing an Access-Control-Allow-Origin header. The value of this header either indicates which origin site is allowed or is a wildcard (i.e. “*”) allowing all domains.
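As a hedged illustration, such an exchange might look like this (the domain, path and header values are made up for the example):

```http
OPTIONS /api/data HTTP/1.1
Origin: https://benohead.com
Access-Control-Request-Method: POST
Access-Control-Request-Headers: Content-Type

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://benohead.com
Access-Control-Allow-Methods: POST, GET, OPTIONS
Access-Control-Allow-Headers: Content-Type
```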

Additional Request and Response Headers

The CORS specification defines 3 additional request headers and 6 additional response headers.

Request headers:

  • Origin defines where the CORS request comes from
  • Access-Control-Request-Method defines in the preflight request which request method will later be used in the actual request
  • Access-Control-Request-Headers defines in the preflight request which request headers will later be used in the actual request

Response headers:

  • Access-Control-Allow-Origin
  • Access-Control-Allow-Credentials
  • Access-Control-Expose-Headers
  • Access-Control-Max-Age
  • Access-Control-Allow-Methods
  • Access-Control-Allow-Headers

How does it work?

The basic CORS workflow with preflight requests looks like this:

[Figure: CORS workflow with preflight request]

The browser sends an HTTP OPTIONS request to the remote server with the origin of the page and the request method to be used. The remote server responds with allowed origin and allowed methods headers. The browser then proceeds with the actual HTTP request. If you want to use some additional headers, an Access-Control-Request-Headers header will also be sent in the OPTIONS request and an Access-Control-Allow-Headers header will be returned in the response. You can then use these additional headers in the actual request.

CORS vs. JSONP

Although CORS is supported by most modern web browsers, JSONP works better with older browsers. JSONP only supports the GET request method, while CORS also supports other types of HTTP requests. CORS makes it easier to create a secure cross-domain environment (e.g. by allowing parsing of responses) while using JSONP can cause cross-site scripting (XSS) issues, in case the remote site is compromised. And using CORS makes it easier to provide good error handling on top of XMLHttpRequest.

Setting up CORS on the server

In order to allow CORS requests, you only have to configure the server to add the following header to its response:

Access-Control-Allow-Origin: *

Of course, instead of a star, you can also return a single origin (e.g. https://benohead.com). Note that although you might expect to be able to return a space separated list of origins or a partial wildcard (e.g. *.benohead.com), browsers effectively only honor a single origin or the wildcard “*” in this header. So if you want to support a specific list of origins, you’ll have to have the web server check whether the origin provided in the request is in the list of allowed origins and return that single origin in the response to the HTTP call.
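This server-side check can be sketched as follows (allowOriginHeader and the origin values are made-up examples; since browsers only honor a single origin or “*” in the header, the matching origin is echoed back rather than the whole list):

```javascript
// Pick the Access-Control-Allow-Origin value for a response, given the
// request's Origin header and the server's list of allowed origins.
// Returns null when the origin is not allowed (no header should be sent).
function allowOriginHeader(requestOrigin, allowedOrigins) {
    if (allowedOrigins.indexOf("*") !== -1) return "*";
    return allowedOrigins.indexOf(requestOrigin) !== -1 ? requestOrigin : null;
}
```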

And if the requests to the web servers will also contain credentials, you need to configure the web server to also return the following header:

Access-Control-Allow-Credentials: true

If you are expecting not only simple requests but also preflight requests (HTTP OPTIONS), you will also need to set the Access-Control-Allow-Methods header in the response to the browser. It only needs to contain the method requested in the Access-Control-Request-Method header of the request. But usually, the complete list of allowed methods is sent back e.g.:

Access-Control-Allow-Methods: POST, GET, OPTIONS

Security and CORS

CORS in itself does not provide the means to secure your site. It just helps you define how browsers should handle access to cross-domain resources (i.e. cross-domain access). But since it relies on the browser enforcing the CORS policies, you need an additional security layer taking care of authentication and authorization.

In order to work with credentials, you have to set the withCredentials property to true on your XMLHttpRequest, and the server needs to put an additional header in the response:

Access-Control-Allow-Credentials: true

HTML5 postMessage

Nowadays, the best solution for direct communication between a parent page and an iframe is using the postMessage method available with HTML5. Using postMessage, you can send a message from one side to the other. The message contains some data and an origin. The receiver can then implement different behaviors based on the origin (also note that the browser will also check that the provided origin makes sense).

parent to iframe

In order to send from the parent to the iframe, the parent only has to call the postMessage function on the contentWindow of the iframe object:

iframe1.contentWindow.postMessage("hello", "http://127.0.0.1");

On the iframe side, you have a little bit more work. You need to define a handler function which will receive the message and register it as an event listener on the window object e.g.:

function displayMessage (evt) {
	alert("I got " + evt.data + " from " + evt.origin);
}

if (window.addEventListener) {
	window.addEventListener("message", displayMessage, false);
}
else {
	window.attachEvent("onmessage", displayMessage);
}

The parameter to the handler function is an event containing both the origin of the call and the data. Typically, you’d check whether you’re expecting a message from this origin and log or display an error if not.
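A common pattern for that check is a small allowlist; the following sketch (the origins and function names are examples, not part of the postMessage API) drops messages from unexpected origins instead of trusting event.data blindly:

```javascript
// Hypothetical allowlist of origins we accept messages from.
var ALLOWED_ORIGINS = ["https://benohead.com", "http://127.0.0.1"];

function isAllowedOrigin(origin) {
    return ALLOWED_ORIGINS.indexOf(origin) !== -1;
}

function safeDisplayMessage(evt) {
    if (!isAllowedOrigin(evt.origin)) {
        return; // ignore messages from unexpected origins
    }
    // handle evt.data here
}
```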

iframe to parent

Sending messages in the other direction works in the same way. The only difference is that you call postMessage on parent.window:

parent.window.postMessage("hello", "http://127.0.0.1");

JavaScript: Detect click outside of an element

If you want to implement some logic when a user clicks outside of an element (e.g. closing a dropdown), the most used solution is to add a click listener on both the document and the element and stop propagation in the click handler of the element e.g.:

<html>
<head>
	<script src="https://code.jquery.com/jquery-2.1.4.min.js"></script>
	<script>
	$(document).on('click', function(event) {
		alert("outside");
	});
	$(document).ready(function() {
		$('#div3').on('click', function() {
			return false;
		});
	});
	</script>
</head>
<body>
	<div style="background-color:blue;width:100px;height:100px;" id="div1"></div>
	<div style="background-color:red;width:100px;height:100px;" id="div2"></div>
	<div style="background-color:green;width:100px;height:100px;" id="div3"></div>
	<div style="background-color:yellow;width:100px;height:100px;" id="div4"></div>
	<div style="background-color:grey;width:100px;height:100px;" id="div5"></div>
</body>
</html>

If you use other libraries which add their own click handlers, stopping event propagation might break some of their functionality (see this article for more info). In order to implement this functionality in a way which doesn’t mess with event propagation, you only need a click handler on the document which checks whether the click target is the element or one of its children, e.g. using the jQuery function closest:

<html>
<head>
	<script src="https://code.jquery.com/jquery-2.1.4.min.js"></script>
	<script>
	$(document).on('click', function(event) {
		if (!$(event.target).closest('#div3').length) {
			alert("outside");
		}
	});
	</script>
</head>
<body>
	<div style="background-color:blue;width:100px;height:100px;" id="div1"></div>
	<div style="background-color:red;width:100px;height:100px;" id="div2"></div>
	<div style="background-color:green;width:100px;height:100px;" id="div3"></div>
	<div style="background-color:yellow;width:100px;height:100px;" id="div4"></div>
	<div style="background-color:grey;width:100px;height:100px;" id="div5"></div>
</body>
</html>

If you want to avoid jQuery and implement it in pure JavaScript with no additional dependencies, you can use addEventListener to add an event handler on the document and implement your own function to replace closest e.g.:

<html>
<head>
	<script>
	function findClosest (element, fn) {
		if (!element) return undefined;
		return fn(element) ? element : findClosest(element.parentElement, fn);
	}
	document.addEventListener("click", function(event) {
		var target = findClosest(event.target, function(el) {
			return el.id == 'div3'
		});
		if (!target) {
			alert("outside");
		}
	}, false);
	</script>
</head>
<body>
	<div style="background-color:blue;width:100px;height:100px;" id="div1"></div>
	<div style="background-color:red;width:100px;height:100px;" id="div2"></div>
	<div style="background-color:green;width:100px;height:100px;" id="div3">
		<div style="background-color:pink;width:50px;height:50px;" id="div6"></div>
	</div>
	<div style="background-color:yellow;width:100px;height:100px;" id="div4"></div>
	<div style="background-color:grey;width:100px;height:100px;" id="div5"></div>
</body>
</html>

The findClosest function just checks whether the provided function returns true when applied to the element and, if not, recursively calls itself with the parent of the element as parameter until there is no parent anymore.

If instead of using the element id, you want to apply this to all elements having a given class, you can use this function as second argument when calling findClosest:

function(el) {
	return (" " + el.className + " ").replace(/[\n\t\r]/g, " ").indexOf(" someClass ") > -1
}
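Since findClosest only follows the parentElement chain, it can also be exercised outside the browser with plain objects standing in for DOM nodes; here is a small sketch (the mock objects are made up for illustration):

```javascript
// findClosest walks up the parent chain until fn(element) is true
// or the chain ends (in which case it returns undefined).
function findClosest(element, fn) {
    if (!element) return undefined;
    return fn(element) ? element : findClosest(element.parentElement, fn);
}

// Plain objects mimicking a small DOM tree: div6 is nested inside div3.
var div3 = { id: "div3", parentElement: null };
var div6 = { id: "div6", parentElement: div3 };

// Helper producing a matcher for a given id.
var byId = function (id) { return function (el) { return el.id === id; }; };
```

With these mocks, findClosest(div6, byId("div3")) returns div3, while findClosest(div6, byId("div4")) returns undefined.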

 

Error Emitted => SELF_SIGNED_CERT_IN_CHAIN

While using elasticdump to dump an elasticsearch index to a JSON file, I got the following error message:

Error Emitted => SELF_SIGNED_CERT_IN_CHAIN

This basically means that we are accessing elasticsearch over an HTTPS connection and the certificate it returns is self-signed (and thus cannot be verified).

Googling for this issue, I mostly found questions from people who got this error message when using NPM. But all answers were basically aimed at making it work in NPM by disabling the strict SSL rule in the NPM config (npm config set strict-ssl false), setting an HTTP URL as HTTPS proxy (npm config set https-proxy "http://:8080"), using an HTTP URL for the registry (npm config set registry="http://registry.npmjs.org/") or having npm use known certificate authorities (npm config set ca="").

But none of this could help me since my issue was not with NPM but with another Node application. The only thing I eventually found was to set an environment variable so that Node.js would not reject self-signed certificates:

export NODE_TLS_REJECT_UNAUTHORIZED=0

After that, elasticdump was working fine. But keep in mind that using this method to temporarily run a Node.js tool is fine, while using this setting in production is not a good idea.
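Note that rather than exporting the variable for the whole shell session, you can scope it to a single command; here a generic sh invocation stands in for the actual elasticdump call:

```shell
# The variable only applies to this one process, not to the whole session.
NODE_TLS_REJECT_UNAUTHORIZED=0 sh -c 'echo "$NODE_TLS_REJECT_UNAUTHORIZED"'
```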

Update for Windows users:

On Windows use the following:

set NODE_TLS_REJECT_UNAUTHORIZED=0

 

C#: Understanding CLOSE_WAIT and FIN_WAIT_2

Here’s a short C# program which can be used to better understand what the TCP states CLOSE_WAIT and FIN_WAIT_2 are and why you sometimes see connections stuck in these states:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.NetworkInformation;
using System.Net.Sockets;

namespace TcpTester
{
    internal static class Program
    {
        private const int Port = 15000;
        private const string Hostname = "127.0.0.1";

        private static void Main(string[] args)
        {
            if (args.Length > 0 && args[0] == "client")
            {
                // Started in client mode
                var tcpClient = new TcpClient();
                tcpClient.Connect(Hostname, Port);
                Console.WriteLine("Connected to {0}:{1}", Hostname, Port);
                PrintState();
                Console.WriteLine("Press any key to close the connection from this side.");
                Console.ReadKey();
                tcpClient.Close();
                PrintState();
            }
            else
            {
                // Started in server mode
                var tcpListener = new TcpListener(IPAddress.Parse(Hostname), Port);
                tcpListener.Start();
                Console.WriteLine("Listening on {0}:{1}", Hostname, Port);
                TcpClient tcpClient = tcpListener.AcceptTcpClient();
                tcpListener.Stop();
                Console.WriteLine("Client connected on {0}:{1}", Hostname, Port);
                PrintState();
                Console.WriteLine("Press any key to close the connection from this side.");
                Console.ReadKey();
                tcpClient.Close();
                PrintState();
            }
        }

        private static void PrintState()
        {
            IEnumerable<TcpConnectionInformation> activeTcpConnections =
                IPGlobalProperties.GetIPGlobalProperties().GetActiveTcpConnections()
                    .Where(c => c.LocalEndPoint.Port == Port || c.RemoteEndPoint.Port == Port);
            foreach (TcpConnectionInformation connection in activeTcpConnections)
            {
                Console.WriteLine("{0} {1} {2}", connection.LocalEndPoint, connection.RemoteEndPoint, connection.State);
            }
        }
    }
}

You can start the program without parameters to start a server, and with the parameter “client” to start a client (I guess it was kind of obvious…).

The server listens on 127.0.0.1:15000 and the client connects to it. First start the server. The following will be written to the console:

Listening on 127.0.0.1:15000

Then start the client in another window. The following will appear in the client window:

Connected to 127.0.0.1:15000
127.0.0.1:15000 127.0.0.1:57663 Established
127.0.0.1:57663 127.0.0.1:15000 Established

This tells you that the client is connected from port 57663 (this will change every time you run this test) to port 15000 (where the server is listening).

In the server window, you will see that it got a client connection and the same information regarding port and connection states.

Then press any key on the server console and the following will be displayed:

127.0.0.1:15000 127.0.0.1:57663 FinWait2
127.0.0.1:57663 127.0.0.1:15000 CloseWait

So once the server closed the connection, the connection on the server side went to FIN_WAIT_2 and the one on the client side went to CLOSE_WAIT.

Then press any key in the client console to get the following displayed:

127.0.0.1:15000 127.0.0.1:57663 TimeWait

The connection will stay in TIME_WAIT state for some time. If you really wait a long time before pressing a key in the client console, this last line will not be displayed at all.

So, this should make it easier to understand what the TCP states CLOSE_WAIT and FIN_WAIT_2 are: when the connection has been closed locally but not yet remotely, the local connection is in the state FIN_WAIT_2 and the remote one in CLOSE_WAIT.

For more details about the different TCP states, please refer to TCP: About FIN_WAIT_2, TIME_WAIT and CLOSE_WAIT.

OWIN: Serving static files from an external directory

I am working on an application with a self-hosted OWIN server where the UI is running in an embedded browser and the backend part of the application is implemented using WebApi. When I generate files in the backend, I store them in a subfolder of the application (called “uploads”) and configure my application so that files from this folder are served statically:

appBuilder.UseStaticFiles("/uploads");

It all worked fine until an installer was created for the application which installed it in C:\Program Files. Unfortunately, the application is then not able to write to the uploads subfolder, which broke this kind of functionality. Obviously, the solution is to be a good Windows citizen and store files created by the application in the LocalAppData directory, i.e. instead of using:

Path.Combine(AppDomain.CurrentDomain.SetupInformation.ApplicationBase, "uploads")

use:

Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), @"MyCompany\uploads")

This solves the issue of writing to the folder. All that is now missing is to tell OWIN to serve files from this folder when the “/uploads” virtual path is accessed:

var staticFilesOptions = new StaticFileOptions();
staticFilesOptions.RequestPath = new PathString("/uploads");
var uploadPath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), @"MyCompany\uploads");
Directory.CreateDirectory(uploadPath);
staticFilesOptions.FileSystem = new PhysicalFileSystem(uploadPath);
appBuilder.UseStaticFiles(staticFilesOptions);

Note that the folder needs to exist before you use UseStaticFiles hence the Directory.CreateDirectory call.

Asynchronously pre-loading scripts with AngularJS and RequireJS

As your web application grows, the number and size of JavaScript files you will have to load grows as well. If you are using AngularJS and RequireJS, you might well reach a level where the initial loading of such resources takes so long that you need to start looking into better ways to handle loading these dependencies.

Initial Loading

Your starting point when working with AngularJS is that everything is loaded in the beginning:

initial loading

The advantages of this approach are that it’s very easy to handle and that switching to the second or third view is extremely fast, as everything was already loaded upfront.

Lazy loading

As described in this article, lazy loading can help reduce the initial loading time by loading resources on demand when the user moves to another view:

lazy loading

So with this approach, the time required to switch from the initial view to another view increases (because some resources now need to be loaded first) but the initial load time decreases.

Asynchronous pre-loading

In order to get a low initial loading time without any loading delay for the second and third views, you need an asynchronous solution. It relies on two facts: files which have already been loaded will not be loaded again, and the user usually doesn’t switch to the second or third view immediately.

The delay introduced by the user could be because he needs to enter some credentials on a login page. Or the initial view is some kind of dashboard and the user will first review all displayed information before moving to more detailed views.

asynchronous preloading

When the initial view is displayed, we require the scripts needed for the other two views so that they are loaded asynchronously. While the user is interacting with the initial view, the scripts are loaded in the background, and once the user activates one of the other views, the scripts do not need to be loaded again.
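
The “already loaded files are not loaded again” guarantee is what makes this pattern cheap. As a self-contained illustration (plain JavaScript, no RequireJS; loadOnce and fakeLoad are made-up names standing in for the loader), a minimal module cache behaves like this:

```javascript
// Minimal sketch of the "load once" behavior behind background pre-loading:
// the first request for a module triggers a fetch, later requests reuse
// the cached promise instead of fetching again.
var cache = {};
var fetchCount = 0;

function fakeLoad(name) {
    // Stand-in for the actual script download.
    fetchCount++;
    return Promise.resolve(name + ' loaded');
}

function loadOnce(name) {
    if (!cache[name]) {
        cache[name] = fakeLoad(name);
    }
    return cache[name];
}

// Pre-load "reports" in the background while the initial view is shown...
loadOnce('reports');
// ...then "activate" the reports view later: no second fetch happens.
loadOnce('reports').then(function (result) {
    console.log(result + ' (fetched ' + fetchCount + ' time(s))');
});
```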

Asynchronous pre-loading with AngularJS and RequireJS

In order to load JavaScript files asynchronously in the background, we’ll need to use the async version of require:

require(["module_name"])

There are basically 3 places where you could trigger this:

  1. Immediately
  2. In the resolve function of your route definition
  3. In the require callback of your view

The problem with the first option is that, since it triggers the asynchronous background loading of the files immediately, these load operations compete with the loading of other resources you are waiting for: they consume some of the bandwidth needed for the synchronous loading of files and also use HTTP connections, which might force the other load operations to wait.

If you are dynamically loading files for the new view in your resolve function, triggering the background loading of future files there has the same effect. It is just maybe not as bad, because the lazily loaded files might be fewer or smaller than the ones which are required for all views and loaded upfront.

So the solution I went for is the third one. Since the background loading is only triggered once the files required for the current view have been loaded, it has no impact on other load operations.

I’ve updated the pluggableViews provider I already described in a previous post with an additional optional parameter called preload. It is a function called in the callback of the require call during lazy loading of the view, and it defaults to an empty function:

if (!viewConfig.preload) {
    viewConfig.preload = function () {
    };
}

And it is called in the callback of the require function:

$routeProvider.when(viewConfig.viewUrl, {
	templateUrl: viewConfig.templateUrl,
	controller: viewConfig.controller,
	resolve: {
		resolver: ['$q', '$timeout', function ($q, $timeout) {

			var deferred = $q.defer();
			if (angular.element("#" + viewConfig.cssId).length === 0) {
				var link = document.createElement('link');
				link.id = viewConfig.cssId;
				link.rel = "stylesheet";
				link.type = "text/css";
				link.href = viewConfig.cssUrl;
				angular.element('head').append(link);
			}
			require([viewConfig.requirejsName], function () {
				pluggableViews.registerModule(viewConfig.moduleName);
				$timeout(function () {
					deferred.resolve();
					viewConfig.preload();
				});
			});
			return deferred.promise;
		}]
	}
});

I can then use it this way:

$pluggableViewsProvider.registerView({
	ID: 'walls',
	moduleName: "cards.walls",
	requirejsConfig: {paths: {'walls': '../views/walls/walls'}},
	preload: function () {
		require(['reports'], function () {
			console.log("reports loaded");
		});
		require(['admin'], function () {
			console.log("admin loaded");
		});
	}
});

Conclusion

Of course, introducing lazy loading and asynchronous background pre-loading increases the complexity of your application. And it is not trivial to introduce into an existing large application whose dependencies are a mess because you never needed to clean them up before (since by default AngularJS causes all files to be loaded upfront).

But if you load lots of files (especially JavaScript libraries), making sure that files are loaded when needed, and not at a point in time where this would slow down the application, will definitely help make your application feel leaner and faster from a user’s perspective.

Blocking all BlazingFast IP address blocks (ranges)

Over the past few weeks, I’ve had some issues with my site sometimes not being available or loading very slowly. Checking on the server, I could see a high number of Apache processes and memory usage about 5 GB higher than usual. Running netstat, I could see many connections from the same IP address: 185.62.189.162.

A whois on this address showed that it belongs to a hosting company in Kiev, Ukraine called BlazingFast. I first blocked this IP address using iptables:

/sbin/iptables -A INPUT -s 185.62.189.162 -j DROP

Since I have a monitoring script checking for intrusion attempts and blocking IP addresses, I end up with lots of DROP rules in iptables, so once a week I clean them up automatically. Usually attackers do not keep trying for more than a week once they see that their traffic to my server is blocked.

Here it was different. As soon as the rules were cleared, it started again from the exact same address. Of course, I immediately blocked the IP address again and sent an email to their abuse address but, as expected, never got an answer. Instead, the same thing happened again from another similar IP address: 185.62.190.221. Whois showed that this address also belongs to the same Ukrainian hosting company.

So, since it was now clear that I’d keep having problems with IP addresses belonging to this company, I decided to block all traffic coming from the IP ranges they own. First I looked up their ASN on https://who.is/whois-ip/ip-address/185.62.190.221: AS60033. Then I looked up their IP address blocks on https://ipinfo.io/AS60033.

Then all I had to do was use iptables to block traffic from these IP address blocks (and make sure that these rules stay in place):

/sbin/iptables -A INPUT -s 185.11.144.0/22 -j DROP
/sbin/iptables -A INPUT -s 185.11.145.0/24 -j DROP
/sbin/iptables -A INPUT -s 185.11.146.0/24 -j DROP
/sbin/iptables -A INPUT -s 185.11.147.0/24 -j DROP
/sbin/iptables -A INPUT -s 185.61.136.0/22 -j DROP
/sbin/iptables -A INPUT -s 185.62.188.0/23 -j DROP
/sbin/iptables -A INPUT -s 185.62.188.0/24 -j DROP
/sbin/iptables -A INPUT -s 185.62.189.0/24 -j DROP
/sbin/iptables -A INPUT -s 185.62.190.0/23 -j DROP
/sbin/iptables -A INPUT -s 185.62.190.0/24 -j DROP
/sbin/iptables -A INPUT -s 185.62.191.0/24 -j DROP
/sbin/iptables -A INPUT -s 188.209.48.0/23 -j DROP
/sbin/iptables -A INPUT -s 188.209.48.0/24 -j DROP
/sbin/iptables -A INPUT -s 188.209.49.0/24 -j DROP
/sbin/iptables -A INPUT -s 188.209.50.0/23 -j DROP
/sbin/iptables -A INPUT -s 188.209.52.0/23 -j DROP
/sbin/iptables -A INPUT -s 188.209.52.0/24 -j DROP
/sbin/iptables -A INPUT -s 188.209.53.0/24 -j DROP

So now the load on the server is fine again and, unlike in the past few weeks, the hosted websites are always accessible and load fast.

It’s interesting to see that BlazingFast advertises a DDoS protection service on the one hand and actually seems to have customers performing brute-force attacks from their servers on the other. If you look up their ASN on the fail2ban reporting service, you will see that a few of their IP addresses are being blocked, so I am not the only one who’s been hit by this. Maybe they should not only focus on protecting their customers from DDoS attacks but should also prevent them from performing attacks.

This post on Stack Exchange also shows that it’s nothing new: there were already attacks originating from one of their IP addresses in May. The answers to this post will also give you some alternative ways to block them using the Apache .htaccess file, a Cisco firewall, Nginx, a Microsoft IIS Web Server rule, netsh AdvFirewall or the CSF firewall.

I know it’s more difficult to identify attacks originating from one of your own IP addresses than attacks targeting your network. As a hosting company, you definitely do not want too many false positives blocking legitimate traffic created by your customers. But I’m still pretty mad at having to waste so much time taking care of this kind of thing…

Update 18/07/2015: I’ve ended up also blocking all IP blocks of the following companies: ISPsystem, cjsc and Lekosport-Kharkov LLC, which were wasting my time with their attempts to hack wp-login.php.

Update 20/07/2015: Today I’ve blocked additional IP blocks belonging to Kyivstar PJSC. Slowly I’m starting to think that I’ll have to block access to complete regions in order to be able to sleep at night without worrying…

About lazy loading AngularJS modules

I recently wrote a post about creating pluggable AngularJS views using lazy loading of Angular modules. After discussing this topic with colleagues, I realized I had provided a technical solution but not much background as to why it is useful, what should be lazy loaded and when lazy loading makes sense. Hence this post…

Does lazy loading always make sense?

First, as a general rule, optimization should only be performed when you actually have a performance problem, definitely not prophylactically. Premature optimization is a common error made by many software engineers and causes complexity to increase unnecessarily at the beginning of a project. If you identify a bottleneck while loading specific resources, you should start thinking about introducing lazy loading in order to speed up your app, e.g. during initial loading.

But just lazy loading everything without a good reason can very well result in worse performance. You need to understand that lazy loading can actually increase the overall cost of loading resources. By loading all resources upfront, you get a chance to combine them and reduce the number of HTTP requests to your server. By splitting the loading of resources, you have fewer opportunities to reduce the overall resource consumption through combination. Especially when you are loading many small resources, the overhead of loading them individually increases. Since JavaScript code is relatively small compared to static assets like images, lazy loading can thus cause a higher request overhead.

Also, loading resources on demand means that when the user activates a view which has not yet been loaded, he first has to wait until the additional resources are lazily loaded and properly registered. When you load everything upfront, the user waits longer at application load time but can then access all views without delay.

So using lazy loading always means having a tradeoff between initial load time of the application and subsequent activation of parts of your application.

RequireJS vs. AngularJS Dependency Injection

RequireJS is currently the state-of-the-art modular script loader for asynchronous and lazy loading of JavaScript files. It is based on the AMD (Asynchronous Module Definition) API and handles loading your script files in the browser based on a description of the dependencies between your modules.

This may sound similar to the AngularJS dependency injection mechanism, but RequireJS and AngularJS DI work on two completely different levels. AngularJS DI handles runtime artifacts (AngularJS components). In order to work properly, it requires all of the JavaScript code to have been loaded and registered by the framework. It injects controllers, directives, factories, services, etc. which have been previously loaded.

Since there is no direct integration between RequireJS and AngularJS, and AngularJS has a single initialization phase in which module definitions are interpreted, RequireJS needs to load the complete dependency tree before AngularJS DI can be used. This effectively means that all of your JavaScript code will be fetched on initial load.

So we cannot just lazy load the JavaScript files of AngularJS modules with RequireJS after the AngularJS application has started, because the AngularJS DI container wouldn’t handle the components created as a result of loading those files. Instead, you need to manually handle all components which are loaded after the startup phase of your application.

What’s the point of lazy loading ?

Lazy loading components generally improves an application’s load time. If your web application takes many seconds to load all components ever required to interact with the user, you will most probably experience a high bounce rate, with users just giving up before they even get a chance to see how great your application is.

By lazy loading parts of your application, you can make sure that the views in your application which are used immediately by most users are available immediately. If a given user decides to use other (e.g. more advanced views), the application starts loading required components on demand.

Especially if you are building a web application which can be used on phones and tablets (which represent about 60% of total web traffic now), you have to consider that most users will not have a 4G mobile connection, and initial load times can become prohibitive where download bandwidth is limited.

Loading time is a major contributing factor to page abandonment. The average user has no patience for a page that takes too long to load, and slower page response times result in increased abandonment. Nearly half of web users expect a site to load in 2 seconds or less, and they tend to abandon a site that hasn’t loaded within 3 seconds.

So improving the initial load time of your web application is critical. And this is the main use case for lazy loading.

When and what to lazy load ?

As explained above, lazy loading makes sense when you want to reduce initial load times and are ready to accept that loading additional views might not be instantaneous. Ideally, at some point during the run time of your application, you would determine that the user will need a specific view and load it asynchronously in the background. Unfortunately, this kind of smart lazy loading is very difficult to implement, for two reasons. First, it is not easy to predict that a user will need a specific view. Second, asynchronicity introduces additional problems which increase the complexity of your application.

This is why lazy loading is mostly implemented in such a way that when a user activates a view (or a sub-part) of your application which hasn’t yet been loaded, it is loaded on the fly before displaying it.

A loading mechanism can be implemented at different levels. A fine-grained lazy loading mechanism reduces the maximum wait time when something is loaded, but the complexity of your application grows (potentially exponentially) the more fine-grained it is. Since our general rule is not to optimize upfront but only to solve problems you are actually facing, this suggests a strategy along these lines:

  1. First load everything upfront (this is how most desktop applications work). If you face problems because of high initial load times, continue optimizing. Otherwise you are done.
  2. Lazy load individual modules of the application. This is the way I handle lazy loading in the application I used as a basis for my previous post. If the load times are acceptable, then stop optimizing. “Acceptable” could either mean that the load times for all parts of the application are good enough or that the load times are good for 95% of the use cases and the user only has a longer wait time for rare use cases.
  3. Keep making the lazily loaded resources more fine-grained…
  4. Until the performance is acceptable.

AngularJS components vs. Module loading

If you google for AngularJS lazy loading, you will find many resources. Most of them teach you how to lazy load controllers, directives, etc. The difference between these approaches and e.g. the one described in my previous post is that when you just lazy load controllers and such, you have a single module which is initialized at the beginning and for which you register additional components on the fly. This approach has two drawbacks:

  1. You cannot organize your application in multiple AngularJS modules.
  2. This doesn’t work well for third-party AngularJS modules.

Both of these drawbacks have the same root cause: AngularJS does not only rely on JavaScript files being loaded but also needs the modules to be properly registered in order to inject them using DI. If this step is missing because the files were loaded later on, new AngularJS modules will not be available.

That’s why you need to additionally handle the invoke queue, config blocks and run blocks when lazy loading new AngularJS modules.
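
What replaying such a queue means can be illustrated without AngularJS itself. The snippet below is a self-contained simulation (the provider and queue contents are made up; the corresponding AngularJS internals, such as _invokeQueue, are undocumented and may change between versions):

```javascript
// Self-contained simulation of replaying an invoke queue: component
// registrations are recorded as [providerName, methodName, args] entries
// and applied later against the providers captured at startup.
var providers = {
    $controllerProvider: {
        registered: [],
        register: function (name, fn) {
            this.registered.push(name);
        }
    }
};

// What a lazily loaded module would have queued up:
var invokeQueue = [
    ['$controllerProvider', 'register', ['ReportsCtrl', function () {}]],
    ['$controllerProvider', 'register', ['AdminCtrl', function () {}]]
];

// Replay the queue once the module's files have been loaded:
invokeQueue.forEach(function (item) {
    var provider = providers[item[0]];
    provider[item[1]].apply(provider, item[2]);
});

console.log(providers.$controllerProvider.registered); // → [ 'ReportsCtrl', 'AdminCtrl' ]
```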

Also note that whenever the term “module” is used in this context, it can mean two different things:

  1. AMD modules as used in RequireJS. These modules just encapsulate a piece of code which has load dependencies on other modules. They are designed for asynchronous loading.
  2. AngularJS modules. These modules are basically containers for controllers, services, filters, directives…

When I reference modules in this article, I mean the second kind of modules.

Conclusion

I hope this article made it clearer why lazy loading AngularJS modules is not a stupid idea but should be handled carefully. You need to make sure that you choose the right level on which to lazy load components. And if you need to split your application into multiple modules or use third-party modules, it is definitely not sufficient to use most of the mechanisms you will find by quickly googling for lazy loading in AngularJS. Using RequireJS is definitely a first step in the right direction, but you need to make sure that the loaded scripts are also made available to the AngularJS dependency injection container.