OAuth 2.0 / OpenID Connect Explained

OAuth and OpenID Connect

OAuth (Open Authorization) is an open standard for API access delegation. Put simply, it’s a secure authorization protocol used to grant applications access to protected resources without exposing credentials.

OpenID Connect (OIDC) is an authentication layer (i.e. an identity layer) on top of OAuth 2.0. Client applications can use it to verify the identity of a subject (usually a user) based on the authentication performed by an authorization server. It also provides basic profile information.

Authentication

Authentication is the process of identifying an individual, e.g. using a username and password. It basically involves checking whether a user exists and determining who this user is, i.e. associating credentials with an identity. In other words, authentication answers the question “who are you?”. OAuth is a specification for authorization, not an authentication protocol, but it is used as a basis for authentication protocols like OpenID Connect.

Authorization

Authorization is the process of giving a subject permissions to access resources in a certain way.

OAuth 2.0 Roles

OAuth defines the following roles:

  • Resource Owner
  • Client Application
  • Resource Server
  • Authorization Server

Resource Owner

The resource owner is an entity which can grant a client application scoped access to a resource. It is usually a person (the end-user) but can also be a machine.

Client Application

A client application is an application which accesses protected resources on behalf of the resource owner.

Resource Server

The resource server manages access to a protected resource, allowing access based on access tokens. It is also often called an API server. The protected resources can be, for example, the profile or personal information of a user.

Authorization Server

The authorization server issues access tokens to authenticated client applications when permissions for the access are granted by the resource owner. This is the “OAuth2” server.

OAuth 2.0 Role Interactions

OAuth 2.0 Interactions

Tokens

There are 3 types of tokens used when working with OAuth2 and OpenID Connect. Additionally, an authorization code is also defined.

  • The ID token is defined in OpenID Connect on top of the tokens defined in OAuth2
  • The access token is the main token defined in OAuth2
  • The refresh token is used, well, to refresh a token
  • The authorization code is not a token in itself but can be used to get an access token

 

ID Tokens

ID tokens are JSON Web Tokens (JWTs) introduced by OpenID Connect. These tokens identify a user and contain the user’s authentication information. They consist of three parts:

  • a header
  • a body
  • a signature

In the token, each of these parts is encoded using Base64Url and concatenated using a “.” (dot) between them:

Base64Url(header) + “.” + Base64Url(body) + “.” + Base64Url(signature)
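This structure can be sketched in Python. This is a toy example with a fake signature, purely to illustrate the encoding; the helper names and claim values are made up:

```python
import base64
import json

def b64url_encode(data: bytes) -> str:
    # Base64Url: standard Base64 with "+/" replaced by "-_" and padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(part: str) -> bytes:
    # Re-add the "=" padding that was stripped during encoding
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

header = {"alg": "RS256", "typ": "JWT"}
body = {"iss": "https://auth.benohead.com/", "sub": "5b1789917b944931f4021e61"}

# A real token is signed with the algorithm named in the header;
# a fake signature stands in here for illustration only
token = ".".join([
    b64url_encode(json.dumps(header).encode()),
    b64url_encode(json.dumps(body).encode()),
    b64url_encode(b"fake-signature"),
])

# Any recipient can split on "." and decode header and body without any key
decoded_header = json.loads(b64url_decode(token.split(".")[0]))
decoded_body = json.loads(b64url_decode(token.split(".")[1]))
```

Note that Base64Url is an encoding, not encryption: anyone holding the token can read the header and body. Only the signature protects its integrity.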

The body of the token contains a series of claims which provide data about the subject being identified by the token. The OpenID Connect specification doesn’t specify which claims have to be present in which context but does define “standard” claims (with registered claim names) and allows the use of custom claims. The registered claim names are:

  • iss: the issuer of the token.
  • sub: identifies the subject for which the token was issued. It is at least unique within the scope of an issuer.
  • aud: identifies the audience for this token, i.e. its intended recipient(s). If an entity processing this token does not identify itself as part of this audience, it must reject the token.
  • exp: the expiration time of the token. An entity processing this token should reject it once the expiration time is reached.
  • nbf: the “not before” claim, i.e. the point in time from which this token is valid. An entity processing this token before this point in time should reject it.
  • jti: an identifier for this particular token, which should be unique per issuer and have only a low probability of colliding globally.

Some claims are always added to the ID token by the authentication server and some claims depend on the scopes requested by the entity requesting the authentication.

Since ID tokens contain privacy-relevant data about the subjects being identified, they should be kept confidential. Use ID tokens to identify subjects and get data/metadata regarding the subject, and access tokens to access resources/APIs.

This is what the body of an ID token could look like:

{
  "https://benohead.com/country": "Germany",
  "https://benohead.com/timezone": "Europe/Berlin",
  "email": "henri.benoit@gmail.com",
  "email_verified": true,
  "iss": "https://auth.benohead.com/",
  "sub": "5b1789917b944931f4021e61",
  "aud": "q580zCRvynSeIg3uXCChcSRlYtNKSUPe",
  "iat": 1530800423,
  "exp": 1530836423
}
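The registered-claim rules described above can be sketched as a small check. This is a hypothetical helper covering only iss, exp and nbf; signature and audience validation are deliberately left out here:

```python
import time

def reject_reason(claims, expected_issuer, now=None):
    """Return None if the claims pass these basic checks, else a short reason.
    A sketch of the iss/exp/nbf rules only -- not a full token validator."""
    now = time.time() if now is None else now
    if claims.get("iss") != expected_issuer:
        return "unknown issuer"
    if "exp" in claims and now >= claims["exp"]:
        return "expired"
    if "nbf" in claims and now < claims["nbf"]:
        return "not yet valid"
    return None

token_claims = {"iss": "https://auth.benohead.com/", "iat": 1530800423, "exp": 1530836423}
```

For example, checking these claims shortly after `iat` returns `None` (accepted), while checking them after `exp` returns `"expired"`.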

Access Tokens

Access tokens allow a client application to access a protected resource and define the scope of this allowed access. The client application provides this token to the resource server when accessing the resource, e.g. in an HTTP header. Just like the ID token, the access token has a limited lifetime, which is defined when the authorization server issues the token to the client application. Since it grants access to protected resources, it must be kept as confidential as possible, although this is not always achievable, especially when a web browser is involved.

This is what the body of an access token could look like:

{
  "iss": "https://auth.benohead.com/",
  "sub": "5b3cd37722c8f80eecd338e5",
  "aud": "https://benohead/api/dummy/",
  "iat": 1530802106,
  "exp": 1530888506,
  "scope": "read update delete create"
}

Note that in case the authorization server is also the resource provider (i.e. if the audience matches the authorization server), you might also get an opaque access token, which is just a string without any further meaning and which cannot be decoded. The authorization server can then map this string to permissions on its own protected resources.
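A quick way to tell the two apart is the dot structure described earlier. This is a heuristic, not a guarantee: it only checks the shape of the string, not whether it is actually a valid JWT:

```python
def looks_like_jwt(token: str) -> bool:
    # A JWT has exactly three non-empty segments separated by dots;
    # opaque tokens usually fail this shape test
    parts = token.split(".")
    return len(parts) == 3 and all(parts)

print(looks_like_jwt("eyJhbGciOi.eyJpc3Mi.c2ln"))           # shaped like a JWT
print(looks_like_jwt("8f14e45fceea167a5a36dedd4bea2543"))   # opaque string
```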

Refresh Tokens

Refresh tokens are tokens containing the information required to obtain a new ID token or access token. They are usually used to get a new token after the previous one expires. It is generally more secure to combine short-lived access tokens with refresh tokens: the authorization server can refuse to issue a new access token if the refresh token has been compromised, while still allowing renewal when access to a resource is required for a longer time. Without refresh tokens (and with long-lived access tokens), a resource provider would need to query the authorization server to check whether a long-lived access token is still valid; this check can be skipped with short-lived access tokens.

As an alternative to using refresh tokens, you could get a new access token with credentials every time a short-lived access token expires. But this has a drawback: if you get the access token with user credentials (in addition to client credentials), you will need these user credentials every time your token expires (whereas with a refresh token you only need the client credentials, not the user credentials).

Authorization Codes

Authorization codes are codes returned to insecure clients. These clients can then exchange them for an access token and/or ID token in a more secure way.

They are used in the Authorization Code Grant Flow, in which the client is typically a browser: it receives an authorization code from the authorization server and sends it to the web application, which then interacts with the authorization server in the back-end to exchange the authorization code for an access token, a refresh token and/or an ID token.

Scopes and Audiences

Scopes and audiences are used to handle multiple resource servers and multiple types of access permissions.

Audience

The JWT aud (Audience) Claim identifies the intended recipients of the token i.e. it allows an authorization server to issue tokens that are only valid for certain purposes.

An audience claim can either contain a list of strings (i.e. multiple audiences) or it can be a single string (i.e. there is only one intended audience). It does not matter if an audience value is a URL or some other application specific string.

Each recipient of such a token must validate that the audience specified in the token matches its own audience name, and must reject any token that does not contain its own audience name in the intended audience. The authorization server which issues the token can only validate whether a token for this audience can be issued; it is the responsibility of the resource server to determine whether the token should be accepted. A resource server may choose to ignore the audience claim and accept any valid token.

So the audience claim is only useful if you want to issue tokens with different purposes (i.e. intended audiences) and if at least some of the APIs (resource servers) you are using are validating the audience claim.
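The string-or-list shape of the aud claim makes the check slightly fiddly; a minimal sketch of the rule described above:

```python
def audience_accepted(aud_claim, own_audience: str) -> bool:
    """The aud claim may be a single string or a list of strings;
    a recipient accepts the token only if its own name is among them."""
    audiences = aud_claim if isinstance(aud_claim, list) else [aud_claim]
    return own_audience in audiences

print(audience_accepted("https://benohead/api/dummy/", "https://benohead/api/dummy/"))
print(audience_accepted(["api-a", "api-b"], "api-b"))
```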

Scopes

Scopes provide a way to limit the access to functionality provided by the resource servers. The client can request scopes to be provided in the issued access token. The authorization server can provide all requested scopes, some of them or even additional ones based on its internal rules/policies. The provided scopes are written in the scope claim in the access token.

So they basically act as permissions. Authorization servers also use them when getting the consent from the user (i.e. ask the user whether he really wants to provide the client application access to these specific scopes).

When the client then uses the issued access token to request access to a protected resource, the resource server will validate that the type of resource and the type of access match the scopes contained in the access token.

E.g. if your access token contains the scope claim “read:posts read:comments write:comments”, the resource server would allow an application presenting this token to read posts and to read and write comments, but not to create a new post.
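Since the scope claim is a space-separated list of scope values, the resource server’s check boils down to a membership test, as in this sketch using the example scopes above:

```python
def has_scope(scope_claim: str, required: str) -> bool:
    # The scope claim is a space-separated list of scope values
    return required in scope_claim.split()

granted = "read:posts read:comments write:comments"
print(has_scope(granted, "write:comments"))  # allowed
print(has_scope(granted, "write:posts"))     # not allowed
```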

OpenID Connect Scopes

The following scopes are defined in OpenID Connect:

  • openid: the basic OpenID scope, requesting that the sub claim uniquely identifying the user be returned. It can be combined with the scope values below.
  • profile: requests the authorization server to provide access to the user’s profile claims: name, family_name, given_name, middle_name, nickname, preferred_username, profile, picture, website, gender, birthdate, zoneinfo, locale, and updated_at.
  • email: requests the authorization server to provide access to the email and email_verified claims.
  • address: requests the authorization server to provide access to the address claim.
  • phone: requests the authorization server to provide access to the phone_number and phone_number_verified claims.
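The scope-to-claims mapping listed above can be written down directly, which makes it easy to predict which claims a given scope string unlocks:

```python
# Claims unlocked by each standard OIDC scope, per the list above
SCOPE_CLAIMS = {
    "openid": {"sub"},
    "profile": {"name", "family_name", "given_name", "middle_name", "nickname",
                "preferred_username", "profile", "picture", "website", "gender",
                "birthdate", "zoneinfo", "locale", "updated_at"},
    "email": {"email", "email_verified"},
    "address": {"address"},
    "phone": {"phone_number", "phone_number_verified"},
}

def claims_for(requested_scopes: str) -> set:
    # Union of the claims for each requested (space-separated) scope
    claims = set()
    for scope in requested_scopes.split():
        claims |= SCOPE_CLAIMS.get(scope, set())
    return claims

print(sorted(claims_for("openid email")))
```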

OpenID Connect Endpoints

OpenID Connect defines 2 endpoints which can be used to request one or more of the token types described above:

  • The Authorization Endpoint is usually an endpoint accessible with the URL <authorizationserver>/login or <authorizationserver>/authorize
  • The Token Endpoint is usually an endpoint accessible with the URL <authorizationserver>/token

In some of the flows described below (the ones not requiring an authorization code), you might connect to either the authorization endpoint or the token endpoint (depending on your authorization server).

Additionally, it also defines the UserInfo endpoint, which returns claims about the authenticated user.

The Authorization Endpoint

The authorization endpoint is an endpoint the user is sent to in order to:

  • get authenticated by any supported method e.g. with a username and password, an existing session cookie or a federated identity provider such as a social login, SAML provider or ADFS integrated system
  • issue a token to the client application (or deny it). Optionally, it will request the user’s consent (or grant consent implicitly based on an internal policy)

The type of token being issued depends on the requested response type.

The Token Endpoint

The token endpoint can exchange a grant (e.g. an authorization code provided by the authorization endpoint) for a token. The grants supported as input are usually:

  • An authorization code
  • Client credentials i.e. a client ID and a client secret
  • A username and password
  • A refresh token
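The request body differs per grant. A sketch of the four request shapes, with placeholder values (parameter names follow RFC 6749; real servers may require additional parameters such as client authentication headers):

```python
# Example token endpoint request bodies, one per supported grant;
# all values are placeholders
grant_requests = {
    "authorization_code": {
        "grant_type": "authorization_code",
        "code": "<authorization code from the authorization endpoint>",
        "redirect_uri": "<same redirect URI as in the authorization request>",
        "client_id": "<client id>",
    },
    "client_credentials": {
        "grant_type": "client_credentials",
        "client_id": "<client id>",
        "client_secret": "<client secret>",
    },
    "password": {
        "grant_type": "password",
        "username": "<username>",
        "password": "<password>",
        "client_id": "<client id>",
    },
    "refresh_token": {
        "grant_type": "refresh_token",
        "refresh_token": "<refresh token>",
        "client_id": "<client id>",
    },
}
```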

The UserInfo Endpoint

The UserInfo endpoint returns claims about the authenticated user. This is an OAuth 2.0 protected resource so you need to provide an access token to access it.

Here is a sample response from this endpoint:

{
	"sub": "1761101158623",
	"name": "Henri Benoit",
	"given_name": "Henri",
	"family_name": "Benoit",
	"preferred_username": "benohead",
	"email": "henri.benoit@gmail.com",
	"picture": "https://secure.gravatar.com/avatar/bde998b4b8e4b4fb259d80b5ac05d63d"
}

OpenID Connect Response Types

OpenID Connect defines an additional URL parameter called response_type. With this parameter, you can request different types of tokens to be returned. You can combine them by separating them by a space e.g. response_type=code id_token token.

  • id_token: if this response type is specified, the authorization server will return an ID token
  • token: if this response type is specified, the authorization server will return an access token
  • code: if this response type is specified, the authorization server will return an authorization code

Response type: code id_token token

When using this response type, the endpoints will issue the following tokens:

 

Endpoint         ID token   Access token   Authorization code
Authorization    Yes        Yes            Yes
Token            Yes        Yes            No

Note that the Token endpoint will never return an authorization code since it is an input for the token endpoint when the authorization code grant is used.

Response type: id_token token

When using this response type, the endpoints will issue the following tokens:

 

Endpoint         ID token   Access token   Authorization code
Authorization    Yes        Yes            No
Token            Not used   Not used       Not used

Note that the Token endpoint is not used in such cases.

Response type: code id_token

When using this response type, the endpoints will issue the following tokens:

 

Endpoint         ID token   Access token   Authorization code
Authorization    Yes        No             Yes
Token            Yes        Yes            No

Note that the Token endpoint will never return an authorization code since it is an input for the token endpoint when the authorization code grant is used.

Response type: code token

When using this response type, the endpoints will issue the following tokens:

 

Endpoint         ID token   Access token   Authorization code
Authorization    No         Yes            Yes
Token            Yes/No     Yes            No

The token endpoint will only return an ID token if scope openid is requested.

Note that the Token endpoint will never return an authorization code since it is an input for the token endpoint when the authorization code grant is used.

Response type: id_token

When using this response type, the endpoints will issue the following tokens:

 

Endpoint         ID token   Access token   Authorization code
Authorization    Yes        No             No
Token            Not used   Not used       Not used

Note that the Token endpoint is not used in such cases.

Response type: code

When using this response type, the endpoints will issue the following tokens:

 

Endpoint         ID token   Access token   Authorization code
Authorization    No         No             Yes
Token            Yes/No     Yes            No

The token endpoint will only return an ID token if scope openid is requested.

Note that the Token endpoint will never return an authorization code since it is an input for the token endpoint when the authorization code grant is used.

Response type: token

When using this response type, the endpoints will issue the following tokens:

 

Endpoint         ID token   Access token   Authorization code
Authorization    No         Yes            No
Token            Not used   Not used       Not used

Note that the Token endpoint is not used in such cases.

This scenario basically maps to the OAuth2 Implicit Grant Flow.

OAuth 2.0 Grant Flows

Grant Types

A grant type in OAuth 2.0 refers to the way an application gets an access token. OAuth 2.0 defines several grant types:

  • Authorization Code Grant
  • Implicit Grant
  • Resource Owner Password Credentials Grant
  • Client Credentials Grant

Extensions can also define new grant types. Each grant type maps to a different use case, e.g. a native application, a web application, a single page application or a machine-to-machine application.

Which flow should I use?

Depending on which type of application you are developing and on your ability to open a browser window or store client secrets securely, you will choose one of the following flows.

 

Application Type                            Grant Flow Type
Web Server Application                      Authorization Code Grant
Single Page Application                     Implicit Grant or
                                            Authorization Code Grant with public client
Backend Server Application                  Client Credentials Grant
Native Application                          Authorization Code Grant with PKCE or
                                            Authorization Code Grant with public client
Native Application with no Browser Window   Resource Owner Password Credential Grant

Note that when you have a native application without the possibility to open a browser window, your only option is the password grant. This solution is still not ideal, since it forces the user to enter his or her credentials directly in your application and thus requires some level of trust from the user.

Authorization Code Grant Flow

The Authorization Code grant type is used by web and mobile apps. It can only be used if the client application is able to open a web browser. It is considered more secure than the implicit grant flow because it doesn’t provide the access token directly in a callback URL parameter, but instead provides a code which can then be exchanged for an access token by a web server or a native app.

OAuth 2.0 Authorization Code Grant Flow

You can see the authorization code flow in action on the OAuth playground. The corresponding OpenID Connect flow (i.e. involving an ID token) can also be checked on the OAuth playground.

This flow is mainly aimed at web applications running on a server, where the backend can act as a confidential client, i.e. can keep both the client secret and the issued access token secure. The client identity can then be securely assessed, and the access token is only shared between the authorization server, the backend of the web application and the resource server.

With single page applications or other JavaScript-heavy applications, or with native applications installed on a desktop computer or mobile device, it is not possible to keep the client secret secure on the client side. If such applications have no server-side component under their control, or if this component is not appropriate for taking over the role of the client application in OAuth2 flows, they are considered public clients.

When such public clients use the Authorization Code Grant, they cannot authenticate themselves as a client but can still authenticate the user. The capabilities of the authorization server are therefore somewhat limited, since it cannot control which applications are allowed to get an access token in the name of the resource owner. The authorization server cannot even make sure that the application exchanging the authorization code for an access token is the same application which obtained the authorization code. This is a much bigger issue, since anyone getting hold of the authorization code could go to the authorization server and get an access token while pretending to be the application which got the code.

To mitigate this risk, you can use a Proof Key for Code Exchange (PKCE). When requesting the authorization code, the application provides a challenge derived from a secret called the “code verifier”, which it generates on the fly and uses only within the scope of this authorization flow. The authorization server associates this value with the returned authorization code (it does not return it together with the code). When the application then wants to exchange the code for an access token, it also provides the code verifier itself, which the authorization server checks before returning an access token.
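Generating the pair is straightforward; a sketch following RFC 7636’s S256 method (the function name is made up):

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a code_verifier and its S256 code_challenge (RFC 7636).
    The verifier stays with the client; only the challenge is sent
    along with the authorization request."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
print(len(verifier))  # 43 characters, within RFC 7636's allowed 43..128 range
```

The token endpoint later recomputes the challenge from the submitted verifier and compares it with the one stored alongside the authorization code.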

You can also see the authorization code flow with PKCE in action on the OAuth playground.

Implicit Grant Flow

The implicit grant flow is a flow where the authorization server directly returns an access token in a URL fragment. Unlike the Authorization Code Grant Flow, it doesn’t require the client application to exchange an authorization code for a token. Originally, it was the recommended “best practice” for all browser-based apps. This flow is now mostly used in SPAs (Single Page Applications, i.e. JavaScript applications running in the browser). For server-based web applications, you would rather use the Authorization Code Grant Flow.

OAuth 2.0 Implicit Grant Flow

You can see the implicit flow in action on the OAuth playground.

Password Credential Grant Flow

The password grant is probably the easiest (and least secure) grant type. In this flow, the application needs to know the user credentials and pass them to the authorization server.

Resource Owner Password Credentials Grant Flow

It is recommended to avoid this flow as much as possible. The main issue is that the client needs to know your credentials. Imagine you want to allow the great benohead.com app to get your name from Facebook, but you have to give it your password and trust that the app will not start friending random strangers, posting inappropriate messages or just shutting you out by changing your password.

 

R: libgfortran – Library not loaded

If you get the following error message after installing R on your Mac:

$ R
dyld: Library not loaded: /usr/local/lib/gcc/5/libgfortran.3.dylib
  Referenced from: /usr/local/Cellar/r/3.2.0_1/R.framework/Versions/3.2/Resources/lib/libR.dylib
  Reason: image not found
Trace/BPT trap: 5

You’ll need to reinstall gcc with the --with-fortran option:

brew reinstall gcc5 --with-fortran

 

AngularJS and Kendo UI: Watchers for Grid and Tree List

When you use the Kendo UI Grid or Tree List widgets in an AngularJS application, you will probably notice that with long grids/lists or with many columns, you’ll end up having quite a few watchers created (basically one watcher per cell). Unfortunately, it is not (yet) possible to use one time binding. The AngularJS documentation recommends keeping the number of watches under 2000 in order not to hurt the application performance (because it otherwise creates a high load during digest cycles).

The reason why so many watchers are created is that the Kendo UI directives compile (using $compile) all cells so that you can use Angular expressions and directives in your column templates.

Disclaimer: All the instructions below only make sense if you do not need two way binding in your grid or tree list rows. If you do then you actually need these watchers.

Currently, the only way to prevent this is to initialize the Kendo Grid (or Tree List) widgets in your controller instead of using the Kendo directives, i.e. replacing this:

<kendo-treelist
	id="treelist"
	k-options="treelistKendoOptions"
	k-scope-field="treelistScope"
	k-columns="vm.treeListColumns"
	k-auto-bind="false">
</kendo-treelist>

with a simple div:

<div id="treelist"></div>

and creating the tree list (or the grid) in your controller:

$("#treelist").kendoTreeList($scope.treelistKendoOptions);

Additionally, you’ll have to replace the attributes you had in your HTML code when using the directive with options or additional code. In my case, I had to move k-auto-bind to the autoBind property in the options:

$scope.treelistKendoOptions = {
	...
	autoBind: false,
	...
};

Another attribute we were using is k-scope-field. This attribute defines a scope variable to which the Grid or Tree List should be bound. You can then call methods of the widget in your controller. The same can also be achieved when instantiating the widget from your controller:

$("#treelist").kendoTreeList($scope.treelistKendoOptions);
$scope.treelistScope = $("#treelist").data("kendoTreeList");

Of course, if you use a Grid and not a Tree List, you’d use kendoGrid instead of kendoTreeList.

Once you’ve done this, you’ll see that the number of watchers has been greatly reduced. But you might also see that the contents of some columns are broken. This happens whenever you use an AngularJS expression (e.g. calling some method on the scope) in your column template, e.g.:

template: "<span>{{ versionFormat(dataItem.Version) }}</span>"

Since we’re not in the Angular world anymore, the templates are no longer compiled (after all, that’s what we wanted to prevent). So you’ll need to move the logic you had in the template into the method defined as the data source. In my example above, I’d call versionFormat for every row and replace dataItem.Version with the output value.

 

IDX10638: Cannot created the SignatureProvider, ‘key.HasPrivateKey’ is false, cannot create signatures. Key: Microsoft.IdentityModel.Tokens.RsaSecurityKey.

After updating the Microsoft.IdentityModel.Tokens library we were getting the following error message when creating JWT tokens:

System.InvalidOperationException
HResult=0x80131509
Message=IDX10638: Cannot created the SignatureProvider, ‘key.HasPrivateKey’ is false, cannot create signatures. Key: Microsoft.IdentityModel.Tokens.RsaSecurityKey.
Source=Microsoft.IdentityModel.Tokens
StackTrace:
at Microsoft.IdentityModel.Tokens.AsymmetricSignatureProvider..ctor(SecurityKey key, String algorithm, Boolean willCreateSignatures)
at Microsoft.IdentityModel.Tokens.CryptoProviderFactory.CreateSignatureProvider(SecurityKey key, String algorithm, Boolean willCreateSignatures)
at Microsoft.IdentityModel.Tokens.CryptoProviderFactory.CreateForSigning(SecurityKey key, String algorithm)
at System.IdentityModel.Tokens.Jwt.JwtSecurityTokenHandler.CreateEncodedSignature(String input, SigningCredentials signingCredentials)
at System.IdentityModel.Tokens.Jwt.JwtSecurityTokenHandler.WriteToken(SecurityToken token)

The code where this happened looked like this and was working fine before the update:

var buffer = Convert.FromBase64String(Base64Cert);
var signingCertificate = new X509Certificate2(buffer, CertificatePassword);

var identity = new ClaimsIdentity(Claims);
var data = new AuthenticationTicket(identity, null);

if (signingCertificate.PrivateKey is RSACryptoServiceProvider rsaProvider)
{
	var key = new RsaSecurityKey(rsaProvider);
	var signingCredentials = new SigningCredentials(key, SecurityAlgorithms.RsaSha256Signature);

	var token = new JwtSecurityToken(
		issuer: TokenIssuer,
		audience: TokenAudience,
		claims: data.Identity.Claims,
		notBefore: DateTime.UtcNow,
		expires: DateTime.UtcNow.AddMinutes(TokenValidityInMinutes),
		signingCredentials: signingCredentials
	);

	var tokenString = new JwtSecurityTokenHandler().WriteToken(token);
	Console.WriteLine(tokenString);
}
else
{
	Console.Error.WriteLine("signingCertificate.PrivateKey is not an RSACryptoServiceProvider");
}

Debugging the code, I saw that signingCertificate.HasPrivateKey was true but key.HasPrivateKey was false.

In order to solve it, two small changes were required:

  1. Add a keyStorageFlags parameter to the X509Certificate2 constructor so that the imported keys are marked as exportable
  2. Use the ExportParameters method to retrieve the raw RSA key in the form of an RSAParameters structure including private parameters.

So using the following code worked without exception:

var buffer = Convert.FromBase64String(Base64Cert);
var signingCertificate = new X509Certificate2(buffer, CertificatePassword, X509KeyStorageFlags.Exportable);

var identity = new ClaimsIdentity(Claims);
var data = new AuthenticationTicket(identity, null);

if (signingCertificate.PrivateKey is RSACryptoServiceProvider rsaProvider)
{
	var key = new RsaSecurityKey(rsaProvider.ExportParameters(true));
	var signingCredentials = new SigningCredentials(key, SecurityAlgorithms.RsaSha256Signature);

	var token = new JwtSecurityToken(
		issuer: TokenIssuer,
		audience: TokenAudience,
		claims: data.Identity.Claims,
		notBefore: DateTime.UtcNow,
		expires: DateTime.UtcNow.AddMinutes(TokenValidityInMinutes),
		signingCredentials: signingCredentials
	);

	var tokenString = new JwtSecurityTokenHandler().WriteToken(token);
	Console.WriteLine(tokenString);
}
else
{
	Console.Error.WriteLine("signingCertificate.PrivateKey is not an RSACryptoServiceProvider");
}

 

Is null a valid JSON text?

Recently I was working on an issue where a service would return null when called with some parameters, and the consuming service would then fail. The response read from the HTTP response stream was a string containing the 4 characters “null”; the consumer only checked whether the string was null and, if not, used a JSON converter to parse the JSON text, which resulted in an exception being thrown. The question we were discussing is: what should be the JSON representation of a null object?

What is JSON?

JavaScript Object Notation (JSON) is a lightweight, language independent text format for the serialization and exchange of structured data. It is derived from the object literals defined in the ECMAScript Programming Language Standard and defines a small set of formatting rules for the portable representation of structured data.

JSON is described in a few standard documents:

  • ECMA-404 (The JSON Data Interchange Syntax)
  • RFC 4627 (the original IETF specification, now obsoleted)
  • RFC 7159 (which updates RFC 4627)

JSON text vs. JSON value

A JSON text is a sequence of tokens representing structured data transmitted as a string. A JSON value can be an object, array, number or string, or one of the three literal names: false, null or true.

In RFC 4627, a JSON text was defined as a serialized object or array: a JSON object value had to start and end with curly brackets, and a JSON array value had to start and end with square brackets. This effectively meant that “null” was not a valid JSON text.

But even in RFC 4627, null was a valid JSON value.

Changes in RFC 7159

RFC 7159 was published in March 2014 and updates RFC 4627. The goal of this update was to remove inconsistencies with other specifications of JSON and highlight practices that can lead to interoperability problems. One of the changes in RFC 7159 is that a JSON text is not defined as being an object or an array anymore but rather as being a serialized value.

This means that with RFC 7159, “null” (as well as “true” and “false”) became a valid JSON text. So the JSON text serialization of a null object is indeed “null”. Unfortunately, not all JSON parsers/deserializers support parsing the string “null”.
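For comparison, Python’s json module follows the RFC 7159 behaviour: any serialized value, including the bare literal null, is a valid JSON text:

```python
import json

# Round trip: the JSON text "null" maps to Python's None and back
print(json.loads("null"))   # None
print(json.loads("true"))   # True
print(json.dumps(None))     # null
```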

Parsing null with JSON.NET

When using JSON.Net (Newtonsoft.Json), there are two ways to deserialize a JSON text:

  • JObject.Parse: this returns a JObject, which allows you to work with JSON results whose structure might not be completely known.
  • JsonConvert.DeserializeObject: this is used to deserialize the JSON text to an instance of a defined class.

JObject.Parse unfortunately throws an exception when trying to parse “null” (JObject.Parse("null")):

Newtonsoft.Json.JsonReaderException: ‘Error reading JObject from JsonReader. Current JsonReader item is not an object: Null. Path ”, line 1, position 4.’

But if you do not have a class corresponding to the JSON text you’re deserializing, you can either use “object” as the type when calling JsonConvert.DeserializeObject or use the overload without generics:

JsonConvert.DeserializeObject<object>("null");
JsonConvert.DeserializeObject("null");

In both cases, parsing succeeds: for the input “null” you simply get null back, and for non-null JSON texts you get an instance of JObject (just like the return value of JObject.Parse).

 

Export an App Service certificate to a pfx file with PowerShell

In order to debug a webjob running in an Azure App Service that accesses a service using a certificate, I needed to create a local copy of the certificate to be able to run the webjob on a local machine. The Azure portal unfortunately only provides these options:

  1. Import an existing App service certificate
  2. Upload a certificate
  3. Delete a certificate

So there is no option to download a certificate. But this can be done using PowerShell and the AzureRM module.

First, we’ll need to set a few variables with the data required to get the certificate:

$azureLoginEmailId = "me@benohead.com"
$subscriptionName = "mysubscriptionname"
$resourceGroupName = "myresourcegroupname"
$certificateName = "nameofthecertificateiwanttodownload"
$pfxPassword = "mygreatpassword"

We’ll later use $pfxPassword to set a password in the created PFX file.

Then you need to log in to Azure to be able to access the resources:

Login-AzureRmAccount

You will see a popup where you can enter your credentials.

Then you need to select the appropriate subscription:

Select-AzureRmSubscription -SubscriptionName $subscriptionName

Now, we’re ready to access the certificate resource which can be used to get the actual certificate from the key vault:

$certificateResource = Get-AzureRmResource -ResourceName $certificateName -ResourceGroupName $resourceGroupName -ResourceType "Microsoft.Web/certificates" -ApiVersion "2015-08-01"

In addition to the location, name, ID, type and resource group name, the returned resource object also has a Properties member which contains details about the certificate:

  • subjectName
  • hostNames
  • issuer
  • issueDate
  • expirationDate
  • thumbprint
  • keyVaultId
  • keyVaultSecretName
  • keyVaultSecretStatus
  • webSpace

All the pieces of information we need to retrieve the certificate from the key vault are encoded in the keyVaultId, except the keyVaultSecretName (which is also in the list above):

/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/xxxxx/providers/microsoft.keyvault/vaults/xxxxx

So by splitting it into an array using / as a separator, you get the resource group name of the key vault as the 5th element from the end and the key vault name as the last element.

So you can extract them like this:

$keyVaultId = $certificateResource.Properties.keyVaultId
$keyVaultData = $keyVaultId.Split("/")
$keyVaultDataLength = $keyVaultData.Length
$keyVaultName = $keyVaultData[$keyVaultDataLength - 1]
$keyVaultResourceGroupName = $keyVaultData[$keyVaultDataLength - 5]

You also get the secret name like this:

$keyVaultSecretName = $certificateResource.Properties.keyVaultSecretName

Now we can grant our user permissions to perform a “get” operation on the key vault:

Set-AzureRmKeyVaultAccessPolicy -ResourceGroupName $keyVaultResourceGroupName -VaultName $keyVaultName -UserPrincipalName $azureLoginEmailId -PermissionsToSecrets get

And you can fetch the secret containing the certificate:

$keyVaultSecret = Get-AzureKeyVaultSecret -VaultName $keyVaultName -Name $keyVaultSecretName

This secret contains a Base64 representation of the certificate which you can convert back to a certificate object using:

$pfxCertObject=New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList @([Convert]::FromBase64String($keyVaultSecret.SecretValueText),"", [System.Security.Cryptography.X509Certificates.X509KeyStorageFlags]::Exportable)

Now, the only step left is to write the certificate to a local PFX file:

[io.file]::WriteAllBytes(".\appservicecertificate.pfx", $pfxCertObject.Export([System.Security.Cryptography.X509Certificates.X509ContentType]::Pkcs12, $pfxPassword))

And there you are!

If you have found this page, you probably have also seen a couple of MSDN blog entries about a similar topic. That’s also where I started, but since I had to adapt the code from these two blog entries, I decided to document it in this blog post. Hope this helps!

 

Cross domain and cross browser web workers

What are web workers?

A web worker is a script that runs in the background in an isolated way. Web workers run in a separate thread and can perform tasks without interfering with the user interface. Since the scripts on a page are executed in a single thread of execution, a long running script will make the page unresponsive. Web workers allow you to hide this from the user and let the browser continue with normal operation while the script runs in the background.

Limitations of web workers

Web workers are great because they can perform computationally expensive tasks without interrupting the user interface. But they also bring quite a few limitations:

  1. Web workers do not have access to the DOM
  2. They do not have access to the document object
  3. These workers do not have access to the window object
  4. Web workers do not have direct access to the parent page.
  5. They will not work if the web page is being served a file:// URL
  6. You are limited by the same origin policy i.e. the worker script must be served from the same domain (including the protocol) as the script that is creating the worker

The first four limitations mean that you cannot move all your JavaScript logic to web workers. The fifth one means that even when developing, you will need to serve your page through a web server (which can be on localhost).

The purpose of this article is to see how to work around the last limitation (same origin policy). But first let’s briefly see how to use a worker.

How to use web workers

Creating a worker is first of all pretty straight forward:

//Creating the worker
var worker = new Worker(workerUrl);

//Registering a callback to process messages from the worker
worker.onmessage = function(event) { ... };

//Sending a message to the worker
worker.postMessage("Hey there!");

//Terminating the worker
worker.terminate();
worker = undefined;

In the worker things then work in a similar way:

//Registering a callback to process messages from the parent
self.onmessage = function (event) { ... };

//Sending a message to the parent
self.postMessage("Hey there!");

Now the problem I had was that I needed to run a worker provided by a partner and therefore served from a different domain. So new Worker(...); will fail with an error similar to this:
Uncaught SecurityError: Failed to construct 'Worker': Script at 'xxx' cannot be accessed from origin 'xxx'

Cross domain workers

So the browser will not allow you to create a worker with a URL pointing to a different domain. But it will allow you to create a blob URL which can be used to initialize your worker.

Blob URLs

A blob is, in general, just a piece of raw data which doesn’t necessarily contain JavaScript code, but it can. You can then have the browser internally generate a URL for it. This URL uses a pseudo protocol called “blob”, so you get a URL of the form blob:origin/UID. The origin is the origin of the page where you create the blob URL and the UID is a generated unique ID e.g. blob:https://mydomain/8126d58c-edbc-ee14-94a6-108b8f215304.

A blob can be created this way:

var blob = new Blob(["some JavaScript code;"], { "type": 'application/javascript' });

The following browser versions seem to support the Blob constructor: IE 10, Edge 12, Firefox 13, Chrome 20, Safari 6, Opera 12.1, iOS Safari 6.1, Android browser/Chrome 53. So if you want to support older versions, you will need to fall back to the BlobBuilder interface, which has been deprecated in favor of the newly introduced Blob constructor in newer browsers:

var blobBuilder = new (window.BlobBuilder || window.WebKitBlobBuilder || window.MozBlobBuilder)();
blobBuilder.append("some JavaScript code;");
var blob = blobBuilder.getBlob('application/javascript');

In order to support old and new browsers, you will want to try using the Blob constructor and revert to the BlobBuilder in case you get an exception:

var blob;
try {
	blob = new Blob(["some JavaScript code;"], { "type": 'application/javascript' });
} catch (e) {
	var blobBuilder = new (window.BlobBuilder || window.WebKitBlobBuilder || window.MozBlobBuilder)();
	blobBuilder.append("some JavaScript code;");
	blob = blobBuilder.getBlob('application/javascript');
}

You can then generate a URL object from the blob:

var url = window.URL || window.webkitURL;
var blobUrl = url.createObjectURL(blob);

Finally, you can create your web worker using this URL object:

var worker = new Worker(blobUrl);

Now, the piece of JavaScript you want to have in your blob would be this one-liner which loads the worker file:

importScripts('https://mydomain.com/worker.js');

So a method to load, create and return the worker both in case we are in a same-domain scenario or in a cross-domain scenario would look like this:

function createWorker (workerUrl) {
	var worker = null;
	try {
		worker = new Worker(workerUrl);
	} catch (e) {
		try {
			var blob;
			try {
				blob = new Blob(["importScripts('" + workerUrl + "');"], { "type": 'application/javascript' });
			} catch (e1) {
				var blobBuilder = new (window.BlobBuilder || window.WebKitBlobBuilder || window.MozBlobBuilder)();
				blobBuilder.append("importScripts('" + workerUrl + "');");
				blob = blobBuilder.getBlob('application/javascript');
			}
			var url = window.URL || window.webkitURL;
			var blobUrl = url.createObjectURL(blob);
			worker = new Worker(blobUrl);
		} catch (e2) {
			//if it still fails, there is nothing much we can do
		}
	}
	return worker;
}

Cross-browser support

Unfortunately, we still have another problem to handle: in some browsers, the failed creation of a web worker will not result in an exception but in an unusable worker in the cross-domain scenario. In this case, an error is raised as an event on the worker. So you need to treat this too as feedback that the creation of the worker failed and that the fallback with the blob URL should be used.

In order to do this, you should probably first extract the fallback into its own function:

function createWorkerFallback (workerUrl) {
	var worker = null;
	try {
		var blob;
		try {
			blob = new Blob(["importScripts('" + workerUrl + "');"], { "type": 'application/javascript' });
		} catch (e) {
			var blobBuilder = new (window.BlobBuilder || window.WebKitBlobBuilder || window.MozBlobBuilder)();
			blobBuilder.append("importScripts('" + workerUrl + "');");
			blob = blobBuilder.getBlob('application/javascript');
		}
		var url = window.URL || window.webkitURL;
		var blobUrl = url.createObjectURL(blob);
		worker = new Worker(blobUrl);
	} catch (e1) {
		//if it still fails, there is nothing much we can do
	}
	return worker;
}

Now we can implement the logic to handle the different cases:

var worker = null;
try {
	worker = new Worker(workerUrl);
	worker.onerror = function (event) {
		event.preventDefault();
		worker = createWorkerFallback(workerUrl);
	};
} catch (e) {
	worker = createWorkerFallback(workerUrl);
}

Of course, you could save yourself this try/catch/onerror logic and just directly use the fallback which should also work in all browsers.

Another option I’ve been using is to still try to initialize the worker with this logic, but only in same-domain scenarios.

In order to do this, you’d need to first implement a check whether we are in a same-domain or a cross-domain scenario e.g.:

function testSameOrigin (url) {
	var loc = window.location;
	var a = document.createElement('a');
	a.href = url;
	return a.hostname === loc.hostname && a.port === loc.port && a.protocol === loc.protocol;
}

It just creates an anchor tag (which will not be bound to the DOM), sets the URL and then checks the parts of the URL relevant for identifying the origin (protocol, hostname and port).

With this function, you can then update the logic in this way:

var worker = null;
try {
	if (testSameOrigin(workerUrl)) {
		worker = new Worker(workerUrl);
		worker.onerror = function (event) {
			event.preventDefault();
			worker = createWorkerFallback(workerUrl);
		};
	} else {
		worker = createWorkerFallback(workerUrl);
	}
} catch (e) {
	worker = createWorkerFallback(workerUrl);
}
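As an aside, in browsers that support the URL constructor, a similar origin check can be written more compactly. This is a sketch rather than a drop-in replacement; the base URL is passed explicitly here so the function is also testable outside a browser (in a page you would pass window.location.href):

```javascript
// Origin check using the URL constructor instead of an anchor element.
function testSameOriginModern(url, base) {
  // URL.origin combines protocol, hostname and port
  return new URL(url, base).origin === new URL(base).origin;
}
```

It compares the same three URL components (protocol, hostname and port) as the anchor-based version.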

This may all sound overly complex just to end up using a web worker, but unfortunately, because of cross-domain restrictions and implementation inconsistencies between browsers, you very often need to have such things in your code.

 

 

AngularJS: Sharing data between controllers

Even though you should try to keep things decoupled and your directives, controllers and services should rather be self-contained, you sometimes do need to share data between controllers. There are basically two main scenarios:

  1. Sharing data between a parent and a child controller
  2. Sharing data between two mostly unrelated controllers e.g. two siblings

Sharing data between a parent and a child controller

Let’s assume you have two controllers, one controller being the parent controller and a child controller:

<div ng-controller="ParentCtrl">
  <div ng-controller="ChildCtrl as vm">
  </div>
</div>

Where the controllers are defined as:

var app = angular.module('sharing', []);

app.controller('ParentCtrl', function($scope) {
});

app.controller('ChildCtrl', function($scope) {
});

Let’s now define a user name in the parent controller:

app.controller('ParentCtrl', function($scope) {
  $scope.user = {
    name: "Henri"
  };
});

Note that you shouldn’t define your variable as a primitive in the scope, since this will cause a shadow property to appear in the child scope, hiding the original property on the parent scope (caused by the JavaScript prototype inheritance). So you should define an object in the parent scope and define the shared variable as a property of this object.
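The shadowing issue comes from plain JavaScript prototype inheritance and can be illustrated outside of Angular (a minimal sketch using Object.create to mimic the child scope):

```javascript
// A primitive on the parent is shadowed by a write from the child:
var parentScope = { name: "Henri" };
var childScope = Object.create(parentScope); // prototypal inheritance, like a child scope
childScope.name = "Paul";                    // creates an own property shadowing the parent's
console.log(parentScope.name);               // still "Henri"

// With an object property, the write goes through the shared reference:
var parentScope2 = { user: { name: "Henri" } };
var childScope2 = Object.create(parentScope2);
childScope2.user.name = "Paul";              // mutates the shared object
console.log(parentScope2.user.name);         // "Paul"
```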

There are now three ways to access this shared variable:

  1. using $parent in HTML code
  2. using $parent in child controller
  3. using controller inheritance

Using $parent in HTML code

You can directly access variables in the parent scope by using $parent:

Hello {{$parent.user.name}}

Using $parent in child controller

Of course, you could also reference the shared variable using $parent in the controller and expose it in the scope of the child controller:

app.controller('ChildCtrl', function($scope) {
  $scope.parentUser = $scope.$parent.user;
});

Then, you can use this scope variable in your HTML code:

Hello {{parentUser.name}}

Using controller inheritance

But since the scope of a child controller inherits from the scope of the parent controller, the easiest way to access the shared variable is actually not to do anything. All you need to do is:

Hello {{user.name}}

Sharing data between two mostly unrelated controllers e.g. two siblings

If you need to share data between two controllers which do not have a parent-child relationship, you can neither use $parent nor rely on prototype inheritance. In order to still be able to share data between such controllers, you have three possibilities:

  1. Holding the shared data in a factory or service
  2. Holding the shared data in the root scope
  3. Using events to notify other controllers about changes to the data

Holding the shared data in a factory or service

AngularJS factories (and services) can contain both methods (business logic) and properties (data) and can be injected in other components (e.g. your controllers). This allows you to define a shared variable in a factory, inject it in both controllers and thus bind scope variables in both controllers to this factory data.

The first step is to define a factory holding a shared value:

app.factory('Holder', function() {
  return {
    value: 0
  };
});

Then you inject this factory in your two controllers:

app.controller('ChildCtrl', function($scope, Holder) {
  $scope.Holder = Holder;
  $scope.increment = function() {
    $scope.Holder.value++;
  };
});

app.controller('ChildCtrl2', function($scope, Holder) {
  $scope.Holder = Holder;
  $scope.increment = function() {
    $scope.Holder.value++;
  };
});

In both controllers, we bind the Holder factory to a scope variable and define a function which can be called from the UI and updates the value of the shared variable:

<div ng-controller="ChildCtrl">
  <h2>First controller</h2>
  <button ng-click="increment()">+</button>{{Holder.value}}
</div>
<div ng-controller="ChildCtrl2">
  <h2>Second controller</h2>
  <button ng-click="increment()">+</button>{{Holder.value}}
</div>

No matter which “+” button you press, both values will be incremented (or rather the shared value will be incremented and reflected in both scopes).
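The sharing works because both controllers hold a reference to the same object returned by the factory. Stripped of Angular, the mechanism boils down to this sketch:

```javascript
// A single shared object, as returned by the Holder factory:
var Holder = { value: 0 };

// Each "controller" keeps a reference to the same object:
var scope1 = { Holder: Holder };
var scope2 = { Holder: Holder };

scope1.Holder.value++;  // increment triggered from the first controller
scope2.Holder.value++;  // increment triggered from the second controller
console.log(scope1.Holder.value);  // 2 – both scopes see the shared value
```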

Holding the shared data in the root scope

Of course, instead of using a factory or a service, you can also directly hold the shared data in the root scope and reference it from any controller. Although this actually works fine, it has a few disadvantages:

  1. Whatever is present in the root scope is inherited by all scopes
  2. You need to use some naming conventions to prevent multiple modules or libraries from overwriting each other’s data

In general, it’s much cleaner to encapsulate the shared data in dedicated factories or services which are injected in the components requiring access to this shared data than to make these data global variables in the root scope.

Using events to notify other controllers about changes to the data

In case you do not want to bind both scopes through factory data (e.g. because you only want to propagate changes from one scope to another on some condition), you can also rely on event notifications between the controllers to sync the data. There are three functions provided by AngularJS to handle events:

  • $emit is used to trigger an event and propagate it to the current scope and recursively to all parent scopes
  • $broadcast is used to trigger an event and propagate it to the current scope and recursively to all child scopes
  • $on is used to listen to event notifications on the scope

Using $emit

Since $emit is propagating events up in the scope hierarchy, there are two use cases for it:

  • Propagating events to parent controllers
  • Efficiently propagating events to unrelated controllers through the root scope

In the first scenario, you emit an event on the child controller scope:

$scope.$emit("namechanged", $scope.name);

And listen to this event on the parent controller scope:

$scope.$on("namechanged", function(event, name) {
  $scope.name = name;
});

In the second scenario, you emit an event on the root scope:

$rootScope.$emit("namechanged", $scope.name);

And listen to this event on the root scope as well:

$rootScope.$on("namechanged", function(event, name) {
  $scope.name = name;
});

In this case there is effectively no further propagation of the event since the root scope has no parent scope. It is thus the preferred way to propagate events to unrelated scopes (and should be preferred to $broadcast in such scenarios).

There is one thing you need to consider when registering to events on the root scope: in order to avoid leaks when controllers are created and destroyed multiple times, you need to unregister the event listeners. A deregistration function is returned by $on. You just need to register this function as a handler for the $destroy event in your controller, replacing the code above by:

var destroyHandler = $rootScope.$on("namechanged", function(event, name) {
  $scope.name = name;
});

$scope.$on('$destroy', destroyHandler);

Using $broadcast

Theoretically, you could also use $broadcast to cover two scenarios:

  • Propagating events to child controllers
  • Propagating events to unrelated controllers through the root scope

Effectively, the second use case doesn’t make much sense since you would basically trigger an event on the root scope and propagate it to all child scopes which is much less efficient than propagating and listening to events on the root scope only.

In the first scenario, you broadcast an event on the parent controller scope:

$scope.$broadcast("namechanged", $scope.name);

And listen to this event on the child controller scope:

$scope.$on("namechanged", function(event, name) {
  $scope.name = name;
});

Similarities and differences between $emit and $broadcast

Both $emit and $broadcast dispatch an event through the scope hierarchy, notifying the registered listeners. In both cases, the event life cycle starts at the scope on which the function was called. Both functions pass all exceptions thrown by the listeners to $exceptionHandler.

An obvious difference is that $emit propagates upwards and $broadcast downwards (in the scope hierarchy). Another difference is that when you use $emit, the event will stop propagating if one of the listeners cancels it, while the event cannot be canceled when propagated with $broadcast.
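The propagation directions can be illustrated with a toy scope tree. This is a sketch for illustration only, not Angular’s implementation (it omits cancellation, the event object and exception forwarding):

```javascript
// Minimal scope tree: $emit walks up to the parents, $broadcast walks
// down to the children. Not Angular's actual implementation.
function Scope(parent) {
  this.parent = parent || null;
  this.children = [];
  this.listeners = {};
  if (parent) parent.children.push(this);
}
Scope.prototype.$on = function (name, fn) {
  (this.listeners[name] = this.listeners[name] || []).push(fn);
};
Scope.prototype.fire = function (name, args) {
  (this.listeners[name] || []).forEach(function (fn) { fn.apply(null, args); });
};
Scope.prototype.$emit = function (name) {
  var args = Array.prototype.slice.call(arguments, 1);
  var scope = this;
  while (scope) {        // current scope, then all parents up to the root
    scope.fire(name, args);
    scope = scope.parent;
  }
};
Scope.prototype.$broadcast = function (name) {
  var args = Array.prototype.slice.call(arguments, 1);
  this.fire(name, args); // current scope, then recursively all children
  this.children.forEach(function (child) {
    child.$broadcast.apply(child, [name].concat(args));
  });
};
```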

Demo

You can see the code from this post in action on plunker.

When to use $timeout

$timeout is the AngularJS equivalent of setTimeout in JavaScript. It is basically a wrapper around window.setTimeout. So the basic functionality provided by $timeout is to have a piece of code executed asynchronously. As JavaScript doesn’t support thread spawning, asynchronously here means that the execution of the function is delayed.

Differences to setTimeout

There are basically two differences between using $timeout and using setTimeout directly:

  1. $timeout will by default execute a $apply on the root scope after the function is executed. This will cause a model dirty check to be run.
  2. $timeout wraps the call of the provided function in a try/catch block and forwards any thrown exception to the global exception handler (the $exceptionHandler service).
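The wrapping can be sketched as follows. This is a simplified illustration, not Angular’s source: the real service schedules the function via window.setTimeout, which is omitted here for brevity, and skips $apply when invokeApply is false:

```javascript
// Simplified sketch of what $timeout wraps around the provided function:
function invokeWrapped(fn, onError, onDone) {
  try {
    return fn();
  } catch (e) {
    onError(e);  // stands in for the $exceptionHandler service
  } finally {
    onDone();    // stands in for $rootScope.$apply()
  }
}
```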

Parameters

All parameters of $timeout are optional (although it of course doesn’t make much sense to call it without any parameters).

The first parameter is a function whose execution is to be delayed.

The second parameter is a delay in milliseconds. You should not rely, though, on the delay being respected exactly. Browsers also enforce a minimum delay (typically 4 milliseconds in all modern browsers), so you can set a smaller value but will probably not see a difference.

The third parameter is a flag (i.e. boolean, true or false) which, when set to true, will cause $timeout not to execute $apply once the function is executed.

All additional parameters will be handled as parameters for the provided function and will be passed to the called function.

Delayed function execution

So two of the scenarios where you would want to use $timeout (or setTimeout in this case) are:

either when you want to execute a function later on:

var executeInTenSeconds = function() {
    //Code executed 10 seconds later
};

$timeout(executeInTenSeconds, 10000);

or when you want to execute a function when the execution of the current block is finished:

var executeLater = function() {
    //Code executed once we're done with the current execution
};

$timeout(executeLater);

Additional parameters

$timeout passes all parameters after the third one to the function being called. You can thus pass parameters to the function like this:

var executeInTenSeconds = function(increment) {
    $scope.myValue += increment;
};

$timeout(executeInTenSeconds, 10000, true, 10);

This will basically execute executeInTenSeconds(10); after 10 seconds, trigger the global exception handler in case of an unhandled exception and run a digest cycle afterwards.

Model dirty checking

A scenario where you’d rather use $timeout than setTimeout is when you are modifying the model and need a digest cycle (dirty check) to run after the provided function is executed e.g.:

var executeInTenSeconds = function() {
    //Code executed 10 seconds later
	$scope.myScopeVar = "hello";
};

$timeout(executeInTenSeconds, 10000);

After 10 seconds our function will be called, it will change the value of a scope variable and after that a digest cycle will be triggered which will update the UI.

But there are cases when you actually do not need a model dirty check (e.g. when you call the server but do not need to reflect the results of this call in your application). In such cases, you should use the third parameter of $timeout (invokeApply) e.g.:

var executeInTenSeconds = function() {
    //Code executed 10 seconds later and which doesn't require a digest cycle	
};

$timeout(executeInTenSeconds, 10000, false);

When the third parameter of $timeout is set to false, $timeout will skip the $rootScope.$apply() which is usually executed after the provided function.

Global exception handler

Since $timeout also wraps the provided function in a try/catch block and forwards all unhandled exceptions to the global exception handler, you may be tempted to use it in other cases too. I personally feel this is just a solution for lazy developers, as you should rather wrap your function code in a try/catch block e.g.:

function($exceptionHandler) {
	try {
	  // Put here your function code
	} catch (e) {
	  // Put here some additional error handling call
	  $exceptionHandler(e); // Trigger the global exception handler
	}
	finally {
	  // Put here some cleanup logic
	}
}

And instead of duplicating this code everywhere you should probably consider writing your own reusable provider e.g.:

'use strict';

function $ExceptionWrapperProvider() {
  this.$get = ['$exceptionHandler', function($exceptionHandler) {

    function exceptionWrapper(fn) {
      try {
        // Forward only the actual arguments, not fn itself
        return fn.apply(null, Array.prototype.slice.call(arguments, 1));
      } catch (e) {
        $exceptionHandler(e);
      }
      return null;
    }

    return exceptionWrapper;
  }];
}

$timeout without function

All parameters of $timeout are optional, even the function. So why would you need to call $timeout without a function? Basically, if you call $timeout without a function, there is no function execution, thus no exceptions, and all that remains is the delay and the digest cycle. So calling $timeout without a function e.g.:

$timeout(10000);

Basically just triggers a digest cycle after the provided delay. In the example above, it would cause a digest cycle to be run after 10 seconds. But of course doing this is probably a sign that something’s wrong in your application (you shouldn’t need to run delayed digest cycles but should rather run them after some logic has been executed). But if something runs asynchronously outside of your code, you do not get called back when it’s done and this external code somehow modifies something in the Angular scopes, you might need to handle it this way. In this case, using $timeout is not really different from using:

setTimeout(function () {
	$rootScope.$apply();
}, 10000);

It’s just fewer lines of code…

Failed to initialize the PowerShell host while installing packages

While trying to install the NEST NuGet package, I got the following error when the JSON.NET post-install PowerShell script was executed:

Failed to initialize the PowerShell host. If your PowerShell execution policy setting is set to AllSigned, open the Package Manager Console to initialize the host first.

I then tried to update the execution policy by executing the following in a PowerShell opened as Administrator:

start-job { Set-ExecutionPolicy Unrestricted } -RunAs32 | wait-job | Receive-Job

Unfortunately, this failed with the following error message:

Windows PowerShell updated your execution policy successfully, but the setting is overridden by a policy defined at a
more specific scope. Due to the override, your shell will retain its current effective execution policy of
RemoteSigned. Type “Get-ExecutionPolicy -List” to view your execution policy settings. For more information please see
“Get-Help Set-ExecutionPolicy”.
+ CategoryInfo : PermissionDenied: (:) [Set-ExecutionPolicy], SecurityException
+ FullyQualifiedErrorId : ExecutionPolicyOverride,Microsoft.PowerShell.Commands.SetExecutionPolicyCommand

I then tried to set it to RemoteSigned instead of Unrestricted:

start-job { Set-ExecutionPolicy RemoteSigned } -RunAs32 | wait-job | Receive-Job

This didn’t cause any error but even after restarting Visual Studio I was not able to install JSON.NET.

The only thing that worked was reinstalling the NuGet Package Manager for Visual Studio:

  1. In Tools -> Extensions and Updates, uninstall NuGet Package Manager for Visual Studio
  2. Restart Visual Studio
  3. In Tools -> Extensions and Updates, install NuGet Package Manager for Visual Studio
  4. Restart Visual Studio

And afterwards I could install NEST, including JSON.NET!