C#: Understanding CLOSE_WAIT and FIN_WAIT_2

Here’s a short C# program which can be used to better understand what the TCP states CLOSE_WAIT and FIN_WAIT_2 are and why you sometimes see connections stuck in these states:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.NetworkInformation;
using System.Net.Sockets;

namespace TcpTester
{
    internal static class Program
    {
        private const int Port = 15000;
        private const string Hostname = "127.0.0.1";

        private static void Main(string[] args)
        {
            if (args.Length > 0 && args[0] == "client")
            {
                // Started in client mode
                var tcpClient = new TcpClient();
                tcpClient.Connect(Hostname, Port);
                Console.WriteLine("Connected to {0}:{1}", Hostname, Port);
                PrintState();
                Console.WriteLine("Press any key to close the connection from this side.");
                Console.ReadKey();
                tcpClient.Close();
                PrintState();
            }
            else
            {
                // Started in server mode
                var tcpListener = new TcpListener(IPAddress.Parse(Hostname), Port);
                tcpListener.Start();
                Console.WriteLine("Listening on {0}:{1}", Hostname, Port);
                TcpClient tcpClient = tcpListener.AcceptTcpClient();
                tcpListener.Stop();
                Console.WriteLine("Client connected on {0}:{1}", Hostname, Port);
                PrintState();
                Console.WriteLine("Press any key to close the connection from this side.");
                Console.ReadKey();
                tcpClient.Close();
                PrintState();
            }
        }

        private static void PrintState()
        {
            IEnumerable<TcpConnectionInformation> activeTcpConnections =
                IPGlobalProperties.GetIPGlobalProperties().GetActiveTcpConnections()
                    .Where(c => c.LocalEndPoint.Port == Port || c.RemoteEndPoint.Port == Port);
            foreach (TcpConnectionInformation connection in activeTcpConnections)
            {
                Console.WriteLine("{0} {1} {2}", connection.LocalEndPoint, connection.RemoteEndPoint, connection.State);
            }
        }
    }
}

You can start the program without parameters to start a server, and with the parameter “client” to start a client (I guess that was kind of obvious…).
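
Assuming the compiled executable is called TcpTester.exe (named after the namespace; your assembly name may differ), this means:

TcpTester.exe            starts the server
TcpTester.exe client     starts the client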

The server listens on 127.0.0.1:15000 and the client connects to it. First start the server. The following will be written to the console:

Listening on 127.0.0.1:15000

Then start the client in another window. The following will appear in the client window:

Connected to 127.0.0.1:15000
127.0.0.1:15000 127.0.0.1:57663 Established
127.0.0.1:57663 127.0.0.1:15000 Established

This tells you that the client is connected from port 57663 (this will change every time you run this test) to port 15000 (where the server is listening).

In the server window, you will see that it got a client connection and the same information regarding port and connection states.

Then press any key on the server console and the following will be displayed:

127.0.0.1:15000 127.0.0.1:57663 FinWait2
127.0.0.1:57663 127.0.0.1:15000 CloseWait

So once the server closed the connection, the connection on the server side went to FIN_WAIT_2 and the one on the client side went to CLOSE_WAIT.

Then press any key in the client console to get the following displayed:

127.0.0.1:15000 127.0.0.1:57663 TimeWait

The connection will stay in the TIME_WAIT state for some time. If you wait really long before pressing a key in the client console, this last line will not be displayed at all, presumably because the server-side connection in FIN_WAIT_2 is dropped after a timeout, so there is no socket left to transition to TIME_WAIT.

So, this should make it easier to understand what the TCP states CLOSE_WAIT and FIN_WAIT_2 are: when the connection has been closed locally but not yet remotely, the local connection is in the state FIN_WAIT_2 and the remote one in CLOSE_WAIT.

For more details about the different TCP states, please refer to TCP: About FIN_WAIT_2, TIME_WAIT and CLOSE_WAIT.

OWIN: Serving static files from an external directory

I am working on an application with a self-hosted OWIN server where the UI is running in an embedded browser and the backend part of the application is implemented using WebApi. When I generate files in the backend, I store them in a subfolder of the application (called “uploads”) and configure my application so that files from this folder are served statically:

appBuilder.UseStaticFiles("/uploads");

It all worked fine until an installer was created for the application, which installed it in c:\Program Files. Unfortunately, the application cannot write to the uploads subfolder there, so this functionality broke. Obviously, the solution is to be a good Windows citizen and store files created by the application in the LocalAppData directory, e.g. instead of using:

Path.Combine(AppDomain.CurrentDomain.SetupInformation.ApplicationBase, "uploads")

use:

Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), @"MyCompany\uploads")

This solves the issue writing to the folder. All that is now missing is to tell OWIN to serve files from this folder whenever the “/uploads” virtual path is accessed:

var staticFilesOptions = new StaticFileOptions();
staticFilesOptions.RequestPath = new PathString("/uploads");
var uploadPath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), @"MyCompany\uploads");
Directory.CreateDirectory(uploadPath);
staticFilesOptions.FileSystem = new PhysicalFileSystem(uploadPath);
appBuilder.UseStaticFiles(staticFilesOptions);

Note that the folder needs to exist before you use UseStaticFiles, hence the Directory.CreateDirectory call.
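
Putting it all together, a minimal self-hosted Startup class could look like this (a sketch assuming the Microsoft.Owin.StaticFiles and Microsoft.Owin.FileSystems packages; “MyCompany” is just a placeholder):

using System;
using System.IO;
using Microsoft.Owin;
using Microsoft.Owin.FileSystems;
using Microsoft.Owin.StaticFiles;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder appBuilder)
    {
        // Writable folder under LocalAppData; must exist before UseStaticFiles is called
        var uploadPath = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
            @"MyCompany\uploads");
        Directory.CreateDirectory(uploadPath);

        // Serve the physical folder under the /uploads virtual path
        appBuilder.UseStaticFiles(new StaticFileOptions
        {
            RequestPath = new PathString("/uploads"),
            FileSystem = new PhysicalFileSystem(uploadPath)
        });

        // ... WebApi configuration etc.
    }
}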

C#: WPF / Console Hybrid Application

I am working on a WPF application which needs to also provide a command line interface running as a console application. This means that depending on how it is started, it should either work as a console application or start with a GUI.

In my case I already had a WPF application by the time this requirement came in. So I had to adapt my existing application to make it a hybrid application.

First I created a new class Program with a Main method:

using System;

namespace HybridApp
{
    public class Program
    {
        public static void Main(string[] args)
        {
        }
    }
}

You might get an error saying that there are two Main methods, so you will have to select this new class as the Startup object in the project properties.

Then you need to set the output type to “Console Application” in the project properties:

Output Type Console Application

This is required so that your application doesn’t automatically start with a GUI.

Now we’ll need to either keep running as a console application or start the WPF application, depending on how the application was started.

First in order to be able to start the WPF application, you’ll need to add the STAThread attribute to your main method:

using System;

namespace HybridApp
{
    public class Program
    {
        [STAThread]
        public static void Main(string[] args)
        {
        }
    }
}

We’ll now assume that the application is called with the -g argument to start the UI. In order to start the GUI, all you need to do is call the Main method of your WPF application:

using System;

namespace HybridApp
{
    public class Program
    {
        [STAThread]
        public static void Main(string[] args)
        {
            if (args.Length > 0 && args[0] == "-g")
            {
                // GUI mode
                App.Main();
            }
            else
            {
                // console mode
            }
        }
    }
}

Now, you’ll notice that when starting in GUI mode, you still get a console window displayed. This doesn’t look good. I did not find a way to completely get rid of it but at least managed to hide it before starting the UI, so that you only see the console window for a very short time. To do this, you need to use GetConsoleWindow from kernel32.dll to get the handle of the console window and ShowWindow from user32.dll to hide it:

using System;
using System.Runtime.InteropServices;

namespace HybridApp
{
    public class Program
    {
        [DllImport("kernel32.dll")]
        private static extern IntPtr GetConsoleWindow();

        [DllImport("user32.dll")]
        private static extern bool ShowWindow(IntPtr hWnd, int nCmdShow);

        [STAThread]
        public static void Main(string[] args)
        {
            if (args.Length > 0 && args[0] == "-g")
            {
                // GUI mode
                ShowWindow(GetConsoleWindow(), 0 /*SW_HIDE*/);
                App.Main();
            }
            else
            {
                // console mode
            }
        }
    }
}

Now all you need to do is implement the logic for the console mode and you’re done!

Update: The problem with this solution is the console briefly flickering on the screen before it is hidden (when you start the GUI mode). I also tried another solution: making my application a Windows application and using AttachConsole to connect to the parent console. This kind of works. Your GUI starts without a console being briefly displayed, and the output of your command line code is displayed on the console. The problem is that a Windows application is detached from the calling console: even though it still writes to it, it does not control it, and control is immediately returned to the calling application/user. This is of course not what you’d expect from a command line application. The only way to retain control of the console is to make your application a Console application. Still, a sketch of the AttachConsole variant follows for reference.
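
This is roughly what the AttachConsole variant boils down to (a sketch, assuming the output type has been set to “Windows Application”):

using System;
using System.Runtime.InteropServices;

namespace HybridApp
{
    public class Program
    {
        [DllImport("kernel32.dll")]
        private static extern bool AttachConsole(int dwProcessId);

        // Special value: attach to the console of the parent process (e.g. cmd.exe)
        private const int ATTACH_PARENT_PROCESS = -1;

        [STAThread]
        public static void Main(string[] args)
        {
            if (args.Length > 0 && args[0] == "-g")
            {
                // GUI mode: no console window is ever created
                App.Main();
            }
            else
            {
                // Console mode: write to the parent console; note that the calling
                // shell has already returned control to the user at this point
                AttachConsole(ATTACH_PARENT_PROCESS);
                Console.WriteLine("Running in console mode");
            }
        }
    }
}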

Update: Another solution is to do it the way Microsoft builds devenv (Visual Studio) or the way the Windows Script Host does it: there are two executables, devenv.com and devenv.exe (or wscript.exe and cscript.exe). One is used from the command line and the other to start the GUI. If you need to start in one mode and then switch to the other, you can start the second executable and exit. For devenv, this also relies on the fact that .com files are picked up before .exe files when you call it from the command line without the file extension. But in my case I just didn’t want to produce two executables, so I decided to live with the short console flickering.

Orchard CMS: Exporting and importing linked content items

I am working on a software based on Orchard CMS. My module defines multiple content item types which reference each other:

(Diagram: a DataGrid references a DataQuery, which references a DataSource)

A DataSource defines the location of data. A DataQuery defines a query taking the referenced data as input and returning a collection of objects with properties. A DataGrid defines how the properties of the objects returned by the DataQuery are displayed in an HTML table.

Now let’s have a look at how the references between content items are handled. As an example, we’ll see how the DataQueries are referencing the DataSources.

First, the ContentPartRecord and ContentPart definition of DataQuery looks like this:

public class DataQueryPartRecord : ContentPartRecord
{
	public virtual ContentItemRecord DataSource { get; set; }
	...
}

public class DataQueryPart : ContentPart<DataQueryPartRecord>
{
	private readonly LazyField<IContent> _dataSource = new LazyField<IContent>();

	public LazyField<IContent> DataSourceField
	{
		get { return _dataSource; }
	}

	public IContent DataSource
	{
		get { return _dataSource.Value; }
		set { _dataSource.Value = value; }
	}

	...
}

Of course my parts have other fields but they are not relevant here.

The handler implements the lazy loading and record updating:

public class DataQueryHandler : ContentHandler
{
	private readonly IContentManager _contentManager;

	public DataQueryHandler(IRepository<DataQueryPartRecord> repository, IContentManager contentManager)
	{
		Filters.Add(StorageFilter.For(repository));
		_contentManager = contentManager;

		OnInitializing<DataQueryPart>(PropertySetHandlers);
		OnLoaded<DataQueryPart>(LazyLoadHandlers);
	}

	private void LazyLoadHandlers(LoadContentContext context, DataQueryPart part)
	{
		// add handlers that will load content just-in-time
		part.DataSourceField.Loader(
			() =>
				part.Record.DataSource == null
					? null
					: _contentManager.Get(part.Record.DataSource.Id, VersionOptions.AllVersions));
	}

	private static void PropertySetHandlers(InitializingContentContext context, DataQueryPart part)
	{
		// add handlers that will update records when part properties are set
		part.DataSourceField.Setter(query =>
		{
			part.Record.DataSource = query == null ? null : query.ContentItem.Record;
			return query;
		});

		// Force call to setter if we had already set a value
		if (part.DataSourceField.Value != null)
			part.DataSourceField.Value = part.DataSourceField.Value;
	}
}

Exporting and importing content items that reference each other would have been easier to describe without the lazy loading, but I do have it in my code and didn’t have the time and motivation to refactor it just for this post. It is not the important part here, so do not let the lazy loading confuse you.

On the database side, the Migrations.cs file defines the following:

// Creating table DataQueryPartRecord
SchemaBuilder.CreateTable("DataQueryPartRecord",
	table =>
		table.ContentPartRecord()
			.Column<int>("DataSource_Id"));

ContentDefinitionManager.AlterPartDefinition("DataQueryPart",
	part =>
		part.Attachable()
			.WithDescription(
				"A very nice description..."));

ContentDefinitionManager.AlterTypeDefinition("DataQuery",
	cfg =>
		cfg.WithPart("CommonPart")
			.WithPart("RoutePart")
			.WithPart("DataQueryPart")
			.WithPart("LocalizationPart")
			.WithPart("IdentityPart")
			.Creatable()
			.Draftable()
			.Indexed());

Of course I actually have more columns in my table but they are not relevant here. What’s important is that the table for my ContentPartRecord contains an int column called “DataSource_Id” referencing the ID of the DataSource record, and that my content type has an IdentityPart. The IdentityPart is required anyway in order to properly import the exported data, and especially to make sure that you update the appropriate content item if it already exists when you import the data.

Now as described in a previous post you need to implement the Importing and Exporting methods in your driver. My first implementation was exporting the referenced DataSource ID as part of the data for the DataQuery:

context.Element(part.PartDefinition.Name).SetAttributeValue("DataSource", part.DataSource.Id);

And on import, I was fetching the DataSource from the ID:

int dataSourceId = int.Parse(context.Attribute(part.PartDefinition.Name, "DataSource"));
part.DataSource = _contentManager.Get(dataSourceId, VersionOptions.AllVersions);

Now the problem is that when I create new content items they get an incrementing ID, and when I delete some of them, there are gaps in the IDs. Also, on the other server where I will import the data, the ID of a content item from the original server might already be in use for a completely different content item. So the content item ID will always be reassigned. This means that when you import the DataSource it might get a different ID. When the DataQuery is imported, the exported ID might then point nowhere, or to a completely different content item.

So what we need is some kind of ID which is maintained during export and import. Well, that’s exactly what the IdentityPart is for. This is what Orchard uses to identify whether a content item already exists and needs to be inserted or updated. The IdentityPart contains a unique Identifier which can be exported and then used during import to look up the appropriate content item:

protected override void Importing(DataQueryPart part, ImportContentContext context)
{
	...
	
	string dataSourceIdentifier = context.Attribute(part.PartDefinition.Name, "DataSourceIdentifier");
	part.DataSource = _contentManager
                .Query<IdentityPart, IdentityPartRecord>(VersionOptions.Latest)
                .Where(p => p.Identifier == dataSourceIdentifier)
                .List<ContentItem>().FirstOrDefault();
}

protected override void Exporting(DataQueryPart part, ExportContentContext context)
{
	...
	
	context.Element(part.PartDefinition.Name).SetAttributeValue("DataSourceIdentifier", part.DataSource.As<IdentityPart>().Identifier);
}
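
One caveat: if a DataQuery can exist without a DataSource, part.DataSource.As<IdentityPart>() will throw during export. A defensive variant (a sketch based on the code above) only writes the attribute when a reference is actually set:

protected override void Exporting(DataQueryPart part, ExportContentContext context)
{
	...

	// Only export the reference if a DataSource is actually assigned
	if (part.DataSource != null)
	{
		context.Element(part.PartDefinition.Name).SetAttributeValue("DataSourceIdentifier", part.DataSource.As<IdentityPart>().Identifier);
	}
}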

So now we got rid of the problem with changing IDs. But there is still a remaining issue: the export file contains all exported content items sorted alphabetically by content type, so the DataQueries come before the DataSources. If the DataQuery being imported references a DataSource which already exists, resolving the Identifier to a content item will work. Otherwise it will fail and return null.

Luckily, there is an easy workaround: when the resolved content item is null (i.e. it has not been imported yet) we can create a dummy content item of the appropriate type, with the right Identifier and no other data. This solves the referencing problem. Once the DataSources are processed, Orchard will update the dummy content item to contain the actual data. This only requires a small change in the Importing method:

protected override void Importing(DataQueryPart part, ImportContentContext context)
{
	...
	
	string dataSourceIdentifier = context.Attribute(part.PartDefinition.Name, "DataSourceIdentifier");
	part.DataSource = _contentManager
                .Query<IdentityPart, IdentityPartRecord>(VersionOptions.Latest)
                .Where(p => p.Identifier == dataSourceIdentifier)
                .List<ContentItem>().FirstOrDefault();
	if (part.DataSource == null)
	{
		var item = _contentManager.New("DataSource");
		var dataSourcePart = item.As<IdentityPart>();
		dataSourcePart.Identifier = dataSourceIdentifier;
		_contentManager.Create(item);
		part.DataSource = item;
	}
}

Using this, we can import content items in any order, have placeholders created if the items are imported out of order, and make sure that references are kept through the export/import process.

This all may seem quite complex, but once you’ve implemented it for one content type, e.g. DataQuery, all you need is some copy&paste and search&replace for the other content types, e.g. DataGrid.

Also, if for some reason you need to store a reference to a content item in text form, it’s important to make sure that all referenced content items have an IdentityPart and that you reference the Identifier of this part instead of the ID of the content item. Getting the Identifier of a content item which has an IdentityPart is very easy:

var item = ... // some content item
var identityPart = item.As<IdentityPart>();
var identifier = identityPart.Identifier;

Analyze your code using NDepend – Part 1

Whether you have to get on board an existing .NET project or you want to get an overview of your own project and use a more holistic approach to code improvement, you will quickly reach the limits of the tools provided by default in Visual Studio. Fortunately, Visual Studio (like most major development environments) provides a good way to extend it, and over time quite a few third-party tool vendors have started providing additional functionality for your favorite IDE.

One such tool is NDepend. The goal of NDepend is first to provide you with means to browse and understand your code using a better visualization than what’s available by default in Visual Studio. But NDepend allows you to go further than that. Let’s have a look at how you can improve your code and manage code improvement using this tool.

Disclaimer: I was asked by the creator of NDepend whether I’d be interested in test driving it and sharing my experience on my blog. But the reason I accepted to write this article is that NDepend has helped me improve the code quality in one of my projects and I feel this is a tool with a lot of potential. I do not work for NDepend, nor do I get any financial compensation from them (except for a free license for the tool).

Download and Installation

You can download a trial version of NDepend from their web site. The installation process is pretty straightforward: you download a ZIP file, extract it to some directory and start the installation executable. If you have a license, you just need to save the license file to the directory where you’ve extracted the ZIP file.

During the installation process, the installer will identify which Visual Studio versions are installed (it supports VS 2008 through VS 2013) and allow you to install NDepend as an add-in. One thing I really liked was that I was able to install and activate it (via the menu Tools | Add-in Manager) without needing to restart Visual Studio.

Once you’ve activated NDepend, a new menu item will be available in your menu bar. A good place to start is the Dashboard.

Dashboard

The Dashboard shows you an overview of all relevant metrics related to your project as well as their evolution over time:

NDepend Dashboard

It shows some general project metrics like:

  • Number of lines of code
  • Number of types
  • Percentage of lines of comment
  • Max and average Method Complexity
  • Code Coverage
  • Usage of assemblies

But it also shows some metrics related to coding rules. These code rule violations are clustered into critical and non-critical rules. All these metrics are available either as a view of the current status or as a graph showing the evolution over time. Very often, especially when using NDepend on a project which has already been around for some time, it is not possible to fix all violations at once, and it’s important to be able to see whether you’re slowly reducing the number of violations or whether there is a negative dynamic and the number of violations is actually increasing.

All diagrams can be exported to HTML. This makes it easy to incorporate them in external reports about your project. If like me you’re stuck with an older version of Internet Explorer, you might get an almost empty display when looking at the exported web page. You then just have to open it in another browser. It’d be nice if it also worked in IE8 but let’s face it, web technologies keep evolving at a high pace and you can’t really expect everything to work in a browser which is already more than 5 years old…

Code Rules

One of the most valuable features of NDepend is that all code rules are based on queries against a model. This means that you can adapt them as needed, e.g. to change a threshold or to consider additional criteria. So you can adapt existing rules (the ones provided with NDepend) but also add your own rule groups and queries. Of course, that’s something you will only be able to do once you’ve invested enough time in learning how NDepend works. But modifying an existing rule is very easy.

Just as an example: there is a rule called “Methods too big”. It basically scans for methods with more than 30 lines of code. Let’s say you decide that in your project it’s fine to have methods of up to 40 lines of code. You can just click on one of the links in the “Code Rules” widget of the Dashboard:

NDepend Dashboard Code Rules

It will open the Queries and Rules Explorer. On the left hand side, you’ll see all rule groups:

NDepend Dashboard Code Rules Groups

There you can also create new groups or delete existing ones. You can also immediately see whether any of the rules in a group returned a warning. When you click on one of the groups, you’ll see all related queries on the right hand side:

NDepend Dashboard Code Rules queries

Queries can be activated and deactivated. And you can open the queries and rule editor by clicking on one of the queries.

The editor is a split pane with the query at the top, e.g. in the case of “Methods too big” you will see the following code:

// <Name>Methods too big</Name>
warnif count > 0 from m in JustMyCode.Methods where 
   m.NbLinesOfCode > 30
   // We've commented # IL Instructions, because with LINQ syntax, a few lines of code can compile to hundreds of IL instructions.
   // || m.NbILInstructions > 200
   orderby m.NbLinesOfCode descending,
           m.NbILInstructions descending
select new { m, m.NbLinesOfCode, m.NbILInstructions }

// Methods where NbLinesOfCode > 30 or NbILInstructions > 200
// are extremely complex and should be split in smaller methods.
// See the definition of the NbLinesOfCode metric here 
// http://www.ndepend.com/Metrics.aspx#NbLinesOfCode

And below, you will see where a match was found. You will notice that when the violation occurs in an auto-property setter or getter, it is not possible to jump to this line of code by clicking on the link. When asked, the NDepend support answered that the PDB file doesn’t provide this info and hence NDepend doesn’t know the location. So though I do hope that they find a solution for this in the future, you can work around it by clicking on the type name and navigating to the appropriate location. Since none of us should have types with thousands of lines of code, it is no big deal, right? 😉

All queries I’ve looked into seemed to be very well commented. This made it easy to understand what the rule is about and how to modify it to comply with one’s coding rules.

So in our example, in order to change the threshold of the “Methods too big” query, all you need to do is replace 30 by e.g. 40 and save. Additionally, you can press the “Critical” button to mark this rule as a deal breaker, i.e. when the rules are checked during a build, it will return an error code so that you can refuse to build the software if some critical violations are detected (of course I doubt you’ll use it with a “Methods too big” violation).
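
The adapted query then simply reads (the only difference being the threshold):

// <Name>Methods too big</Name>
warnif count > 0 from m in JustMyCode.Methods where 
   m.NbLinesOfCode > 40
   orderby m.NbLinesOfCode descending,
           m.NbILInstructions descending
select new { m, m.NbLinesOfCode, m.NbILInstructions }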

The NDepend online documentation provides a lot of useful information about the query language. Since it’s based on the C# LINQ syntax, you’ll need to be comfortable with LINQ in order to start working with custom rules. But I guess most developers and architects working with C# are familiar with LINQ anyway…

Trend charts and Baselines

Apart from the ability to customize and extend the NDepend rule set, another thing I found very useful is the ability to create and work with trend charts. The ability to create baselines for comparison is also a nice related feature.

Whenever you start working with static code analysis on a project which has already been around for quite some time, you end up with a huge list of findings. It’s rarely the case that you can fix them all in the short term. What’s really important is to fix the critical violations, make sure that you do not introduce more violations as your code evolves, and with every new version try to reduce the number of violations (starting with the most important ones).

Trend Charts

In order to create a trend chart, click on the “Create Trend Chart” button at the top of the Dashboard. The following dialog will appear:

NDepend Create Trend Chart

You can give your new trend chart a name, choose which series it will contain and how they should look. Once you save, your new trend chart will be displayed in the Dashboard.

A few useful trend charts are already displayed by default in the Dashboard:

  • Lines of Code
  • Rules Violated
  • Rules Violations
  • Percentage Coverage by Tests
  • Maximum complexity, lines of code, number of methods for a type, nesting depth…
  • Average complexity, lines of code, number of methods for a type, nesting depth…
  • Third-Party Usage

Using these trend charts, it’s dead easy to get an overview of whether you’re going in the right direction or keep making your software more complex and error-prone.

More about how I use NDepend to analyze and improve my code will come in a follow-up article…

Downloading files and directories via SFTP using SSH.Net

SSH.NET is a .NET library implementing the SSH2 client protocol. It is inspired by Sharp.SSH, a .NET port of the Java library JSch. It allows you to execute SSH commands and also provides both SCP and SFTP functionality.

In this article, I’ll show you how to download a complete directory tree using SSH.NET.

First you’ll need to add a few usings:

using System;
using System.IO;
using Renci.SshNet;
using Renci.SshNet.Common;
using Renci.SshNet.Sftp;

SSH.NET can be added to your project using NuGet.

In order to work with SFTP, you’ll need to get an instance of SftpClient. You can either directly give details like host, port, username and password or you can provide a ConnectionInfo object. In this example we’ll use a KeyboardInteractiveConnectionInfo. This is required if the server expects an interactive keyboard authentication to provide the password. The alternatives are PasswordConnectionInfo and PrivateKeyConnectionInfo (a short sketch of both follows below).

First we’ll create the ConnectionInfo object:

var connectionInfo = new KeyboardInteractiveConnectionInfo(Host, Port, Username);

Host, Port and Username are constants I’ve defined before.
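
If your server accepts plain password or public key authentication instead, the alternative ConnectionInfo variants look like this (a sketch; the key file path is just an example):

// Plain password authentication, no prompt handler needed:
var passwordConnectionInfo = new PasswordConnectionInfo(Host, Port, Username, Password);

// Public key authentication with a private key file:
var privateKeyConnectionInfo = new PrivateKeyConnectionInfo(Host, Port, Username, new PrivateKeyFile(@"c:\keys\id_rsa"));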

Then we need to define a delegate which will get the prompts returned by the server and will send the password when requested:

connectionInfo.AuthenticationPrompt += delegate(object sender, AuthenticationPromptEventArgs e)
{
	foreach (var prompt in e.Prompts)
	{
		if (prompt.Request.Equals("Password: ", StringComparison.InvariantCultureIgnoreCase))
		{
			prompt.Response = Password;
		}
	}
};

It waits for the prompt “Password: ” and sends the password I’ve defined in a constant called Password.

Then using this SFTP client, we’ll connect to the server and download the contents of the directory recursively:

using (var client = new SftpClient(connectionInfo))
{
	client.Connect();
	DownloadDirectory(client, Source, Destination);
}

Source is the directory you want to download from the remote server and Destination is the local target directory, e.g.:

private const string Source = "/tmp";
private const string Destination = @"c:\temp";

Now we’ll define the DownloadDirectory method. It will get the directory listing and iterate through the entries. Files will be downloaded and for each directory in there, we’ll recursively call the DownloadDirectory method:

private static void DownloadDirectory(SftpClient client, string source, string destination)
{
	var files = client.ListDirectory(source);
	foreach (var file in files)
	{
		if (!file.IsDirectory && !file.IsSymbolicLink)
		{
			DownloadFile(client, file, destination);
		}
		else if (file.IsSymbolicLink)
		{
			Console.WriteLine("Ignoring symbolic link {0}", file.FullName);
		}
		else if (file.Name != "." && file.Name != "..")
		{
			var dir = Directory.CreateDirectory(Path.Combine(destination, file.Name));
			DownloadDirectory(client, file.FullName, dir.FullName);
		}
	}
}

I am ignoring symbolic links because trying to download them just fails and the SftpFile class provides no way to find out what the link points to. “.” and “..” are also ignored.

Now let’s see how to download a single file:

private static void DownloadFile(SftpClient client, SftpFile file, string directory)
{
	Console.WriteLine("Downloading {0}", file.FullName);
	// File.Create truncates any existing file (File.OpenWrite would leave stale bytes at the end)
	using (Stream fileStream = File.Create(Path.Combine(directory, file.Name)))
	{
		client.DownloadFile(file.FullName, fileStream);
	}
}

It’s pretty easy: you create a file stream to the destination file and use the DownloadFile method of the SFTP client to download the file.

That’s it! Here is the full code for your convenience:

using System;
using System.IO;
using Renci.SshNet;
using Renci.SshNet.Common;
using Renci.SshNet.Sftp;

namespace ConsoleApplication1
{
    public static class SftpTest
    {
        private const string Host = "192.168.xxx.xxx";
        private const int Port = 22;
        private const string Username = "root";
        private const string Password = "xxxxxxxx";
        private const string Source = "/tmp";
        private const string Destination = @"c:\temp";

        public static void Main()
        {
            var connectionInfo = new KeyboardInteractiveConnectionInfo(Host, Port, Username);

            connectionInfo.AuthenticationPrompt += delegate(object sender, AuthenticationPromptEventArgs e)
            {
                foreach (var prompt in e.Prompts)
                {
                    if (prompt.Request.Equals("Password: ", StringComparison.InvariantCultureIgnoreCase))
                    {
                        prompt.Response = Password;
                    }
                }
            };

            using (var client = new SftpClient(connectionInfo))
            {
                client.Connect();
                DownloadDirectory(client, Source, Destination);
            }
        }

        private static void DownloadDirectory(SftpClient client, string source, string destination)
        {
            var files = client.ListDirectory(source);
            foreach (var file in files)
            {
                if (!file.IsDirectory && !file.IsSymbolicLink)
                {
                    DownloadFile(client, file, destination);
                }
                else if (file.IsSymbolicLink)
                {
                    Console.WriteLine("Ignoring symbolic link {0}", file.FullName);
                }
                else if (file.Name != "." && file.Name != "..")
                {
                    var dir = Directory.CreateDirectory(Path.Combine(destination, file.Name));
                    DownloadDirectory(client, file.FullName, dir.FullName);
                }
            }
        }

        private static void DownloadFile(SftpClient client, SftpFile file, string directory)
        {
            Console.WriteLine("Downloading {0}", file.FullName);
            // File.Create truncates any existing file (File.OpenWrite would leave stale bytes at the end)
            using (Stream fileStream = File.Create(Path.Combine(directory, file.Name)))
            {
                client.DownloadFile(file.FullName, fileStream);
            }
        }
    }
}

So except for the issue with symbolic links, it works pretty well and is also quite fast.

The C# project is targeting “.NET Framework,Version=v4.0”, which is not installed on this machine.

Since I needed some space on my hard disk to install some software, I went through the list of installed software and saw that Visual Studio 2010 was installed, but I actually only use Visual Studio 2012 (or Visual Studio 2005 for some very old stuff which hasn’t been ported yet). So I thought it made sense to uninstall Visual Studio 2010. And ended up wasting a lot of time…

After uninstalling VS 2010, I started getting the following message in VS 2012 whenever I opened a project targeting the .NET framework v4.0:

The C# project xxx is targeting “.NETFramework,Version=v4.0”, which is not installed on this machine. To proceed, you must select an option below.

1. Change the target to .NET Framework 4.5…

2. Download the targeting pack for “.NET Framework,Version=v4.0”…

3. Do not load the project

When you choose option 2, you are redirected to a web page where you actually cannot download the v4.0 .NET framework. Checking the registry, I saw that the v4.0 .NET framework was gone. I also found the installation package for the .NET framework v4.0. Unfortunately, it refuses to install since the v4.5 framework is already installed.

So my next great idea was to uninstall the v4.5 framework. Then I could install the v4.0 version. Of course, after that VS 2012 wouldn’t start anymore. So I installed the v4.5 framework again. During the installation, it told me that it’s an in-place update for the v4.0 framework. I immediately thought that it might cause trouble but didn’t have a choice anyway. After installing the v4.5 framework VS2012 worked again but as expected, I still got the same error message when opening projects targeting the v4.0 framework.

Running out of other options, I finally reinstalled VS2010. It took forever but in the end I was able to open all projects in VS2012.

So this is kind of a waste of space, but if you cannot have your project target the v4.5 framework and need to target the v4.0 framework, you will need to keep VS 2010 installed in addition to VS 2012 (even though you do not actually need VS 2010).

If anybody has found a solution not requiring VS2010 to stay installed, please leave a comment. My hard disk space is running low and I’d love to be able to get rid of VS2010.

Orchard CMS: NullReferenceException when adding roles

When clicking on “Add a role” in the Users administration, I got the following exception:

System.NullReferenceException: Object reference not set to an instance of an object.
at Orchard.Roles.Services.RoleService.GetInstalledPermissions()
at Orchard.Roles.Controllers.AdminController.Create()
at lambda_method(Closure , ControllerBase , Object[] )
at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters)
at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters)
at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters)
at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClass13.<InvokeActionMethodWithFilters>b__10()
at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation)
at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClass13.<>c__DisplayClass15.<InvokeActionMethodWithFilters>b__12()
at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation)
at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClass13.<>c__DisplayClass15.<InvokeActionMethodWithFilters>b__12()
at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation)

I found a reported issue which looked similar. It was reported against Orchard CMS 1.7 and was marked as resolved. I am using Orchard CMS 1.7.2. Unfortunately, the issue details neither show in which version it was solved nor what the actual root cause was. Since it was closed by Sébastian, who is an Orchard core developer and was actually born in the same city as I was, I could have contacted him, but in the meantime I found out what the problem was.

Actually, my problem was caused by a module I am working on. When I disabled the module, everything worked fine; when I reactivated it, it was broken again.

The problem was in the Permissions.cs file. Basically, what Orchard does when you click on “Add a role” is get all features and their permissions. There were two problems in my case:

  1. The GetPermissions() method did not return all the permissions I had defined and was returning in GetDefaultStereotypes().
  2. ReSharper had suggested making the setter of the Feature property private since it was not accessed anywhere.

But fixing the first one alone didn’t solve anything. I guess it was necessary, but the root cause of the problem was the private setter of the Feature property. Once I made it public again, it worked fine:

public Feature Feature { get; set; }
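
For context, here is a trimmed-down sketch of what a working Permissions.cs can look like in Orchard 1.7.x (the permission name is just an example, not from my actual module):

using System.Collections.Generic;
using Orchard.Environment.Extensions.Models;
using Orchard.Security.Permissions;

public class Permissions : IPermissionProvider
{
    public static readonly Permission ManageDataQueries =
        new Permission { Name = "ManageDataQueries", Description = "Manage data queries" };

    // Orchard assigns this property from the outside, so the setter must stay public
    public Feature Feature { get; set; }

    public IEnumerable<Permission> GetPermissions()
    {
        // Every permission referenced in GetDefaultStereotypes() must also be returned here
        return new[] { ManageDataQueries };
    }

    public IEnumerable<PermissionStereotype> GetDefaultStereotypes()
    {
        return new[]
        {
            new PermissionStereotype
            {
                Name = "Administrator",
                Permissions = new[] { ManageDataQueries }
            }
        };
    }
}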

So the lesson here is that, especially when working with a system like Orchard CMS, you should not blindly apply the changes to the visibility of properties or methods suggested by ReSharper/Visual Studio. Since such frameworks often access properties and methods through reflection rather than direct references, your tools will miss some dependencies.

Active Directory Authentication and Authorization in Orchard CMS

Since Orchard CMS doesn’t (yet) support authentication and authorization of domain users against an Active Directory, you have to install a module to achieve this. There are a handful of modules which could help. I decided to use ActiveDirectoryAuthorization by Moov2 because it was the only one with a decent number of downloads, reviews and a project site.

If you decide to use this module, you’ll first notice that there isn’t any complete documentation on how to adapt your system so that authentication and authorization work against an Active Directory. But there is a blog article which gives some instructions. Unfortunately, the instructions seem to be incomplete.

Basically when it comes to the changes to be made in your web.config, the blog post says you should “simply replace the current Forms authentication settings with the authentication settings shown below”:

    <authentication mode="Windows" />
    <roleManager enabled="true" defaultProvider="AspNetWindowsTokenRoleProvider"/>

Unfortunately, with only this change, whenever I entered my credentials I kept getting the same login dialog over and over. What’s missing here is that you also need to add an authorization tag, thus replacing:

    <authentication mode="Forms">
      <forms loginUrl="~/Users/Account/AccessDenied" timeout="2880" />
    </authentication>

by:

    <authentication mode="Windows"/> 
    <roleManager enabled="true" defaultProvider="AspNetWindowsTokenRoleProvider"/> 
    <authorization>
	    <allow roles="aw001\Domain Users"/>
	    <deny users="?"/>
    </authorization>

Of course, you have to replace aw001 by your domain name.

The question mark in the deny tag means that anonymous users will be denied access, and the allow tag means that all Domain Users of this particular domain will be granted access.

After that, Orchard just gave me a white page. So at least something was activated… In the logs, I found the following exception:

2014-09-25 11:36:01,653 [6] Orchard.Environment.DefaultBuildManager – Error when compiling assembly under ~/Modules/ActiveDirectoryAuthorization/ActiveDirectoryAuthorization.csproj.
System.Web.HttpCompileException (0x80004005): c:\inetpub\wwwroot\orchard\Modules\ActiveDirectoryAuthorization\Core\Authorizer.cs(144): error CS1061: ‘Orchard.ContentManagement.IContentManager’ does not contain a definition for ‘Flush’ and no extension method ‘Flush’ accepting a first argument of type ‘Orchard.ContentManagement.IContentManager’ could be found (are you missing a using directive or an assembly reference?)
at System.Web.Compilation.AssemblyBuilder.Compile()
at System.Web.Compilation.BuildProvidersCompiler.PerformBuild()
at System.Web.Compilation.BuildManager.CompileWebFile(VirtualPath virtualPath)
at System.Web.Compilation.BuildManager.GetVPathBuildResultInternal(VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile, Boolean throwIfNotFound, Boolean ensureIsUpToDate)
at System.Web.Compilation.BuildManager.GetVPathBuildResultWithNoAssert(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile, Boolean throwIfNotFound, Boolean ensureIsUpToDate)
at System.Web.Compilation.BuildManager.GetVPathBuildResult(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile, Boolean ensureIsUpToDate)
at System.Web.Compilation.BuildManager.GetCompiledAssembly(String virtualPath)
at Orchard.Environment.DefaultBuildManager.GetCompiledAssembly(String virtualPath) in Orchard\Environment\IBuildManager.cs:line 53

I could see the line of code where this was done but still wasn’t sure what I had to do. So I googled it and there was exactly one hit: someone apparently had the same problem with a completely unrelated module. The problem had been solved in that module, so I checked what the code change was. It turns out they simply removed the call to ContentManager.Flush(). So I gave it a try, editing ActiveDirectoryAuthorization\Core\Authorizer.cs and commenting out the following line in the CreateUser method:

_contentManager.Flush();

After that I could log in.

The other problem I had was that my domain user didn’t have the permissions I thought I had assigned. The cause was that I had created a role with the same name as a group of this user in Active Directory but didn’t add the domain name to it, i.e. I called my role myusergroup instead of aw001\myusergroup. After correcting this, it worked fine.

When logging in with a domain user, an Orchard user is created. You do not see in the Orchard administration that this user has the role you’ve created (the one named like an Active Directory group), but when evaluating the roles of the user to check permissions, Orchard will now use both the roles assigned in Orchard and the groups assigned to the user in the Active Directory. Great!

C#: Query active directory to get a user’s roles

There are a few different ways to get the roles/groups of a user from Active Directory. Here are three different ways to do it.
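
All three variants below assume the following usings (plus references to the System.DirectoryServices and System.DirectoryServices.AccountManagement assemblies):

using System;
using System.Collections.Generic;
using System.DirectoryServices;
using System.DirectoryServices.AccountManagement;
using System.Security.Principal;
using System.Text.RegularExpressions;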

The first way to do it is to use UserPrincipal.FindByIdentity:

private static IEnumerable<string> GetGroupsFindByIdentity(string username, string domainname, string container)
{
	var results = new List<string>();
	using (var context = new PrincipalContext(ContextType.Domain, domainname, container))
	{
		try
		{
			UserPrincipal p = UserPrincipal.FindByIdentity(context, IdentityType.SamAccountName, username);
			if (p != null)
			{
				var groups = p.GetGroups();
				foreach (var group in groups)
				{
					try
					{
						results.Add(@group.Name);
					}
					catch (Exception)
					{
						// Ignore groups whose information cannot be read from the directory
					}
				}
			}
		}
		catch (Exception ex)
		{
			throw new ApplicationException("Unable to query Active Directory.", ex);
		}
	}

	return results;
}

You can then print the roles using:

var groups = GetGroupsFindByIdentity("benohead", "aw001.amazingweb.de", "DC=aw001,DC=amazingweb,DC=de");
foreach (var group in groups)
{
	Console.WriteLine(group);
}

Another way to do it is to use a DirectorySearcher and fetching DirectoryEntries:

private static IEnumerable<string> GetGroupsDirectorySearcher(string username, string container)
{
	var searcher =
		new DirectorySearcher(new DirectoryEntry("LDAP://" + container))
		{
			Filter = String.Format("(&(objectClass=user)(samaccountname={0}))", username)
		};
	searcher.PropertiesToLoad.Add("MemberOf");

	var directoryEntriesFound = searcher.FindAll()
		.Cast<SearchResult>()
		.Select(result => result.GetDirectoryEntry());

	foreach (DirectoryEntry entry in directoryEntriesFound)
		foreach (object obj in ((object[]) entry.Properties["MemberOf"].Value))
		{
			string group = Regex.Replace(obj.ToString(), @"^CN=(.*?)(?<!\\),.*", "$1");
			yield return group;
		}
}

The regular expression is required in order to extract the CN part of the returned distinguished name. You can then print the roles using:

var groups = GetGroupsDirectorySearcher("benohead", "DC=aw005,DC=amazingweb,DC=de");
foreach (var group in groups)
{
	Console.WriteLine(group);
}

The third way to do it is to use a WindowsIdentity:

private static IEnumerable<string> GetGroupsWindowsIdentity(string userName)
{
	var results = new List<string>();
	var wi = new WindowsIdentity(userName);

	if (wi.Groups != null)
	{
		foreach (var group in wi.Groups)
		{
			try
			{
				results.Add(@group.Translate(typeof (NTAccount)).ToString());
			}
			catch (Exception ex)
			{
				throw new ApplicationException("Unable to query Active Directory.", ex);
			}
		}
	}
	return results;
}

You can then print the roles using:

var groups = GetGroupsWindowsIdentity("benohead");
foreach (var group in groups)
{
	Console.WriteLine(group);
}

You might notice that this last option seems to return more groups than the other two. I’m not entirely sure why, but a likely explanation is that WindowsIdentity.Groups is read from the user’s security token, which also contains nested group memberships and well-known groups (like Everyone or Authenticated Users), whereas the other two methods only return direct memberships. I’ve tested it with multiple users and saw that it does return different groups. So for now I’d rather stick to the first or second method.