XML Serialization for interfaces in .NET

In your quest to support XML serialization for types that contain interfaces, you will often arrive at the same answer: you cannot serialize interfaces. That is true, but you can work around it, and I will present two methods.

Write your own XML serialization code

Refer to: http://social.msdn.microsoft.com/Forums/en-US/bac96f79-82cd-4fef-a748-2a85370a8510/xmlserialization-with-interfaces?forum=asmxandxml

This post provides a nice answer to your woes. But you should watch out for a gotcha: 

strType = m_Child.GetType().FullName;

Instead, you may want to use this in case your dependent interface implementations could come from an external assembly:

string strType = m_Child.GetType().AssemblyQualifiedName 
	?? m_Child.GetType().FullName;
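For context on why this matters: Type.GetType can only resolve a bare FullName when the type lives in the calling assembly or the core library, so code that reconstructs types from other assemblies needs the assembly-qualified form. A minimal sketch, using System.Uri as a stand-in for a type from another assembly:

```csharp
using System;

class TypeLookupDemo
{
	static void Main()
	{
		// Round-trip a type through its assembly-qualified name, as the
		// deserialization side would do when reading the stored type string.
		string aqn = typeof(Uri).AssemblyQualifiedName;
		Type resolved = Type.GetType(aqn);
		Console.WriteLine(resolved == typeof(Uri)); // True
	}
}
```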

Use generics with constraints

You can avoid writing your own serialization code by using generics. This isn’t a silver bullet, as it will affect your class design and may not always make sense. Take this example:

public class MyClass {
	public IMyInterface MyProperty { get; set; }
}

You could rewrite this to become:

public class MyClass<T> where T : IMyInterface {
	public T MyProperty { get; set; }
}

Of course this has several implications. For example, instantiating these types becomes more complicated and could lead to trickier wiring/construction problems. Another point to make is that the class itself doesn’t guarantee it is XML serializable: if the class is declared with a type (T) that is an interface (or a non-serializable class), then it won’t be XML serializable. But this should be nothing to be surprised about; the .NET Framework’s XML serializable generic types also behave like this (e.g. List<T>).
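To make the trade-off concrete, here is a hedged sketch of serializing the generic version (the Value member and the SomeImplementation class are hypothetical names of mine, not from the example above):

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public interface IMyInterface { int Value { get; set; } }

// Hypothetical concrete implementation, for illustration only.
public class SomeImplementation : IMyInterface
{
	public int Value { get; set; }
}

public class MyClass<T> where T : IMyInterface
{
	public T MyProperty { get; set; }
}

public static class Demo
{
	public static string SerializeSample()
	{
		var obj = new MyClass<SomeImplementation> {
			MyProperty = new SomeImplementation { Value = 42 }
		};
		// Works because the closed type MyClass<SomeImplementation> is concrete;
		// constructing an XmlSerializer for MyClass<IMyInterface> would fail instead.
		var serializer = new XmlSerializer(typeof(MyClass<SomeImplementation>));
		using (var writer = new StringWriter()) {
			serializer.Serialize(writer, obj);
			return writer.ToString();
		}
	}

	public static void Main()
	{
		Console.WriteLine(SerializeSample());
	}
}
```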

Here is another example:

public class MyCollection {
	public IList<IMyInterface> MyProperty { get; set; }
}

Generic XML serializable version (provided T is XML serializable):

public class MyCollection<T> where T : IMyInterface {
	public IList<T> MyProperty { get; set; }
}

Non-blocking Writer Collection (C# Example)

A question was posed to me recently:

If you had a thread that produced messages, and pushed those messages to one or more consumer threads, how would you write the code to ensure that the producer thread executes as fast as possible (i.e. no blocking on the producer thread)?

An interesting problem to solve! Before looking at the suggested solution below, have a go at coming up with your own.

Let me present one way to go about this problem. The general idea is to use two message queues: a write queue and a read queue. The writer (producer) thread always writes to the write queue. The reader thread always reads from the read queue. When the reader thread exhausts the read queue, the reader switches the queues: the write queue becomes the read queue, and vice versa. If the reader finds that the new read queue (after the switch) is also empty, it waits until the writer thread writes to the write queue. The reader thread then switches the queues again and reads from the new read queue to extract the most recent item.

Note: I’ve deliberately chosen the name “Collection” for this ADT, since the implementation does not guarantee ordering. It is sort of FIFO, but not strictly (otherwise I would have called it a NonBlockingWriterQueue!). Here is a C# example:


public class NonBlockingWriterCollection<T> : IDisposable {

	private Queue<T> leftQueue = new Queue<T>();
	private Queue<T> rightQueue = new Queue<T>();

	private volatile bool isWriting = false;
	private volatile bool writeToLeft = true;
	private object readerLocker = new object ();
	private EventWaitHandle readerWaitHandle = new AutoResetEvent (false);

	private volatile bool disposed = false;


	#region IDisposable
	public void Dispose() {
		Dispose (true);
		GC.SuppressFinalize (this);
	}

	protected virtual void Dispose(bool disposing) {
		if(!disposed) {
			disposed = true;
			// Signal the reader to wake up, since closing/disposing the event handle doesn't raise
			// object disposed exceptions for threads in waiting states.
			readerWaitHandle.Set ();
			if(disposing) {
				readerWaitHandle.Close (); // Can cause in-progress writes to throw a disposed exception.
			}
		}
	}
	#endregion


	/// <summary>
	/// Enqueues the specified item.
	/// </summary>
	/// <param name="item">Item to enqueue</param>
	/// <exception cref="ObjectDisposedException">If the queue has been disposed prior to executing this method</exception>
	/// <remarks>
	/// Only a single writer thread can enqueue.
	/// This operation is non-blocking.
	/// </remarks>
	public void Write(T item) {
		if (disposed)
			throw new ObjectDisposedException ("NonBlockingWriterCollection");

		try {
			isWriting = true;

			// Queue an item on the write queue
			var writeQueue = writeToLeft ? leftQueue : rightQueue;
			writeQueue.Enqueue (item);

			// Signal the reader thread that an item has been added
			readerWaitHandle.Set ();
		} finally {
			isWriting = false;
		}
	}


	/// <summary>
	/// Dequeues an item.
	/// </summary>
	/// <exception cref="ObjectDisposedException">If the queue has been disposed prior to or during execution of this method</exception>
	/// <remarks>
	/// Multiple reader threads can attempt to dequeue an item.
	/// This operation is blocking (until an item has been enqueued, or the collection has been disposed).
	/// </remarks>
	public T Read() {
		lock (readerLocker) {
			if (disposed)
				throw new ObjectDisposedException ("NonBlockingWriterCollection");

			// Reset the wait handle; at this point we are searching for an item on either queue
			readerWaitHandle.Reset ();

			// Dequeue an item from the queue that is not being written to
			var readQueue = writeToLeft ? rightQueue : leftQueue;
			if (readQueue.Count > 0)
				return readQueue.Dequeue ();

			while (!disposed) {
				// The read queue has been exhausted. Swap the read/write queues
				writeToLeft = !writeToLeft;

				// At this point, the writer thread could be writing to either queue,
				// so wait for the write to finish using a spin wait
				while (isWriting) {
				} // busy waiting

				// Try reading again from the new read queue
				readQueue = writeToLeft ? rightQueue : leftQueue;
				if (readQueue.Count > 0)
					return readQueue.Dequeue ();

				// Both queues have been exhausted; we need to wait for the writer to
				// do something. Block the reader until the writer has signalled.
				// Note: it may have added an item during the read... so this may
				// not block, and the read continues right away
				readerWaitHandle.WaitOne ();
			}

			throw new ObjectDisposedException ("NonBlockingWriterCollection");
		}
	}
}

If you know you are only going to have a single reader, you can simply remove the reader lock to improve performance.

Note the use of volatile members. Volatile reads/writes are required for these primitive members; otherwise all sorts of chaos could ensue should the compiler choose to re-arrange or cache the read/write instructions.

And here is a little test harness:

private const int ReaderCount = 5;
private NonBlockingWriterCollection<int> nbQueue;

public void Run () {
	Console.WriteLine ("Starting sim...");
	Thread[] readerThreads;
	Thread writerThread;
	using(nbQueue = new NonBlockingWriterCollection<int>()) {

		readerThreads = new Thread[ReaderCount];
		for(int i = 0; i < ReaderCount; i++) {
			readerThreads [i] = new Thread (RunReader);
			readerThreads [i].Start (i); // box i
		}

		writerThread = new Thread (RunWriter);
		writerThread.Start ();

		Thread.Sleep (1000 * 5);
	}

	Console.WriteLine ("Waiting for sim threads to finish...");
	writerThread.Join ();
	foreach (var rt in readerThreads)
		rt.Join ();

	Console.WriteLine ("Finished sim.");
}

private void RunReader(object threadNum) {
	var rand = new Random (((int)threadNum) * 6109425);
	string threadId = "ReaderThread " + (char)('A' + (int)threadNum); // unbox threadNum
	try {
		while (true) {
			var item = nbQueue.Read (); // blocking
			Console.WriteLine (threadId + " read " + item);
			Thread.Sleep(rand.Next() % 10); // Do some "work" to process the data
		}
	} catch(ObjectDisposedException) {}
}

private void RunWriter() {
	string threadId = "WriterThread";
	var rand = new Random ();
	try {
		while (true) {
			for(int i = 1; i < rand.Next () % 10; i++) {
				var item = rand.Next ();
				Console.WriteLine (threadId + " writing " + item);
				nbQueue.Write (item); // non-blocking
			}
			Thread.Sleep(10 + rand.Next() % 100); // Do some "work" to produce more data
		}
	} catch(ObjectDisposedException) {}
}

Programming Praxis – Coin Change, Part 1, C# Solutions

This post presents C# solutions to a coin change problem as described in http://programmingpraxis.com/2013/05/17/coin-change-part-1.

It is a classic problem from computer science to determine the number of ways in which coins can be combined to form a particular target; as an example, there are 31 ways to form 40 cents from an unlimited supply of pennies (1 cent), nickels (5 cents), dimes (10 cents) and quarters (25 cents), ranging from the 3-coin set (5 10 25) to the 40-coin set consisting only of pennies.

Your task is to write two functions, one to determine the count and one to determine the list of coins. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.

As the original post for this problem states, it appears to be most easily solvable using a recursive approach. Let’s walk through the thought process of solving such a problem. I would naturally begin by figuring out a systematic process for coming up with all the possible combinations. Once the process is devised, you can then deduce an algorithm.

Start out with a simple concrete example. Naturally I thought: “at each step in the process, keep subtracting the largest allowable coin from the remaining total, until you reach a solution”. A coin is allowable if it is less than or equal to the remaining total. So if n = 40:

n = 40, set = []

Step 1: The largest allowable coin is 25 (25 < 40)

n = 15 (40 - 25), set = [25]

Step 2: The largest allowable coin is 10 (10 < 15)

n = 5 (15 - 10), set = [25, 10]

Step 3: The largest allowable coin is 5 (5 ≤ 5)

n = 0 (5 - 5), set = [25, 10, 5]


Where to from here? Well, let’s enumerate all solutions that include the 25 coin:

25, 10, 5

25, 10, 1, 1, 1, 1, 1

25, 5, 5, 5

25, 5, 5, 1, 1, 1, 1, 1

25, 5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1

25, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1

For each successive solution, we break down a coin from the previous solution in as many ways as possible. However, you wouldn’t want to break down the same coin of a previous solution more than once, e.g. for set [25, 5, 5, 5] you wouldn’t want to break down the coin 5 more than once: [25, 5, 5, 1, 1, 1, 1, 1], [25, 5, 1, 1, 1, 1, 1, 5], [25, 1, 1, 1, 1, 1, 5, 1, 1, 1, 1, 1], etc. How do you avoid this? Look at the concrete example of coin sets above. Notice any patterns? There are a few. For the cases we want to avoid, we can prevent duplicate coin sets by ensuring that at each breakdown the new coin selection is less than or equal to the previous step’s selected coin (see the concrete example above: all the coins are in descending order). Let’s code it using recursion:

public class CoinChange1 {
	private int[] cs = new [] {25, 10, 5, 1};

	private List<ICollection<int>> solutions = new List<ICollection<int>> ();

	public static IList<ICollection<int>> GetCoinSets(int total) {
		// Handle corner case outside of recursive solution
		if (total == 0)
			return new List<ICollection<int>> ();

		// Get all possible sets
		var cc = new CoinChange1 ();
		cc.GetCoinSets (total, 0, new Stack<int>());
		return cc.solutions;
	}

	private void GetCoinSets(int n, int csi, Stack<int> combo) {
		// Find the largest allowable coin (exploiting that cs is in descending order)
		while (cs[csi] > n)
			csi++;
		int c = cs [csi];

		combo.Push (c); // Track coin selection
		if (n == c)
			solutions.Add(combo.ToArray()); // Base case
		else
			GetCoinSets (n - c, csi, combo); // Recursion 1: with a selected coin
		combo.Pop ();

		// Recurse for all other possibilities beyond this point with a combo of smaller coin units
		if(csi < (cs.Length - 1))
			GetCoinSets (n, csi + 1, combo);
	}
}

And there we have it. This problem does have overlapping sub-solutions, so it could possibly be optimized using memoization. However, this would add a lot of overhead: it would mean caching collections of all possible coin sub-sets, and many memory copy operations to build new sub-sets from other sub-solutions. That sounds more costly than worthwhile, so we will stick with this solution.

But we are not finished: the problem statement said to write another function to count the total possible coin sets for a given total (n). A naive approach would be to re-use our solution above and return the length of the collection. But we can do better:

	private int GetCoinSetCount(int n, int csi) {
		while (cs[csi] > n)
			csi++;
		int c = cs [csi];

		int count;
		if (n == c)
			count = 1;
		else
			count = GetCoinSetCount (n - c, csi);

		if(csi < (cs.Length - 1))
			count += GetCoinSetCount (n, csi + 1);

		return count;
	}

I’ve omitted a wrapper function to neatly package the API (passing 0 for the csi argument, plus a sanity check for total = 0). Performance has improved since we don’t need to maintain a list of selected coins throughout the recursive calls. But we can still do better!

Now that we don’t have to pass around complex data (lists) per sub-solution, memoization becomes worthwhile:

public class CoinChange1 {
	private int[] cs = new [] {25, 10, 5, 1};

	private Dictionary<ulong, uint> cache = new Dictionary<ulong, uint>();

	public static uint GetCoinSetCount(uint total) {
		if (total == 0)
			return 0;
		return new CoinChange1().GetCoinSetCount (total, 0);
	}

	private uint GetCoinSetCount(uint n, byte csi) {
		while (cs[csi] > n)
			csi++;
		uint c = (uint)cs [csi];

		uint count;
		if (n == c) {
			count = 1;
		} else {
			uint nextn = n - c;
			ulong key = ((ulong)nextn << 32) | (ulong)csi;
			if(!cache.TryGetValue(key, out count)) {
				count = GetCoinSetCount (nextn, csi);
				cache[key] = count;
			}
		}

		if(csi < (cs.Length - 1)) {
			ulong key = ((ulong)n << 32) | ((ulong)csi + 1);
			uint subCount;
			if(!cache.TryGetValue(key, out subCount)) {
				subCount = GetCoinSetCount (n, (byte)(csi + 1));
				cache[key] = subCount;
			}
			count += subCount;
		}
		return count;
	}
}

I’ve changed the arguments and return types to unsigned, since I’m making a performance enhancement for the cache key. Since the key is composite (the arguments of the recursive function), I pack both into a single unsigned long rather than using a Tuple or some custom class/struct. A petty improvement, but an improvement nonetheless.
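To see why the packing is collision-free, here is a quick round-trip of the composite key (the values are arbitrary):

```csharp
using System;

class KeyPackingDemo
{
	static void Main()
	{
		// Pack the two recursion arguments (n, csi) into one 64-bit key:
		// the high 32 bits hold n, the low 32 bits hold csi.
		uint n = 40;
		byte csi = 2;
		ulong key = ((ulong)n << 32) | (ulong)csi;

		// Unpacking recovers both fields, so distinct (n, csi) pairs
		// always map to distinct keys.
		uint unpackedN = (uint)(key >> 32);
		byte unpackedCsi = (byte)(key & 0xFFFFFFFF);
		Console.WriteLine(unpackedN + "," + unpackedCsi); // 40,2
	}
}
```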

Inferring the locale text direction with JavaScript

Here’s a hack to detect the locale text direction (e.g. left-to-right for English or right-to-left for Hebrew) using JavaScript:

/**
 * Detects locale text direction...
 * @return {String} "ltr" or "rtl"
 */
function detectTextDir() {
    var container = document.createElement("p");
    container.style.margin = "0 0 0 0";
    container.style.padding = "0 0 0 0";
    // container.style.textAlign = "start"; // If CSS 3+
    container.style.textAlign = ""; // Explicitly override text align that might be assigned via style sheets
    var span = document.createElement("span");
    span.innerHTML = "X";
    container.appendChild(span);
    document.body.appendChild(container); // Must be in the document to measure offsets
    // LTR if text position is nearer left of container, RTL if text position is nearer right of container
    var localeDirection = span.offsetLeft < (container.offsetWidth - (span.offsetLeft + span.offsetWidth)) ?
        "ltr" :
        "rtl";
    document.body.removeChild(container); // Clean up the probe element
    return localeDirection;
}

A limitation is that it doesn’t work if a style sheet is present that explicitly assigns a text-align value other than “start” to paragraphs directly in the document body. You could probably tweak the idea to use something more obscure, like an address block, to reduce the chances of this happening.

Accessing the System Clipboard with JavaScript – A Holy Grail?

I am developing an API written in JavaScript for a project which requires the ability to copy data to, and retrieve data from, a clipboard within a web browser. A simple/common problem definition – but due to tight browser security, finding a solution is a bit of a nightmare. This article outlines and discusses a number of approaches for implementing a clipboard feature into your JavaScript applications.
