GitVersion and TeamCity – Configuring GitVersion For GitHubFlow

In the previous post I outlined how to get GitVersion up and running in TeamCity, with a basic version number that looks like “0.1.0+27”, where the final digit is an incrementing counter showing code changes (commits to master). It’s something but it’s not great. It technically works with a “GitHubFlow” workflow, but doesn’t give me full control over versioning.

Back to my wishlist of requirements – I want version info that:

  1. Allows for a semantic version, with “major.minor.patch” components
  2. Lets me control when the major/minor build numbers change
  3. Has a “patch” component that is incremented as the code changes
  4. Is “baked in” to the release code so I can display it on the app (UI footer, “info” API method, etc)
  5. Is displayed on the build server dashboard, so I can track a release through the build pipeline from CI to Test to Live
  6. As a bonus, will point to the specific code commit that represents the actual live code

I’m using TeamCity as my build server, but the majority of this post relates to configuring GitVersion to control version number format.

Default GitVersioning

The default behaviour of GitVersion is to create a version number like “0.1.0+27”, where 27 is the number of commits. If all you did was add a GitVersion build step, that’s what you get. Try it. Set up a basic build and observe the generated version number. If you look at the AssemblyInfo.cs files, you’ll see the assembly and file versions are “0.1.0.0”, and the informational version is a big long string of “0.1.0+27.Branch.master.Sha.485…”. Not great, but at least you have some version and source control reference info you could use. Dig into the build logs for your GitVersion step and you’ll see that GitVersion generated a bunch of properties. We want to use these to get better quality version numbers.

A Better Version Number Format

OK, so “0.1.0+1234” isn’t a great version number format. You probably want something nicer, maybe some semantic versioning. Thankfully, there’s a simple trick – add a GitVersion.yml config file to your code.

I’m going to use an approach where GitVersion and configuration within the code itself controls the versioning – no need for build server parameters (eg. in TeamCity) or branching/tagging to control the major/minor versions.

GitVersion Configuration Experiments

You can run GitVersion locally – just get the GitVersion.exe installed and added to your path. Because I have a test TeamCity instance on my development PC, I have a gitversion.exe in my “chocolatey” directory and on my path as a result of the TeamCity meta-runner setup.

Open a command prompt in your local working copy of the repository (the directory containing the “.git” folder).

Start by running a simple “GitVersion” to see all the values that you can get.

We want to create a GitVersion.yml configuration file for our project that specifies how GitVersion should work. As a starting point, GitVersion has a nice little wizard that will create one for us.

Do a “gitversion init”, and up pops the wizard. So many choices! Let’s try the “getting started” wizard (option 2) and see what happens…

This is where I had to go with some trial-and-error. I’m using a simple “GitHubFlow” approach where releases are regularly cut from master, but all work is done on short-lived feature branches.

I also tried experimenting with “GitFlow”, only to be presented with a message about the “develop” branch. You can use CTRL-C and start again, or use the “Unsure tell me more” option. Note that their definitions of continuous delivery (tagged branches) versus continuous deployment might look a bit odd, and seem to be built for teams using git branching to control releases flowing through the CD pipeline.

Anyway, I went for “GitHubFlow”, “increment every commit”, saved my changes (option “0”) and ended up with a simple GitVersion.yml file:

mode: ContinuousDeployment
branches: {}
ignore:
  sha: []

When I just run gitversion again, or commit my changes and re-run the build, I get a version number that looks like “0.1.0-ci.12” (it was “0.1.0+11”).

What I really want is something more like this:

assembly-versioning-scheme: MajorMinor
mode: Mainline
branches: {}
ignore:
  sha: []

This gives me a “0.1.9” version number. Instant observation – the generated “commit count” number has actually decreased (but we did select “Mainline” mode, so we’re probably only counting commits to master, not other branching/merging activities). A quick check of the generated “informational” version shows the full “0.1.9+Branch.master.Sha.722…” info.

Experimenting with build specifics by re-running builds on the build server is inefficient, so it’s back to the command prompt (in your local working directory).

Type “gitversion” and you get a list of all the various values GitVersion calculates for your code.
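
The output is a JSON blob of name/value pairs. Abbreviated, with values matching this example repository, it looks something like this (the exact property list varies between GitVersion versions):

{
  "Major": 0,
  "Minor": 1,
  "Patch": 9,
  "MajorMinorPatch": "0.1.9",
  "SemVer": "0.1.9",
  "AssemblySemVer": "0.1.0.0",
  "FullSemVer": "0.1.9",
  "InformationalVersion": "0.1.9+Branch.master.Sha.722...",
  "BranchName": "master",
  "Sha": "722...",
  ...
}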

The “AssemblySemVer” of “0.1.0.0” might be an issue. Changing the assembly-versioning-scheme to “MajorMinorPatch” will update “AssemblySemVer” to “0.1.9.0”.

So the last thing we need to do is control the major/minor version. For that, I can use the “next-version” property, and add the following entry:

next-version: 1.2

You can edit your GitVersion.yml to customise the format used in AssemblyInfo values by setting “assembly-versioning-format”, “assembly-file-versioning-format” and “assembly-informational-format”.

For example, I set a more concise informational format using

assembly-informational-format: '{Major}.{Minor}.{Patch}.{Sha}'

Now I have an extended version number with a reference to the code, so if I need to retrieve the exact code that’s running in production, I can.
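
As an aside, here’s one way to surface that baked-in version at runtime (eg. for a UI footer or “info” API method) – a minimal sketch using standard .Net reflection, not part of the GitVersion setup itself:

    //ProductVersion picks up the AssemblyInformationalVersion value that
    //GitVersion patched into AssemblyInfo.cs
    var assembly = System.Reflection.Assembly.GetExecutingAssembly();
    var versionInfo = System.Diagnostics.FileVersionInfo.GetVersionInfo(assembly.Location);
    string displayVersion = versionInfo.ProductVersion;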

So I added a GitVersion.yml file that looks like:

assembly-versioning-scheme: MajorMinorPatch
mode: Mainline
next-version: 1.2
assembly-informational-format: '{Major}.{Minor}.{Patch}.{Sha}'
branches: {}
ignore:
  sha: []

I have a simple “major.minor.patch” version format, where I can control the major/minor versions, and have the patch counter auto-incremented on every commit or merge to master. The version format I’m using works for my build pipeline. But with a little experimentation with the GitVersion.yml, you can tweak the version number generation to work for a different workflow.

Summary

So to get all this to work with a standard TeamCity build, I needed to do the following:

  1. Configure GitVersion in TeamCity (Meta Runner for the .exe and a couple of project parameters)
  2. Configure GitVersion for my project by adding a GitVersion.yml file into the code
  3. Add GitVersion build step to generate a version and bake it into AssemblyInfo.cs files

My requirements were to set up GitVersion for a “GitHubFlow” workflow. With some reference documentation and experimentation, you can use GitVersion for your chosen workflow.

Branching Gotchas

So far we’ve only covered building on the master branch. If you have builds on other branches (I like to set up CI on all the PR branches so I can have code checked before it gets merged into master) then you may run into the following:

  • You might need to add an additional environment variable (parameter in your TeamCity project or build configuration) and set “env.IGNORE_NORMALISATION_GIT_HEAD_MOVE” to “1”
  • The version patch count on the “other” branch might appear a little out (counting commits on master is the important task)

GitVersion 3 vs GitVersion 4

While I was experimenting with better GitVersion settings, GitVersion upgraded to version 4. I noticed a few differences:

  • Mainline mode seems to be new in version 4
  • I had a bunch of builds suddenly fail with “GitTools.Core has a bug, your HEAD has moved after repo normalisation” error, fixed by adding an environment variable parameter to the build config with name “IGNORE_NORMALISATION_GIT_HEAD_MOVE”, value “1”

GitVersion and TeamCity

One advantage of old “centralised” version control systems over Git is that commits have useful sequential numbers. Assigning version numbers automatically as the code changes (as opposed to just numbering builds) becomes tricky with Git.

My aim is to have a build process generate meaningful version number/info that:

  1. Allows for a semantic version, with “major.minor.patch” components
  2. Lets me control when the major/minor build numbers change
  3. Has a “patch” component that is incremented as the code changes
  4. Is “baked in” to the release code so I can display it on the app (UI footer, “info” API method, etc)
  5. Is displayed on the build server dashboard, so I can track a release through the build pipeline from CI to Test to Live
  6. As a bonus, will point to the specific code commit that represents the actual live code

Thankfully, there’s a tool – GitVersion – that will generate more meaningful version numbers, including a sequential count of code commits.

This post covers TeamCity building .Net, but you should be able to get GitVersion to work with other build tools and spit out version numbers you can use. For .Net, we need to apply the generated version number to the AssemblyInfo.cs files before compiling, but this “generate version and write to file” approach can be adapted for other languages.

Initially, I was using TeamCity’s built-in “Assembly Info Patcher” build feature, and just feeding it values of “%Major.version%.%Minor.version%.%build.counter%.0” and “%Major.version%.%Minor.version%.%build.counter%.%build.vcs.number%” for the version/file and informational numbers (I used parameters for the major and minor versions to control the semantic versioning). By default, TeamCity’s “build.vcs.number” property only gives you the commit SHA.

For “something better”, I need to configure TeamCity and GitVersion to give me “something meaningful”, then I need to tweak the versioning to match my workflow.

Setting up TeamCity and GitVersion

First, download the meta-runner – for TeamCity, use https://github.com/JetBrains/meta-runner-power-pack/tree/master/gitversion. You can just download the repository as a zip, then unzip the archive somewhere.

GitVersion runner needs to be added to a project as a Meta Runner – you can add this to a single project, but it’s best to add to the root project.

So go to the “<Root project>”->Meta Runners, and upload

Filename: “MR_GitVersion3.xml”
File: browse to “MR_GitVersion3.xml”

(Look in the meta-runner-power-pack-master/gitversion folder).

Check by editing the meta-runner – you will see the following:

ID: MR_GitVersion3
Name: GitVersion3
Description: Execute GitVersion 3

And the source will be the content of the XML file you uploaded. You can now delete the local copy of the meta runner you downloaded. You can also set GitVersion to check for updates.

If you want to use a different build server, start with the GitVersion project code at https://github.com/GitTools/GitVersion. As you’ll see later, all you need is a GitVersion executable.

Versioning a Build

I’m assuming you have a basic build configuration setup that grabs code from Git (VCS root settings) and compiles the code (build step), and that your build doesn’t yet apply a version number. Hopefully your build will end up doing other things (eg. running unit tests, creating a release package).

Run the build, observe the output (see build logs for agent and working directory location) and you’ll see that all the newly-created DLLs have whatever default version number you put in the AssemblyInfo.cs files (check DLL properties->details).

Now we have an “unversioned” build, let’s try stamping a version number. You’ll need to add a build step to your build configuration – this should be the first step, before you do anything that requires a version number (compilation, packaging, etc). Important note – don’t use a TeamCity AssemblyInfoPatcher build feature if you’re using GitVersion.

Edit your build step, and set the runner type to “GitVersion3”. A remote Git password is added when you first add the build step (checking the config in “ProgramData\JetBrains\TeamCity\config\projects\ProjectName\buildTypes\BuildConfig.xml” shows it to be an auto-generated password).

For .Net, you’re going to want to check “Update AssemblyInfo files”.

Run this and TeamCity will complain about a “parameter called ‘env.Git_Branch’ with value %teamcity.build.vcs.branch.<vcsid>%”.

GitVersion needs some parameters set up for your build, so go to the “Parameters” tab and set up two parameters:

Environment variable: “env.Git_Branch”, value “%teamcity.build.vcs.branch.<vcsid>%”

Configuration parameter: “teamcity.build.vcs.branch.<vcsid>”, value “master”

So if you run this, then you’ll see that both the AssemblyInfo.cs and the DLL now have the actual generated version numbers. Unfortunately, they’re probably not the version numbers you’re looking for…

In a follow-up post I’ll cover how to configure GitVersion to spit out the version info and format you actually want.

Summary – so far we have:

  1. Set up CI build to grab code from Git
  2. Added build step to compile code
  3. Added GitVersion build step to generate a version and bake it into AssemblyInfo.cs files

Showing the GitVersion Build Number on the TeamCity Build Dashboard

You probably want to have TeamCity use this generated version number as the displayed build number, rather than an arbitrary build counter. This is especially useful if you go for a “build artifact once, deploy repeatedly” pipeline approach, rather than a branch-based / re-build on deployment approach. Go into the build configuration general settings and check the build number format. I’ve actually found that both “%build.counter%” and “%build.vcs.number%” seem to work with GitVersion.

Handling Cache Lookups – A “Command Pattern” Recipe

This is a little recipe I’ve used in a number of projects where we needed some basic re-usable code for reliable, easy-to-use caching operations. The code is in C# and designed to handle caching of repeat calls to “slow” operations, but you could use this pattern for other purposes.

Typically, your cache implementation (eg. Redis) is nothing more than a very fast dictionary lookup, providing basic “get”, “set” and “delete” operations. Stale data is usually handled by purging items from the cache after a set time, and items may also disappear early if the cache is reset or heavy load forces older items out. So whenever you go looking for an item in the cache, you have to anticipate it not being there.

Every cache operation has some standard code wrapped around a specific operation. The typical cache lookup workflow for fetching an item from the cache would be:

  1. Generate a cache key representing the item
  2. Perform cache lookup for an item matching the key, if found return it
  3. If the item is not found
    1. Go fetch it (database query or other slow operation)
    2. Insert it into the cache for next time

If you want to cache operations for speed, you could simply implement this logic for every cache lookup and have a load of “copy and paste” code. But there are better approaches.

I have used this caching pattern when dealing with Redis and my own basic “in memory” mock cache implementations. The point of this post isn’t the detail of the interaction with a specific cache implementation or the individual low-level cache operations, but how you can approach writing code for “get the information from the cache if you can, else fallback to a slow database operation” scenarios.

An Example Application (Without Caching)

To start with, I’m going to pretend I have a really simple MVC app that stores and fetches “products”. There’s a UI layer (MVC), a business layer (ProductService) and a data layer (repository). Mostly, I’m going to ignore the UI and data layer implementation, and focus on the business layer. I don’t care whether the “data” layer is a database, a file, an in-memory lookup or API call to some other service. My assumption is that the data layer is a good candidate for caching – slow fetch operations for data that is repeatedly requested.

I’m going to assume that we have decided to try and cache the “get one product” and “get all products” operations, and that we don’t need to worry about exceptions because nothing will ever go wrong.

So we start with a simple Product model, a controller, and some interfaces specifying how the business and repository services will work.


    public class Product
    {
        public int? Id { get; set; }
        public string Name { get; set; }
        public string Supplier { get; set; }
        public DateTime Date { get; set; }
        public string Details { get; set; }
    }

    public class ProductController : Controller
    {
        private readonly IProductService _productService;

        public ProductController(IProductService productService)
        {
            _productService = productService;
        }

        public ActionResult Index()
        {
            var model = _productService.All();
            return View(model);
        }

        public ActionResult Product(int id)
        {
            var model = _productService.Get(id);
            return View(model);
        }
    }

    public interface IProductService
    {
        Product Get(int id);

        void Add(Product product);

        Product[] All();
    }

    public interface IProductRepository
    {
        Product Get(int id);

        void Add(Product product);

        Product[] All();
    }

The simplest thing that could possibly work is “no caching”. So let’s start with that and just forward all requests to the data layer.


    public class SimpleProductService : IProductService
    {
        private readonly IProductRepository _repository;

        public SimpleProductService(IProductRepository repository)
        {
            _repository = repository;
        }

        public Product Get(int id)
        {
            return _repository.Get(id);
        }

        public void Add(Product product)
        {
            _repository.Add(product);
        }

        public Product[] All()
        {
            return _repository.All();
        }
    }

Not very exciting. So let’s add a cache implementation that lets me fetch, store or remove an item. I’m assuming that we might one day want to control how long an item gets stored in the cache – my CacheLifetimeType enum will be used to determine whether to store an item for a configurable number of minutes, hours or days.


    public interface ICache
    {
        void Set<T>(string key, T value);

        void Set<T>(string key, T value, CacheLifetimeType lifetimeType);

        T Get<T>(string key);

        bool Exists(string key);

        void Clear(string key);
    }
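
The CacheLifetimeType enum itself isn’t shown here; a minimal sketch might look like this (the names are placeholders – each value would map to a configurable timespan in the cache implementation):

    public enum CacheLifetimeType
    {
        Default,
        Minutes,
        Hours,
        Days
    }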

The cache is just a simple lookup, and the simplest implementation is just an in-memory dictionary (great for unit testing, not much value in production). Be careful with primitive types – the types stored have to be nullable, so you can distinguish between “not found in the cache” and “default value cached” (I learned this the hard way with some boolean configuration values).

My basic in-memory cache implementation serializes the data into binary form (you could also use Json), and requires that any type stored in the cache is marked as “Serializable”. For this example, that would mean decorating the Product class with the “Serializable” attribute. Behind the same ICache interface, I have also used the StackExchange.Redis client.

Note: I typically extend any cache implementation with useful admin and diagnostic methods. Examples include showing the general “status”, a count or list of all entries, and the ability to wipe the cache while testing/debugging (for entries matching a key pattern, or “everything”).
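
As an illustration, those admin extensions might be expressed as a separate interface (the member names here are my own, not from a specific implementation):

    public interface ICacheAdmin
    {
        string GetStatus();
        int Count();
        IList<string> GetAllKeys();
        void ClearMatching(string keyPattern);
        void ClearAll();
    }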

Caching Fetched Data for Next Time

I’m going to replace my SimpleProductService with something that checks if an item is in the cache before going off to the database.


    public class CachedProductService : IProductService
    {
        private readonly IProductRepository _repository;
        private readonly ICache _cache;

        public CachedProductService(IProductRepository repository, ICache cache)
        {
            _repository = repository;
            _cache = cache;
        }

        public Product Get(int id)
        {
            string cacheKey = id.ToString();

            //Try fetch from cache
            var cachedValue = _cache.Get<Product>(cacheKey);
            if (cachedValue != null)
            {
                return cachedValue;
            }

            //Fallback to a slow data fetch, cache for next time
            Product result = _repository.Get(id);

            if (result != null)
            {
                _cache.Set(cacheKey, result, CacheLifetimeType.Default);
            }
            return result;
        }

        public void Add(Product product)
        {
            _repository.Add(product);
        }

        public Product[] All()
        {
            return _repository.All();
        }
    }

OK, that’s a lot of code we had to add to the Get() method. And that’s for one operation. What if we want to add caching to another operation? Let’s copy and paste!


        public Product[] All()
        {
            string cacheKey = "products";

            var cachedValue = _cache.Get<Product[]>(cacheKey);
            if (cachedValue != null)
            {
                return cachedValue;
            }

            Product[] result = _repository.All();

            if (result != null)
            {
                _cache.Set(cacheKey, result, CacheLifetimeType.Default);
            }
            return result;
        }

Obviously, we have a load of repeat copy-and-paste code for checking the cache (and feeding it for next time) that we’d like to have in a single place for all cached operations to use.

My initial thought was that you could encapsulate this into some kind of helper class, maybe use some generics. So I start writing some not-very-good code.


    //This is an example of how not to do things!
    public class CacheHelper<TReturn, TInput>
    {
        private readonly ICache _cache;

        public CacheHelper(ICache cache)
        {
            _cache = cache;
        }

        public TReturn Fetch(TInput request, string cacheKey)
        {
            if ((_cache != null) && !string.IsNullOrEmpty(cacheKey))
            {
                var cachedValue = _cache.Get<TReturn>(cacheKey);
                if (cachedValue != null)
                {
                    return cachedValue;
                }

                //Specific fetch operation in a generic method! Oops...
                TReturn result = ReallyFetchIt(request);
                _cache.Set(cacheKey, result, CacheLifetimeType.Default);

                return result;
            }
            //No caching
            return ReallyFetchIt(request);
        }
    }

…and that’s not going to work because my generic helper class needs access to a specific operation (the “ReallyFetchIt” method). Thankfully, there’s another way to solve the problem.

Wrapping Operations in Commands

So to make things easy, I’m going to take inspiration from the “Command” design pattern, and wrap all the code to actually fetch a value from a “cache or database” operation in a simple executable command. The plan here is to have a base class that can handle most of the repeat “boilerplate” code, and then have a specific implementation for each different operation.

I’m not even worried about macros and undos and all the other advantages of using the command pattern. Really, I just find that the “encapsulate request as object” in a “runnable command” code structure fits my needs.

I create an abstract FetchCommand class that can handle generic caching and fall through to specific database code if necessary, and then go with a really simple implementation for my FetchProductCommand. This first draft of a “fetch product” command doesn’t actually have everything it needs for caching.


    /// <summary>
    /// Generic base class of a command that calls an external operation with optional caching for performance
    /// </summary>
    /// <remarks>
    /// TReturn type MUST be nullable - for primitive/non-nullable types, cache can confuse default value with "no value found so return default".
    /// </remarks>
    public abstract class FetchCommand<TReturn, TInput>
    {
        private readonly ICache _cache;

        /// <summary>
        /// Recommended cache lifetime - override for short/long cache life.
        /// </summary>
        protected virtual CacheLifetimeType LifetimeType { get { return CacheLifetimeType.Default; } }

        protected FetchCommand()
        {
            _cache = null;
        }

        protected FetchCommand(ICache cache)
        {
            _cache = cache;
        }

        /// <summary>
        /// Get cache key - if not implemented or no key specified, command implementation will not cache fetched values
        /// </summary>
        /// <param name="request"></param>
        /// <returns></returns>
        protected virtual string GetCacheKey(TInput request)
        {
            return string.Empty;
        }

        /// <summary>
        /// Run the command, get data
        /// </summary>
        /// <param name="request">Input request</param>
        /// <param name="cacheOverride">Optional instruction to (if set true) override cache, fetch latest and update cache</param>
        /// <returns>Search result of specified return type</returns>
        public TReturn Run(TInput request, bool cacheOverride = false)
        {
            string cacheKey = GetCacheKey(request);
            if ((_cache != null) && !string.IsNullOrEmpty(cacheKey))
            {
                //Note: assume that if we're overriding cache we fetch, clear and re-cache
                //To avoid caching old values while we fetch new, don't clear/fetch/re-cache, instead fetch THEN update cache

                if (!cacheOverride)
                {
                    //Try fetch from cache
                    var cachedValue = _cache.Get<TReturn>(cacheKey);
                    if (cachedValue != null)
                    {
                        return cachedValue;
                    }
                }

                //If we get to here, either we override cached value OR result was never cached
                TReturn result = FetchCore(request);

                //May want to validate the result, and if necessary prevent caching
                if (IsValidResult(result))
                {
                    _cache.Set(cacheKey, result, LifetimeType);
                }

                return result;
            }
            //No caching
            return FetchCore(request);
        }

        protected abstract TReturn FetchCore(TInput request);


        /// <summary>
        /// Is the returned result valid? If not, do not cache!
        /// </summary>
        /// <param name="result"></param>
        /// <returns></returns>
        protected virtual bool IsValidResult(TReturn result)
        {
            if (result == null)
            {
                return false;
            }
            return true;
        }
    }

    public class FetchProductCommand : FetchCommand<Product, int>
    {
        private readonly IProductRepository _repository;

        public FetchProductCommand(IProductRepository repository)
        {
            _repository = repository;
        }

        protected override Product FetchCore(int request)
        {
            return _repository.Get(request);
        }
    }

Yes, it’s a lot of code, but the pay-off comes when you re-use this for several operations by implementing the base FetchCommand. Each command implementation only needs to worry about what happens if the cache yields no value and you actually have to query the current value from the database.

Note that I also added the option of a “cache override” to my FetchCommand to allow you to bypass any cached value and go straight to the source.

My ProductService code, where I actually use the command, becomes nice and simple again:


        public Product Get(int id)
        {
            return new FetchProductCommand(_repository).Run(id);
        }

Of course, we don’t actually have any caching yet, because our command doesn’t have access to a cache, and doesn’t yet supply a valid cache key. So let’s fix that.


    public class FetchProductCommand : FetchCommand<Product, int>
    {
        private readonly IProductRepository _repository;

        protected override string GetCacheKey(int request)
        {
            return request.ToString();
        }

        public FetchProductCommand(IProductRepository repository, ICache cache) : base(cache)
        {
            _repository = repository;
        }

        protected override Product FetchCore(int request)
        {
            return _repository.Get(request);
        }
    }

So now we have a more elegant solution that we can use for any operation. By pushing most of the tedious cache interaction into the base class, each command only has to capture what’s different about its own operation. So we can fetch single products, or lists of products, or users, or anything really, without having to worry about how the cache works (or whether there even is a cache).

We still only need a single line to run the command for “fetch item from cache if possible, else go all the way to the database”.


        public Product Get(int id)
        {
            return new FetchProductCommand(_repository, _cache).Run(id);
        }

At this point, we could even dispense with our separate “service” layer and just have the controller run commands directly.

And if we wanted to get the absolute latest information, that “cache override” option I added is available to all commands:


            return new FetchProductCommand(_repository, _cache).Run(id, true);

One annoyance with this approach is that individual commands still need access to the cache and other services – it might be worth creating a “command factory” to dish out command instances pre-loaded with references to the cache, data repositories and other services.
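
A sketch of what such a factory might look like (the naming is mine):

    //Dishes out commands pre-loaded with the cache and repository references
    public class CommandFactory
    {
        private readonly IProductRepository _repository;
        private readonly ICache _cache;

        public CommandFactory(IProductRepository repository, ICache cache)
        {
            _repository = repository;
            _cache = cache;
        }

        public FetchProductCommand CreateFetchProductCommand()
        {
            return new FetchProductCommand(_repository, _cache);
        }
    }

The service method then shrinks back to a one-liner: _commandFactory.CreateFetchProductCommand().Run(id).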

One design decision I have made is to pass the specific product request into the “run” method, rather than making the command immutable. You can do either; it might depend on whether you want a factory dishing out a “generic” FetchProductCommand, or building a specific “get product number 27” command.

Cache Keys

Every item in the cache is a (key, value) pair, where the key represents a specific item. So far in this example, I’ve used the most basic key – an item id. But what if you were storing, say, products and users in the cache? There’s the risk of id clashes and all sorts of issues. So you could have a more complex key, eg. “Product2” or “UserJohnDoe”. Suddenly, the string-building necessary to generate cache keys becomes complex.

For this, we can build a little helper, and use some simple fluent syntax to build our own cache keys.


    public class CacheKeyGenerator
    {
        public const string Separator = "|";

        private IList<string> _keys = new List<string>();

        public string Key
        {
            get
            {
                return string.Join(Separator, _keys);
            }
        }

        public CacheKeyGenerator Add(string key)
        {
            if (!string.IsNullOrEmpty(key))
            {
                _keys.Add(key);
            }
            return this;
        }

        public CacheKeyGenerator Add(int? key)
        {
            if (key.HasValue)
            {
                _keys.Add(key.ToString());
            }
            return this;
        }

        //Extend to handle other types
    }

    //Example usage:
    var key = new CacheKeyGenerator().Add(request.Username).Add(request.Id).Key;

Our example cached “get product” command took a very simple request (an integer Id), a more complex operation might require a request class with multiple properties. It might also make sense to have a request class that generates its own cache key (so the command doesn’t even have to know the details of the data it’s requesting). This is especially useful if you need that key elsewhere eg. to purge the cache of a specific value.
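
For example (a hypothetical request class building on the CacheKeyGenerator above – the property names are illustrative):

    public class ProductSearchRequest
    {
        public string Supplier { get; set; }
        public int? CategoryId { get; set; }

        //The request builds its own cache key, so the command doesn't need to
        public string CacheKey
        {
            get
            {
                return new CacheKeyGenerator()
                    .Add("ProductSearch")
                    .Add(Supplier)
                    .Add(CategoryId)
                    .Key;
            }
        }
    }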

If you’re not worried about your cache keys being human-readable, you could use some kind of hash or checksum approach instead of string-building to generate your cache keys.
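
For instance, something like this (SHA-1 is an arbitrary choice here):

    //Hash the built-up key when compactness matters more than readability
    public static string HashKey(string rawKey)
    {
        using (var sha = System.Security.Cryptography.SHA1.Create())
        {
            var bytes = System.Text.Encoding.UTF8.GetBytes(rawKey);
            return System.Convert.ToBase64String(sha.ComputeHash(bytes));
        }
    }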

Removing Cached Items

To prevent “stale” data, cached information should be purged from the cache. This usually happens automatically – old items disappear and are re-cached the next time you fetch them. In the example I also added a “cache override” flag to allow for force-fetching the latest information.

Intervening to remove/refresh cached information can be tricky. Clearing a single item from the cache is easy. Clearing the entire cache is easy. Clearing selected items from the cache is hard, and your best bet is to clear any items where the key matches a pattern. This might lead you into having to know the details of how individual items/operations are cached, and really you want a caching strategy that hides such details.

The example shows caching the “get all products” method. In practice, this might not be the smart thing to do (especially as the “get all products” method might have a query filter, and then you might have many different variants of search results cached). Any time you have the same item of data stored multiple times in the cache (eg. from “get one” and “get all” operations) then you either risk stale/mismatched data, or have to deal with the pain of knowing which operations to purge from the cache when you update.

Alternative Approaches?

This probably isn’t the only way to deal with caching in an elegant fashion. To try and remove the need for a new class per cached operation, I did investigate wrapping my cached operation in a “using” statement (with something like a generic IDisposable cache handler). But I ran into the same problems as my initial helper method – the generic before/after cache wrapping needs not only the key (for the cache check) but the value (to populate for next time), and you don’t know whether you’ll be running the specific fetch operation until the generic cache check has happened.

You can also use this “command” approach for your non-cached API and database queries if you can’t wrap the operation in a simple using statement. And if you do need to be able to chain actions together or handle undo actions, then wrapping your operations in a command might make sense.

Implementing Feature Toggles in a .Net MVC Application

This is part 2 of a set of posts on feature toggles (part 1 is here). These are some example code snippets (C#) based on techniques I have used to implement feature toggle systems in .Net applications. This example assumes that Unity is used for dependency injection. A real-world implementation would likely include a database for feature storage (with admin screens) and caching of feature checks for performance.

All Known Features

First, you need a master list of known features:


public enum FeaturesEnum
{
    CustomiseUserWorkspace,
    CancelSubscription,
    NewAwesomeThing
    //etc
}

Feature-Checking Code

Now you’re going to want a central authority for the current feature toggle state – a FeaturesService, and some kind of FeatureStore (not shown, add your own implementation). The FeatureStore implementation could be a config file or a database (hence the “ToggleFeature” method).


    public interface IFeaturesService
    {
        bool IsEnabled(FeaturesEnum feature);

        void ToggleFeature(FeaturesEnum feature, bool isEnabled);
    }

    public class FeaturesService : IFeaturesService
    {
        private readonly IFeatureStore _featureStore;

        public FeaturesService(IFeatureStore featureStore)
        {
            _featureStore = featureStore;
        }

        public bool IsEnabled(FeaturesEnum feature)
        {
            var featureRecord = _featureStore.GetFeature(feature);
            return (featureRecord == null) ? false : featureRecord.IsEnabled;
        }

        public void ToggleFeature(FeaturesEnum feature, bool isEnabled)
        {
            _featureStore.SetFeature(feature, isEnabled);
        }
    }

    public class FeatureRecord
    {
        public FeaturesEnum FeatureType { get; set; }
        public bool IsEnabled { get; set; }
    }


That’s all you need for basic feature toggle infrastructure. Now you need to add code to check those feature toggles.

Checking Feature State

First, to check for a feature and branch the code (eg. in a controller):


    public class HomeController : Controller
    {
        private readonly IFeaturesService _featuresService;

        public HomeController(IFeaturesService featuresService)
        {
            _featuresService = featuresService;
        }

        public ActionResult Index()
        {
            if (_featuresService.IsEnabled(FeaturesEnum.NewAwesomeThing))
            {
                //Do the shiny new thing
            }
            else
            {
                //Do the boring old thing
            }

            return View();
        }
    }

If you want to toggle UI elements (or hide the button that launches your new feature), add some HTML helper code:


    public static class FeaturesHelper
    {
        private static IFeaturesService _featuresService = null;
        private static IFeaturesService FeaturesService
        {
            get
            {
                if (_featuresService == null)
                {
                    _featuresService = UnityConfig.Container.Resolve<IFeaturesService>();
                }
                return _featuresService;
            }
        }

        public static bool IsFeatureEnabled(this HtmlHelper helper, FeaturesEnum feature)
        {
            return FeaturesService.IsEnabled(feature);
        }
    }

And make a single-line check:


@if (Html.IsFeatureEnabled(FeaturesEnum.NewAwesomeThing))
{
<p>New awesome markup</p>
}

And finally, putting security checks on the UI is never enough, so you might want some access control attributes:


    public class FeatureEnabledAttribute : AuthorizeAttribute
    {
        private readonly FeaturesEnum _feature;

        public FeatureEnabledAttribute(FeaturesEnum feature)
        {
            _feature = feature;
        }

        protected override bool AuthorizeCore(HttpContextBase httpContext)
        {
            var featuresService = UnityConfig.Container.Resolve<IFeaturesService>();
            return featuresService.IsEnabled(_feature);
        }
    }

Now you have another one-line check you can use to block access to controllers or methods:


    [FeatureEnabled(FeaturesEnum.NewAwesomeThing)]
    public class AwesomeController : Controller

Notes

I favour using enums over strings for my known features, because then I don’t have to worry about run-time typos suddenly checking for a brand new unknown feature. It also makes finding and removing features easier. And depending on the language used, it might even be easy to document the features using decoration attributes (eg. adding longer descriptions for an admin system).

Any “master record” of which features exist should live with the code. You’re going to need to provide this to any admin system you build (via reflection, API call, etc).

When it comes to unit testing, it’s probably worth your while to actually implement a “FakeFeatureService” that’s just an in-memory dictionary and use the standard ToggleFeature() methods, rather than going with a mocking framework. Also, if you go with enums for your list of known features, the list is going to change frequently. To prevent brittle tests, don’t use a specific feature when testing your feature toggle framework code, just write a little builder/helper to give you a random valid feature enum from the list of current features.
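
A minimal sketch of both ideas, against the IFeaturesService interface above (the helper class name is my own):

    //In-memory fake for unit tests - no mocking framework required
    public class FakeFeatureService : IFeaturesService
    {
        private readonly Dictionary<FeaturesEnum, bool> _features = new Dictionary<FeaturesEnum, bool>();

        public bool IsEnabled(FeaturesEnum feature)
        {
            bool isEnabled;
            return _features.TryGetValue(feature, out isEnabled) && isEnabled;
        }

        public void ToggleFeature(FeaturesEnum feature, bool isEnabled)
        {
            _features[feature] = isEnabled;
        }
    }

    //Pick a random valid feature so framework tests don't depend on a specific one
    public static class FeatureEnumBuilder
    {
        private static readonly Random Random = new Random();

        public static FeaturesEnum AnyFeature()
        {
            var values = (FeaturesEnum[])Enum.GetValues(typeof(FeaturesEnum));
            return values[Random.Next(values.Length)];
        }
    }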

Notes on Implementing Feature Toggles

If you want to release software frequently, you need to be able to cut releases at almost any time, and your code always needs to be in a releasable state (so no “code freeze” while you test and assemble the quarterly release). This makes dealing with in-progress feature development an issue, assuming you can’t completely build and test a new feature in a matter of hours.

One solution is to rely heavily on code branches for development and release, “cherry picking” the feature-complete-and-tested branches. This can introduce a load of test headaches though, as you try to coordinate which of the many possible “release candidates” are viable, and it prevents you using a simple no-or-minimal-branching “trunk-based” or “github-flow” workflow.

Another solution for regularly releasing constantly-changing software is to use a system of “feature toggles” (AKA “feature flags”). The basic idea is that the new code (which you may or may not be comfortable releasing) is already part of the release, but might not actually be available to use. Basically, you leave both the old (if you’re reworking something) and the new code in the release until you’re happy that the new code is ready to go into production, at which point the new code is “baked in”. With some simple “is feature X switched on” logic, and an admin screen or config file, you can control whether or not the new feature is “on”. And as a bonus, you can easily test old versus new behaviour, or even hit the emergency kill-switch and revert to the old code after you’ve deployed to production.

You could just have a load of bespoke configuration settings and checks, but the aim is to decouple and leverage as much common code as possible, so that you can just make a simple standard check for “is this new feature enabled?” and not have to worry about the underlying feature-toggling implementation.

What follows is some notes and observations based on the experience of implementing a feature toggle system (for a large e-commerce monolith .Net MVC web app).

Where and When to Check Feature Toggles

If you were building an API, you could just create a new version of an API method and deprecate the old one. Feature toggles are a great solution for UIs and other customer-facing apps where you want to hide and control the “old versus new code” decision. You can use feature toggles to control flow in microservices or “back-end” code, but you probably don’t need to.

To implement a feature toggle, you’re going to have to add code that you need to remove later, and to keep things simple you want to make as few checks as possible for each feature. Generally, you want to check a feature and branch to use the new path at the earliest possible opportunity, and you want that check to be a one-liner.

Typical examples of checks you would make:

  1. UI controls – the “should I show this button?” check for whether to show new/old or new/no UI component
  2. Authorisation/security – the “is the user allowed to visit this page?” check to block access to unfinished code
  3. Code branching for new/old behaviour – branch at the earliest opportunity, typically one of the first checks a controller action method would make

A suggestion: work on the basis that features default to “off”. Seriously, the additional code and deployment configuration complexity needed for “default to on” is really not worth it. If you have to ship the code with the feature “on”, then re-think your design.

How to Configure / Control Feature Toggles

To get started, you could have a simple “features” configuration file that you ship with your application. This does make it difficult to turn features on or off, other than when you deploy. A database (with an optional admin screen) gives you more control. If you have a microservices / service-based architecture, you might want a separate “features” microservice, although managing a small set of features for each individual UI app is probably going to work better (the feature toggle is generally used to control the start of the user journey). If you’re building your own admin system, a single shared admin UI calling APIs on your platform of client apps and services probably makes sense.

The Life of a Feature Toggle

Over its short life, the process should be:

  1. Create feature toggle, develop the new code, test locally in “on” and “off” states
  2. Allow the new code to ship to Test and Production in incomplete state, leave the feature “off”
  3. When development is complete, ship code to Test and test the “on” and “off” states
  4. Ship to Production, turn on the toggle
  5. After a few days/weeks without problems, remove the toggle and the “old code path” from the code – the change will work its way through to Production

Creating and Removing Features

Features should be easy to create and remove. You should only need to “create” a feature in one place to use it (eg. a “master feature list” enum). If you go for a database approach, don’t require migrations (or worse, a manual developer process) to add database records – just have the database record automatically created when you first toggle a feature on, and assume that “no matching record” means the feature is “off” (if you’re using config files, assume that “no setting” means feature defaults to “off”).

If you store features in a database, but your code holds the master record of the features that exist, you’re going to end up with a load of orphaned records for the features you removed. If you really care, you can clean these up manually in the database, or just have your application check (eg. on startup) the master list of features against the DB records and purge any old features.

The simplest approach is actually to do nothing to create features and just use a convention of referring to features by name, but if you can reference a “master list” of known features, you guard against any nasty issues with typos later on, and it’s easier to find feature references when you come to remove them later.

Advanced Feature Management

For bonus points, if you have a lot of feature toggles to manage, you might want to build the following into your admin system and testing process:

  1. The ability to export/import a set of toggles, so you can replicate the live setup on development or test environments
  2. The ability to script setting toggle states for doing an automated test run
  3. The ability to compare the toggles on two different environments (eg. live vs test)
  4. An indication of when a feature was toggled on (so you know it’s safe to remove a toggle after a couple of weeks)
  5. An audit trail of who turned the feature on, and when

Performance

If your feature toggles are stored in a database (or features microservice), there’s a cost to each feature check, and every user action could check the same toggle multiple times on the same screen (UI, security, code-branching). Invest in caching, even if it means that some users might have to wait an hour for the feature to become available.
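
As a sketch, assuming the IFeaturesService from the previous post and an in-memory cache with a fixed expiry (the implementation details are illustrative):

    //Decorator that caches feature state in memory for a fixed period - a
    //toggle change may take up to an hour to be picked up by every server
    public class CachedFeaturesService : IFeaturesService
    {
        private static readonly TimeSpan CacheLifetime = TimeSpan.FromHours(1);

        private readonly IFeaturesService _inner;
        private readonly object _lock = new object();
        private readonly Dictionary<FeaturesEnum, Tuple<bool, DateTime>> _cache
            = new Dictionary<FeaturesEnum, Tuple<bool, DateTime>>();

        public CachedFeaturesService(IFeaturesService inner)
        {
            _inner = inner;
        }

        public bool IsEnabled(FeaturesEnum feature)
        {
            lock (_lock)
            {
                Tuple<bool, DateTime> entry;
                if (_cache.TryGetValue(feature, out entry) && entry.Item2 > DateTime.UtcNow)
                {
                    return entry.Item1;
                }

                //Cache miss or expired - hit the underlying store and cache for next time
                bool isEnabled = _inner.IsEnabled(feature);
                _cache[feature] = Tuple.Create(isEnabled, DateTime.UtcNow.Add(CacheLifetime));
                return isEnabled;
            }
        }

        public void ToggleFeature(FeaturesEnum feature, bool isEnabled)
        {
            _inner.ToggleFeature(feature, isEnabled);
            lock (_lock)
            {
                _cache[feature] = Tuple.Create(isEnabled, DateTime.UtcNow.Add(CacheLifetime));
            }
        }
    }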

Kill Your Feature Toggles!

It’s cool to be able revert new features to their old state, but every feature toggle in your codebase has a maintenance cost. And if a feature is hard to remove, that might indicate a more fundamental problem (is it really a long-term configuration setting?). When you’re happy with a feature being “on”, then bake it in to the code.

My ultimate aim would be to add an “auto-lock” mechanism to any feature toggle system, automatically locking a feature “on” after it had been enabled in production for a set length of time.

Not Everything is a Feature Toggle – Real World Lessons

I helped build a bespoke feature toggling system for an application deployed in two countries (UK and US deployments), each of which had its own test and live environments (in addition to 20 local-developer environments). We often had 30-40 toggles in operation at once. The ability to import/export toggles was very useful, as was the ability to compare UK/US or test/live.

We did end up with a lot of features that were hard to kill, either because they were buried deep in the code (always check features as early in the user request/journey as possible), because they were inter-dependent and had to be removed in a set order, or because they had been built “for the US only” and were actually configuration settings. To help deal with the long list of features, I ended up adding description attributes to each feature in the “master list” enum with a standard “Jira card number and long description” comment, and using this to provide column-sorting options on the admin screen.
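
A sketch of what that decorated enum might look like (card numbers and text are made up):

    public enum FeaturesEnum
    {
        [System.ComponentModel.Description("ABC-101: Allow users to customise their workspace layout")]
        CustomiseUserWorkspace,

        [System.ComponentModel.Description("ABC-242: Self-service subscription cancellation")]
        CancelSubscription
        //etc
    }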

I used some of the same tricks when refactoring the feature toggle system in another application. Putting the “master list” of features in the code helped simplify an app that was initially built to use the database as the source of all feature toggle knowledge (migration-heavy feature toggle creation). The admin screen didn’t get any more complex when it switched to reading known features from an enum instead of a database, and the setup process replaced “create an enum and a migration” with “add an enum”.

Remember, every feature toggle you add should be removed and baked into the code a couple of weeks after it gets turned on in production. If it relates to a “for region X only” feature, it should still be safe to turn it on in all regions.

Feature Toggles and A/B Testing

A simple implementation of feature toggles is to only have “on” and “off” states, but you could if you wanted extend feature toggles to include A/B testing or “canary release” scenarios. Assuming you have a production environment with “A” and “B” deployment slots (one handles the lucky “canary” users), you could instead have three feature states – “off”, “on in B only”, “on for all”. Your feature toggle system becomes a little more complicated, but you don’t need to change a single line of code when checking features. Your code can still ask the question “is feature X enabled?”, and the underlying logic will determine whether you’re running on the A or B environment and whether the feature is “on” for you. You could even expand this approach to any number of “slots”, depending on complexity – a shared development environment might need such an approach to allow individual developers to independently toggle features.
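
A minimal sketch of the extended state and check (the names are illustrative; how you detect the current slot depends on your hosting setup):

    public enum FeatureState
    {
        Off,
        OnInSlotBOnly,
        OnForAll
    }

    //Calling code still just asks "is feature X enabled?" - only this logic changes
    public static bool IsEnabledFor(FeatureState state, bool runningInSlotB)
    {
        return state == FeatureState.OnForAll
            || (state == FeatureState.OnInSlotBOnly && runningInSlotB);
    }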


Testing Your Unity Configuration when you Build or Deploy .Net Apps

Dependency injection is great, but it does come with a risk that errors with the container configuration don’t show up until runtime (eg. forgetting to specify which implementation should be used for a given interface). In an MVC app, this can mean broken controllers, fatal errors with service classes, and parts of your site/application being unavailable. So a safeguard against this is to check your IOC configuration, either at deployment time, or even better, at build time. Then you can fail the build or back out of a deployment if the various services or controllers can’t be resolved.

To solve this problem on a recent project, I wrote some custom code to do a “get me one of everything” test. So if I have a service requiring other services/repositories, or a controller that uses those services, I know about any missing Unity configuration entries. This post covers .Net, MVC and Unity, but the same approaches should work for other IOC setups.

I’ve listed a couple of different approaches to Unity configuration testing in the following code snippets – applying a simple strategy to a large production codebase threw up a few issues, and prompted some experimentation and refactoring.

Basic Approach

First, I try and resolve everything I can find registered in the Unity container. Then I check that I can get an instance of each controller. This should ensure that all the constructor-injection is tested. Any errors with Unity get collected up and reported (and hopefully other errors are left to a different test). If the checks return no errors we’re good, otherwise we need to go fix the configuration. There are limits to this approach – I’m still trying to figure out a way to deal with individual calls to directly resolve implementations from the container – but checking the listed container contents and all the controllers cuts down on a lot of potential problems.

Running a Unity Check

Once you have a basic Unity checker class setup, you have options for running it. I have an API controller run this, so I can poll it from a command-line app/script on deployment. Our build pipeline does a “deploy to a development environment and run some tests” following a successful CI build, so we know soon after a code commit whether we have a shippable build. Getting the tests running at the local/CI build stage (eg. via unit tests) is even better.

Example Code

The first thing I need is a simple app with Unity set up (your production codebase will no doubt require some tweaks), and some very basic/stupid test interfaces/implementations. Then I can add some Unity configuration. I just grabbed the Unity.Mvc NuGet package, used the out-of-the-box UnityConfig.cs, and edited it to set up a load of config entries eg.

container.RegisterType<IDoStuffService, DoStuffService>();

I then made sure I had service implementations and controllers constructed with dependency injection so they get whatever’s configured in Unity as the implementation, eg.

public StuffController(IDoStuffService doStuffService) { ... }

I start by building a basic Unity checking class like this:


public class UnityConfigurationChecker
{
    public static IList<string> GetUnityErrors()
    {
        List<string> errors = new List<string>();
        var container = UnityConfig.GetConfiguredContainer();

        foreach (ContainerRegistration registration in container.Registrations)
        {
            try
            {
                Type registrationType = registration.RegisteredType;
                var instance = container.Resolve(registrationType);
            }
            catch (Exception e)
            {
                errors.Add(e.Message);
            }
        }

        //Note: going to add extra checks for controllers in here!

        return errors;
    }
}

I can then call it to get a list of (hopefully zero) errors with

var messages = UnityConfigurationChecker.GetUnityErrors();

This class isn’t intended to be hit by production code or user action, it’s just a helpful utility that tells us all the configuration errors we currently have. No messages is good, otherwise we need to investigate the issues (so I just catch all the exceptions and make a note of them).

Running Unity Tests

You could start with a basic controller/action that just runs the Unity checks and reports any errors to a view. Expand this with an API method and a script that runs during your build process to call the API and get a list of (hopefully zero) issues.
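
For example, a polling endpoint might look something like this (a sketch using Web API 2 conventions – the controller and route names are my own):

    public class DiagnosticsController : ApiController
    {
        //Deployment script polls this and backs out on a non-200 response
        [HttpGet]
        public IHttpActionResult UnityCheck()
        {
            var errors = UnityConfigurationChecker.GetUnityErrors();
            if (errors.Count == 0)
            {
                return Ok("OK");
            }
            return Content(HttpStatusCode.InternalServerError, errors);
        }
    }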

Now the fun begins as we start commenting-out Unity entries and watch those errors eg. “The current type, UnityChecker.Services.IDoStuffService, is an interface and cannot be constructed. Are you missing a type mapping?”

So far this only tests the registered container entries. It will only pick up issues with registered types that require other registered types.

Assume you have this setup of three services, only one of which requires constructor injection:

container.RegisterType<IDoStuffService, DoStuffService>();
container.RegisterType<IDoThingsService, DoThingsService>();
container.RegisterType<IDoStuffAndThingsService, DoStuffAndThingsService>();

public DoStuffService() { ... }
public DoThingsService() { ... }
public DoStuffAndThingsService(IDoStuffService doStuffService, IDoThingsService doThingsService) { ... }

If you forgot to register either the DoStuffService or DoThingsService, the DoStuffAndThingsService can’t be resolved.

Testing Unity With Unit Tests

So this gives you a Unity-checking utility that you can use on a running web application, but that means you have to wait until deployment time to check your config. Better than a fatal runtime exception days after deployment, but checking this during the CI process (or even before check-in) would be awesome. So let’s try building container-checking into a unit test.

Add a unit test project to the solution (I’m using MsTest, but the NUnit approach should be the same) and create a simple unit test class:


[TestClass]
public class UnityConfigurationCheckerTest
{
    [TestMethod]
    public void GetUnityErrors_ExpectZeroErrors()
    {
        var messages = UnityConfigurationChecker.GetUnityErrors();
        Assert.AreEqual(0, messages.Count);
    }
}

Verifying Controllers (Version 1)

So far, this example only checks the container registrations, not the controllers. We can amend our method with extra code that also checks the controllers:


    //Need reference to the main MVC web application assembly
    //May need to check for typeof Controller and ApiController
    //Note: Prevent false-positives of abstract controllers
    var assembly = typeof(UnityConfig).Assembly;
    var controllerTypes = assembly.GetTypes().Where(t => t.IsSubclassOf(typeof(Controller)));

    foreach (var controllerType in controllerTypes)
    {
        if (!controllerType.IsAbstract)
        {
            try
            {
                var controllerInstance = container.Resolve(controllerType);
            }
            catch (Exception e)
            {
                errors.Add(e.Message);
            }
        }
    }
    return errors;

Note that searching the correct assembly for controllers is vital. Amend as necessary if you want to use the Unity checker with multiple different web application assemblies.
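
As a sketch, scanning several assemblies and also catching Web API controllers (the OtherWebApp names are hypothetical):

//Scan multiple web application assemblies, checking both MVC and Web API controllers
var assemblies = new[] { typeof(UnityConfig).Assembly, typeof(OtherWebApp.OtherUnityConfig).Assembly };
var controllerTypes = assemblies
    .SelectMany(a => a.GetTypes())
    .Where(t => t.IsSubclassOf(typeof(Controller)) || t.IsSubclassOf(typeof(ApiController)));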

If the controller constructor expects something that’s not registered in the container, we’ll now get an error about it. Actually, I did get an error regarding one of the controllers in the demo MVC solution I used as a starting point, so some care may need to be taken about what constitutes an actual error in your configuration…

Better Exception Checking

Treating every exception you hit as “fatal” is unhelpful; we want to distinguish between container registration errors and other runtime issues (eg. a service failing because of some unrelated database or configuration problem). Be mindful of the testing context you run the Unity checks in. This is especially important if you’re going to use container checking as a build-breaking condition.

An improvement is to catch “System.InvalidOperationException” and “Microsoft.Practices.Unity.ResolutionFailedException”, and then log the exception type and message. For controllers that throw ResolutionFailedException, if there’s an InnerException then use that instead.
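
Inside the resolve loop, that might look something like this (a sketch, not the final version):

catch (ResolutionFailedException e)
{
    //For controller resolution, the InnerException usually holds the useful detail
    Exception detail = e.InnerException ?? e;
    errors.Add(detail.GetType() + " - " + detail.Message);
}
catch (InvalidOperationException e)
{
    errors.Add(e.GetType() + " - " + e.Message);
}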

As a test, you could make sure your container registers everything you need, then deliberately throw an exception in a service/controller constructor – differentiate between being unable to construct the controller (missing Unity configuration entry) and a subsequent runtime exception.

Deciding on the correct set of exceptions to catch / ignore is still a judgement call and might not catch every scenario of “IOC configuration vs other runtime error” problem you run into…

Verifying Controllers (Version 2)

At this point, it’s worth re-thinking the controller checks. We don’t really care whether we can construct an instance of a controller; we care whether the container can supply everything the controller needs to be constructed. So what if, instead of exercising Unity by constructing an instance of the controller, we just check that everything injected into the controller’s constructor is available in Unity?

So my refactored UnityConfigurationChecker class now contains this:


public class UnityConfigurationChecker
{
    private static void ResolveImplementationFromIoc(IUnityContainer container, Type registrationType)
    {
        try
        {
            var instance = container.Resolve(registrationType);
        }
        catch (InvalidOperationException)
        {
            //Rethrow genuine registration problems (preserving the stack trace)
            throw;
        }
        catch (ResolutionFailedException)
        {
            throw;
        }
        catch (Exception)
        {
            //Ignore
        }
    }

    public static IList<string> GetUnityErrors()
    {
        List<string> errors = new List<string>();

        var container = UnityConfig.GetConfiguredContainer();


        foreach (ContainerRegistration registration in container.Registrations)
        {
            Type registrationType = registration.RegisteredType;
            try
            {
                ResolveImplementationFromIoc(container, registrationType);
            }
            catch (Exception e)
            {
                errors.Add(e.GetType() + " - " + e.Message);
            }
        }
        //Need reference to the main MVC web application assembly
        //May need to check for typeof Controller and ApiController
        var assembly = typeof(UnityConfig).Assembly;
        var controllerTypes = assembly.GetTypes().Where(t => t.IsSubclassOf(typeof(Controller)));

        foreach (var controllerType in controllerTypes)
        {
            if (!controllerType.IsAbstract)
            {
                var constructors = controllerType.GetConstructors();
                foreach (var constructor in constructors)
                {
                    System.Reflection.ParameterInfo[] parameters = constructor.GetParameters();
                    foreach (var parameter in parameters)
                    {
                        var parameterType = parameter.ParameterType;
                        if (parameterType.IsInterface)
                        {
                            try
                            {
                                ResolveImplementationFromIoc(container, parameterType);
                            }
                            catch (Exception e)
                            {
                                errors.Add("Registration error with controller " + controllerType.FullName + ". " + e.GetType() + " - " + e.Message);
                            }
                        }
                    }
                }
            }
        }

        return errors;
    }
}

And then a quick test – get the controller to deliberately fail eg. by throwing a new NotImplementedException in the constructor – and as long as the Unity configuration is fine, the Unity checks will pass, because you’re not actually constructing an instance of the controller. However, throw an exception in one of the concrete implementations you resolve from the container, and you will get an error.
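
As a sketch of that test, reusing the demo types from earlier:

//Unity checks still pass – the checker never constructs the controller itself
public StuffController(IDoStuffService doStuffService)
{
    throw new NotImplementedException();
}

//Unity checks now fail – registered container entries are still resolved, so this runs
public DoStuffService()
{
    throw new NotImplementedException();
}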

Check again that this is all working by editing the UnityConfig.cs and hiding a few necessary registrations.

Gotchas and Other Issues

I encountered a few issues when applying this strategy to an existing codebase. The Unity configuration works fine in its natural web application context, but running it in an unfamiliar unit testing context throws up problems. Because you’re actually running the Unity registration and creating instances of those registered types during your tests, you may encounter errors in the setup before the test even runs and starts testing your newly-configured container.

Firstly, I had a few types that wouldn’t resolve themselves because they fell over at construction time looking for non-existent configuration settings. A few extra “appSettings” entries in the unit test project’s “app.config” file fixed that.
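
Something like this in the unit test project’s app.config (the key name is made up for illustration):

<configuration>
  <appSettings>
    <!-- Settings the services expect to find at construction time -->
    <add key="SomeServiceEndpoint" value="http://localhost/fake" />
  </appSettings>
</configuration>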

Next, I had our old friend the “Could not load file or assembly” exception. Because the unit test project is running the same code as your target application, it’s going to need to reference and load the same assemblies. Good luck debugging that one.

I was running the Unity configuration in application startup with a “UnityConfig.RegisterComponents()” call to run all my “RegisterType” calls. I also had the unit test run that as a setup step to mimic a clean application startup.
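
In MsTest, that setup step looks like this:

[TestInitialize]
public void Setup()
{
    //Mimic a clean application startup before each test
    UnityConfig.RegisterComponents();
}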

You can wrap a try-catch around your setup call, or just step-through in the debugger to find the problem registration, but it might take some effort to get your container to work correctly for your unit tests.

An Alternative Approach Using Reflection (Version 3)

There’s an alternative approach worth considering at this point. The aim of this exercise is not to actually resolve and construct any concrete instances (we won’t be using them) – resolving is just one way to test the container (probably the most thorough way). All we actually care about is one thing: if you have a service/controller with a concrete instance of another service injected at run-time, will it work? Or will the container say “I don’t know how to resolve that” and let the app fall over?

I already used reflection to inspect my controller constructor parameters, so I can do the same trick for my service classes.

So here’s code for an alternative approach. I also took the opportunity to wrap any returned errors in a class that can easily show errors in the test runner just by overriding ToString().


public class UnityConfigurationChecker
{
    private static Type GetMappedRegisteredConcreteType(IUnityContainer container, Type registrationType)
    {
        if (container.IsRegistered(registrationType))
        {
            //Don't resolve, just ask the container what it might resolve to
            ContainerRegistration registration = container.Registrations.FirstOrDefault(r => r.RegisteredType == registrationType);
            if (registration != null)
            {
                return registration.MappedToType;
            }
        }
        throw new ArgumentException("Could not map to concrete type for " + registrationType);
    }

    private static bool InspectConcreteTypeEnsureConstructorParametersValid(IUnityContainer container, Type concreteType)
    {
        //Don't care about abstract controllers / services
        if (!concreteType.IsAbstract)
        {
            //Look for constructor parameters and resolve all injected interface implementations
            var constructors = concreteType.GetConstructors();
            foreach (var constructor in constructors)
            {
                System.Reflection.ParameterInfo[] parameters = constructor.GetParameters();
                foreach (var parameter in parameters)
                {
                    var parameterType = parameter.ParameterType;
                    if (parameterType.IsInterface)
                    {
                        try
                        {
                            GetMappedRegisteredConcreteType(container, parameterType);
                        }
                        catch (Exception e)
                        {
                            throw new ArgumentException("Registration error with type " + concreteType.FullName + ". " + e.GetType() + " - " + e.Message);
                        }
                    }
                }
            }
        }
        return true;
    }

    public static UnityConfigurationCheckSummary GetUnityErrorsContainerConfiguration()
    {
        UnityConfigurationCheckSummary summary = new UnityConfigurationCheckSummary();
        var container = UnityConfig.GetConfiguredContainer();

        foreach (ContainerRegistration registration in container.Registrations)
        {
            Type registrationType = registration.RegisteredType;
            var concreteType = registration.MappedToType;
            try
            {
                InspectConcreteTypeEnsureConstructorParametersValid(container, concreteType);
            }
            catch (Exception e)
            {
                summary.Errors.Add(e.Message);
            }
        }
        return summary;
    }

    public static UnityConfigurationCheckSummary GetUnityErrorsControllerConfiguration()
    {
        UnityConfigurationCheckSummary summary = new UnityConfigurationCheckSummary();
        var container = UnityConfig.GetConfiguredContainer();

        var assembly = typeof(UnityConfig).Assembly;
        var controllerTypes = assembly.GetTypes().Where(t => t.IsSubclassOf(typeof(Controller)));

        foreach (var controllerType in controllerTypes)
        {
            try
            {
                InspectConcreteTypeEnsureConstructorParametersValid(container, controllerType);
            }
            catch (Exception e)
            {
                summary.Errors.Add(e.Message);
            }
        }
        return summary;
    }
}


public class UnityConfigurationCheckSummary
{
    public IList<string> Errors { get; set; }
    public int Count { get { return Errors.Count(); } }

    public UnityConfigurationCheckSummary()
    {
        Errors = new List<string>();
    }

    public override string ToString()
    {
        if (Errors.Any())
        {
            return string.Join(", ", Errors);
        }
        return string.Empty;
    }
}

Here’s the unit test code, checking that we get no errors, but giving a nice summary if we do. You might need to add a “Setup” method to reset/recreate your container for each test.


[TestClass]
public class UnityConfigurationCheckerTest
{
    [TestMethod]
    public void GetUnityErrorsContainerConfiguration_ExpectZeroErrors()
    {
        var summary = UnityConfigurationChecker.GetUnityErrorsContainerConfiguration();
        Assert.AreEqual(string.Empty, summary.ToString());
        Assert.AreEqual(0, summary.Count);
    }

    [TestMethod]
    public void GetUnityErrorsControllerConfiguration_ExpectZeroErrors()
    {
        var summary = UnityConfigurationChecker.GetUnityErrorsControllerConfiguration();
        Assert.AreEqual(string.Empty, summary.ToString());
        Assert.AreEqual(0, summary.Count);
    }
}

Summary – Verifying Unity Configuration

With some simple code like this, every time you add a service or change a controller, your Unity configuration should be tested for correctness with a regular unit test run. And you can always put the tests behind an API method and poll after a deployment to be sure that your configuration is correct.

Versioning .Net Applications With Team City and Git/SVN/TFS

Some notes on setting up version numbering for .Net applications and having your build server manage everything (I’m using Team City). I’ll cover some differences between centralised (TFS, SVN) and decentralised (Git) version control. These notes are based on a couple of “single-branch, build-once-and-redeploy” continuous delivery pipelines I set up for projects in TFS and Git.

Why Version?

We’ve all seen software shipped from developer machines, hoped we shipped the code we actually tested, and had to troubleshoot live issues where all we know about the code is it’s “probably whatever Bob had on his laptop three weeks ago”. Want the easy life? Bake in a version number that:

  1. Confirms the code was built and released through an approved CI/CD process – the presence of a default “1.0.0.0” version indicates worrying “laptop” deployments *
  2. Confirms the production code matches the tested code (and hasn’t had any bonus unpredictability merged in)
  3. Tells you where to find the exact code revision in source control
  4. Tells you which build job created it
  5. Gives you a changing version number to display to users that says “new stuff”

Display that version somewhere in your code and have it available on your build server.

* OK, you could cheat and fake a version number, but the aim is to make things so easy that doing things properly is the only option you consider.

What Version Number Format Should I Use?

This one is up to you. .Net projects allow a basic four-segment numbering system.

A recommended format is: “<major version>.<minor version>.<build number>.<revision>”.

Major/minor components are typically your “marketing version”, but you might want to use them to signal build time (eg. calendar date or sprint number). They probably won’t change often, and updates may be manually controlled. You definitely want to have build and revision/changeset numbers automatically applied so you can trace where the code came from. If your source control doesn’t use sequential version numbers (eg. Git) then you need to make a few adjustments, typically by using the revision checksum/snapshot instead of a sequential code revision.
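
For example, a version stamped under this scheme might look like:

2.1.345.67890

ie. marketing version 2.1, the 345th build, built from source revision 67890. (For Git, the revision moves into the informational version instead – more on that below.)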

Displaying the Version Number In Your Application

Your application should display the version number somewhere – on a help/about dialog, footer, via an “info” API call, etc.

Within your code, each project has an “AssemblyInfo.cs” file under “Properties”. This can contain versioning attributes for:

  • AssemblyVersion
  • AssemblyFileVersion
  • AssemblyInformationalVersion

Version and file-version components consist of four numbers (you might be able to squeeze letters into the file version, but you’ll get a warning). The informational version allows more freedom, so you could hide a checksum, like a Git revision (yes, I really did ship an application with version number “1.3.29.300bcd5309dacf3897fc41ba11dff56409b136db”).

These numbers are baked into the compiled code (inspect the properties of a DLL file).

Your default AssemblyInfo.cs entries look like this:

[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]

You can manually edit these and rebuild, but it’s much better to get your build process to apply them automatically. To display the version, just use:

System.Reflection.Assembly.GetExecutingAssembly().GetName().Version.ToString()

For the AssemblyInformationalVersion, use

Attribute.GetCustomAttributes(Assembly.GetExecutingAssembly(), 
typeof(AssemblyInformationalVersionAttribute)) as AssemblyInformationalVersionAttribute[]

And then display the first item in the array.
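
Putting that together, a sketch for grabbing and displaying the informational version (the “unknown” fallback is my own):

var attributes = Attribute.GetCustomAttributes(Assembly.GetExecutingAssembly(),
    typeof(AssemblyInformationalVersionAttribute)) as AssemblyInformationalVersionAttribute[];
var informationalVersion = (attributes != null && attributes.Length > 0)
    ? attributes[0].InformationalVersion
    : "unknown";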

If you’re using Git, the AssemblyInformationalVersion is the place to store your revision.

Build Pipelines for Continuous Delivery / Continuous Deployment

I’m assuming you have some sort of pipeline whereby code passes through an initial CI (build on code change) stage to Test (deployment) to Live (deployment). You can control the flow of this pipeline in a couple of different ways:

  • Single branch / “trunk-based” deployment – everyone commits from local workspace (or their own development branches) to a single main branch, and you build once (at the Continuous Integration stage) then redeploy the same package/artifact at subsequent Test/Live stages
  • Branch-based – you merge code from Dev to Test to Live branches, and rebuild at each stage before deploying

You can use either approach, but always “ship what you test”. Because centralised version control systems (eg. SVN and TFS) assign a new sequential revision number when you merge code to a different branch, it’s difficult to track code changes across branches, so I wouldn’t recommend a branch-based approach for SVN or TFS unless your process involves full continuous deployment at every step and you trust your automated test coverage completely. With Git’s decentralised model, the revision is itself a checksum, so you can tell whether a merge introduced any changes, and a branch-based continuous delivery approach becomes viable.


Versioning in Team City

Each build in Team City is assigned a build number. You edit this in the configuration’s General Settings panel – just enter a value for “Build number format”. Typically this is built from a number of properties, the default being “%build.counter%” (Team City’s internal count of builds for that configuration, also editable on the General Settings panel).

While the build counter is managed by Team City and the revision tracks back to source control, other build numbers can be applied. So to manage “marketing” versions for your product:

  1. Edit Project->Parameters
  2. Create two “configuration” parameters – “Project.Major.version” and “Project.Minor.version”

You could set up parameters for an individual build if you want, but project-wide parameters make sense. These values are now available to use in components of other build numbers as “%Project.Major.version%” and “%Project.Minor.version%”.

Some clarification – there are a few different build numbers you’ll have to manage here:

  • The Team City build counter – increments each time Team City runs a specific build configuration job eg. “build code for project X” (this is a single number)
  • The Team City build number – applied to each build run, displayed on the Team City dashboard and build reports (this might be a four-part version number)
  • The version number(s) baked into the code at build time

Team City makes setting version numbers really easy, using the “Assembly Info Patcher” build feature – use this for every build configuration that is actually building code (CI, or any branch-based “re-build and re-deploy” stage). Just edit a build configuration and go to Build Features->Assembly Info Patcher. Set the required “Assembly version”, “Assembly file version” and “Assembly informational version” values to edit the corresponding attributes in the AssemblyInfo.cs files.

Build Number Format (TFS)

For the initial CI build configuration use a build number format of

%Project.Major.version%.%Project.Minor.version%.%build.vcs.number%.%build.counter%

I just set this as the version and file version in the assembly info patcher dialog, and ignore the informational version.

If you’re taking the single-branch/build-once-and-redeploy approach, then for a chained/dependent build configuration (eg. Deployments to Test/Live) set the build number of subsequent builds in the chain to be

%dep.CIProjectName.build.number%

You can use this approach to assign the same build number to the same code built/deployed by different build configurations – I do this to track a release candidate from the initial CI stage (build and package) through the various Test and Live deployment pipeline stages.

Build Number Format (Git)

For Git, I set the build number format to

%Project.Major.version%.%Project.Minor.version%.%build.counter%

This means I only get three-part version numbers displayed on Team City, and no direct reference to the code.

I then set the assembly info patch values to be (on Build Features->Assembly Info Patcher)

Version and file version:

%Project.Major.version%.%Project.Minor.version%.%build.counter%.0

Informational version:

%Project.Major.version%.%Project.Minor.version%.%build.counter%.%build.vcs.number%

Passing Team City Version to Build Scripts

A quick note: Team City sets the current build number (the full four-part one) in a “BUILD_NUMBER” environment variable. So you could have an MsBuild script in your project use this if you need it (eg. to name files, or append to release notes and emails).

Example MsBuild usage:

<Message Text="Build number=$(BUILD_NUMBER)"/>
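
Or, as a sketch, use it to stamp a release notes file (the target and file names are made up):

<Target Name="StampReleaseNotes">
 <!-- Append the Team City build number to the release notes -->
 <WriteLinesToFile File="ReleaseNotes.txt" Lines="Build number=$(BUILD_NUMBER)" Overwrite="false"/>
</Target>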


Summary

So you can use your build server to generate meaningful version numbers, track the same code as it works through the various chained builds of your build pipeline from initial commit to production deployment, and bake the appropriate version into your code. With a version number available, you should have confidence that the production release is known and tested software, and not some mystery release you can’t recreate locally.


Choosing What Gets Deployed When Publishing A .Net Web Application

When you use MsBuild to package up a web application for publishing, you typically rely on the publish profile to manage either a “build and publish” or a “build a package for later deployment” process (a .zip package is created either way). By default, everything in your project may be copied to the output and included in the deployment package .zip file, unless you make adjustments.

You already have some control over whether files get copied into the output and packaged up, using file properties in the solution explorer, as set by the “Build Action” (set to “none” rather than “content”) and “copy to output directory” properties. However, it may be easier to just use some MsBuild to avoid packaging several files (or whole folders) based on a wildcard pattern.

Example MsBuild syntax for generating the deployment package would be:

msbuild WebApp\WebApp.csproj /t:Build /p:VisualStudioVersion=11.0;DeployOnBuild=True;PublishProfile="DeploymentPackage.pubxml";ProfileTransformWebConfigEnabled=False;Configuration=Release;


Including Extra Files In The Package

I previously covered ways to include extra files that your project doesn’t know about:

https://ajmburns.wordpress.com/2016/04/10/including-additional-generated-files-when-publishing-a-net-web-application/

Removing Files From The Package

If you want your deployment to omit selected files or folders, you can get the publish profile to do that as well. I have to do this to hide deployment parameter config files and selected custom scripts that are part of my solution, but should not be deployed.

To test, add the excludes to the main “PropertyGroup” element of your publish profile, then build the package and check the generated package .zip file.

Example – excluding a list of custom scripts:

<ExcludeFilesFromDeployment>
 compile-this.bat;compile-that.bat
</ExcludeFilesFromDeployment>

Example – excluding a wildcard list of configs:

<ExcludeFilesFromDeployment>
 parameters*.xml
</ExcludeFilesFromDeployment>

Example – excluding a folder:

<ExcludeFoldersFromDeployment>
 myConfigs
</ExcludeFoldersFromDeployment>

MsBuild Version Gotchas – Better Building with MsBuild scripts and Build Servers

I recently ran into issues when upgrading an old codebase from the original setup (Visual Studio 2013 and the MsBuild tools v4.0) to Visual Studio 2015, building through Team City on a new machine that only had VS2015. The MsBuild scripts I had been using for building (mostly creating the deployment package) and actual deployment (calling MsDeploy) needed a few tweaks – mostly because I had taken and adapted a boilerplate MsBuild script from an earlier project, and left in some stupid hacks that worked well until they didn’t.

My build process was to use Team City to compile, package, and later deploy (via MsDeploy) a web app to Azure, using MsBuild scripts to manage the actual solution build/package/deploy specifics.

For the purposes of this post, Team City happens to be my build server of choice. If the detail of your build process is stored and versioned with the code (and not the build server configs) then switching build servers should be easy.

So this post is some note-to-future-self gotchas for the next time a tools upgrade breaks the build…

MsBuild.exe path and version

The first issue was to check the location of MsBuild.exe itself. As this wasn’t already in my path, I had been specifying it explicitly in a command prompt set up for locally debugging build scripts, using a “commandPrompt.bat” with the following line:

@%comspec% /k "SET PATH=C:\Windows\Microsoft.NET\Framework\v4.0.30319;%PATH%"

So the first fix is to correct that path and use the latest version – “C:\Program Files (x86)\MSBuild\14.0\Bin”.

Setting the local path to MsBuild isn’t strictly required – normally I’d build locally through Visual Studio, but I like being able to test the build scripts locally without having to push them to the build server, and wrapping the build specifics in an MsBuild script lets me do this. The important thing is that Visual Studio and the Team City build should compile the code with the same MsBuild engine, so there are no surprises.

Whenever I run the MsBuild script eg. with a

msbuild WebPackage.msbuild /t:BuildPackage

I get the MsBuild engine version reported, but to help debug my scripts, I tend to put some kind of “Info” target in to report paths and settings, eg:

<Target Name="Info">
 <Message Text="MsBuildExtensionsPath=$(MsBuildExtensionsPath)" />
 <Message Text="MSBuildToolsVersion=$(MSBuildToolsVersion)"/>
 <Message Text="VisualStudioVersion=$(VisualStudioVersion)"/>
 <Message Text="VSToolsPath=$(VSToolsPath)"/>
</Target>

Now I can run:

msbuild WebPackage.msbuild /t:Info

If the latest MsBuild is on the path, I should get latest version reported (v14.0).

Also, in my .csproj file I found references in a PropertyGroup to “VisualStudioVersion” and “VSToolsPath” settings.

<PropertyGroup>
 <VisualStudioVersion Condition="'$(VisualStudioVersion)' == ''">10.0</VisualStudioVersion>
 <VSToolsPath Condition="'$(VSToolsPath)' == ''">$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)</VSToolsPath>
</PropertyGroup>

Your build scripts shouldn’t really be tying themselves to specific tools versions; it’s better to set these values and pass them in to any running MsBuild process.

MsBuild Scripts Calling MsBuild

Noticed another problem – I had my MsBuild script calling the MsBuild.exe directly:

<Exec Command="$(MSBuildExe) $(SolutionName).sln /t:clean"/>

…and because I couldn’t guarantee MsBuild.exe in the path, I cheated and had my MsBuild script setting the MsBuild.exe path to a default if it wasn’t overridden:

<MSBuildExe Condition=" '$(MSBuildExe)'=='' ">C:\Windows\Microsoft.NET\Framework\v4.0.30319\msbuild.exe</MSBuildExe>

So checking the output, I was using the old MsBuild engine version. Oops. I could probably have allowed the .exe location to be specified either on the path, as an argument to the script, or as a default in the script, but that’s a hassle. Luckily, to free myself from nasty tools path dependencies, I can rewrite my MsBuild calls as:

<MSBuild Projects="$(SolutionName).sln" Targets="clean"/>

The general format is:

<MSBuild Projects="filename" Targets="target" Properties="prop1=val1;prop2=val2"/>

This should mean that whatever MsBuild engine you use to kick off the script is the one that gets used throughout. I have had issues with a few quotes around property values (solution: don’t quote property values), but it should be possible to express all MsBuild calls using the MsBuild task.
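
For example, the package build from earlier could be expressed with the MsBuild task like this (same properties as the command-line version, no quotes around the values):

<MSBuild Projects="WebApp\WebApp.csproj"
 Targets="Build"
 Properties="DeployOnBuild=True;PublishProfile=DeploymentPackage.pubxml;ProfileTransformWebConfigEnabled=False;Configuration=Release"/>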

Importing MsBuild Tasks

Ok, for some reason – probably relating to some hacky web.config transforming I had done on previous projects (I have done terrible things to get applications to deploy and to work around the “configure-build-deploy” workflow of web.configs) – I had the following UsingTask declaration in my MsBuild script:

<UsingTask TaskName="TransformXml" AssemblyFile="C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.Tasks.dll"/>

This was actually referencing a non-existent DLL:

C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.Tasks.dll

which should be:

C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v14.0\Web\Microsoft.Web.Publishing.Tasks.dll

There’s another issue here – this will almost certainly require Visual Studio installed on the build server, or a cut down install eg. http://www.nuget.org/packages/MSBuild.Microsoft.VisualStudio.Web.targets/

Turns out I no longer needed this task for building this project, so I could just omit the line. But there will be times when your build script needs access to those extra tasks. If you can, avoid having your build script know exactly where on the build server to look for stuff.

Team City Settings

So finally, I had to go into my Team City build configuration for every build step that used MsBuild and upgrade the MsBuild tools version to the latest. Also check that you’re using the same version throughout the build (I ran into problems compiling and packaging the same web app with different MsBuild versions in different Team City build steps).

Summary

Things to watch for:

  • Wrong MsBuild version on path – to debug, create an “Info” target in your build script and echo property values to console / build server log
  • Specifying the MsBuild version in your script – use the “MsBuild” task instead of direct MsBuild.exe calls, so you don’t have to specify the .exe location on the path or in your script
  • References to a specific Visual Studio / tools version that may not be installed – eg. common MsBuild targets
  • Copying-and-pasting a previous build script and not cleaning out the unused stuff when you find a better way of doing things

Also – limit the use of MsBuild: try to keep it simple, and use it for building, packaging, and including/excluding/copying files to be deployed. Handle deployment and other complex tasks through Powershell.

Including Additional / Generated Files When Publishing a .Net Web Application

When you publish a .Net web app, you typically set up a publish profile, either to do an immediate build-and-publish, or to publish to a package for later deployment (eg. single-trunk / build-once-deploy-many-times scenarios).

So for building a package, the MsBuild syntax would look like:

msbuild WebApp\WebApp.csproj /t:Build /p:VisualStudioVersion=11.0;DeployOnBuild=True;PublishProfile="DeploymentPackage.pubxml";ProfileTransformWebConfigEnabled=False;Configuration=Release;

You just use MsBuild and ask your project to build itself. The good bit is that your project knows what it contains, and therefore what needs deployed. The bad bit is that your project knows what it contains, and therefore what needs deployed… Any files generated by an external tool, or any other files you didn’t explicitly tell Visual Studio about (ie. listed as content in the .csproj file), are excluded. Thankfully, you can include extra stuff in your deployment package with a few lines of MsBuild. While you can probably hack the .csproj, the easiest way I’ve found is to plug some MsBuild directly into the publish profile.

This location does make sense – you’re specifying what gets published after all. Note that if you have several publish profiles (maybe you have branch-based deployment and per-environment build-and-publish) then things get more complicated. I’m going to stick with the simple case – a single publish profile for creating a generic deployment package that we can then deploy (with appropriate configuration) to any target environment.

I recently had to solve this problem for handling some generated CSS, but you can use it for any non-solution files you want to include as part of the web application deployment package zip.

A quick warning – this does involve editing a generated file (the .pubxml file that you probably only create once).

Tweaking The Publish Profile

So assume I have some tools generating CSS (in “/webproj/content/styles”) and extra text files (in “/webproj/content/documentation”). I don’t even need to set up tools to generate these files – I can actually just simulate by manually adding some dummy files. I’m assuming that the files get generated within the project directory (because you want to deploy them to IIS or Azure when you deploy your site).

Anyway, create the extra files within the project directory, don’t tell Visual Studio or your project about them, and then build the deployment package. You’ll see they get omitted. We can fix that.

Go edit your publish profile, and add these lines inside the “PropertyGroup” element – we need to add a “CopyAllFilesToSingleFolderForMsdeployDependsOn” property.

 <CopyAllFilesToSingleFolderForMsdeployDependsOn>
 IncludeCustomFilesInPackage;
 $(CopyAllFilesToSingleFolderForMsdeployDependsOn);
 </CopyAllFilesToSingleFolderForMsdeployDependsOn>

This goes off and calls a target called “IncludeCustomFilesInPackage” that we’ll create in a minute. The name of the “CopyAllFilesToSingleFolderForMsdeployDependsOn” property is important (MsBuild will look for it), the name of the custom “IncludeCustomFilesInPackage” target can be whatever we want (change the names and see what happens).

There is also a “CopyAllFilesToSingleFolderForPackageDependsOn” property, but this seems to get ignored even when building in package mode. For completeness, it would look like this:

<CopyAllFilesToSingleFolderForPackageDependsOn>
 IncludeCustomFilesInPackage;
 $(CopyAllFilesToSingleFolderForPackageDependsOn);
</CopyAllFilesToSingleFolderForPackageDependsOn>

But you don’t seem to need it – just add the “CopyAllFilesToSingleFolderForMsdeployDependsOn”.

Now we can go define our “IncludeCustomFilesInPackage” target, so add this to the .pubxml file, inside the “Project” element:

<Target Name="IncludeCustomFilesInPackage">
 <Message Text="Collecting custom files..."/>
 <ItemGroup>
  <CustomFiles Include="Content\styles\**\*" />
  <CustomFiles Include="Content\documentation\**\*" />
  <FilesForPackagingFromProject Include="%(CustomFiles.Identity)">
   <DestinationRelativePath>%(CustomFiles.Identity)</DestinationRelativePath>
  </FilesForPackagingFromProject>
 </ItemGroup>
 <Message Text="Add to package %(FilesForPackagingFromProject.Identity)"/>
</Target>

Add as many entries as you need into the ItemGroup. I added a couple of “message” calls to output progress.

So when you run your package build and inspect the final .zip package that gets created (check in the “obj” directory eg. “projectName/obj/projectName/projectName.zip”), it should contain all those extra files that your .csproj didn’t know about.

Troubleshooting

When you’re setting this up, you might run into problems getting the paths correct. You can always add in a load of Message calls, and redirect the output of your MsBuild run to a text file.
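
MsBuild’s built-in file logger saves messing about with shell redirection, eg:

msbuild WebApp\WebApp.csproj /t:Build /fl /flp:logfile=package-build.log;verbosity=detailed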

You can add the following in your custom target for debugging purposes:

<ItemGroup>
 <GeneratedIncludeFiles Include="Content\documentation\**\*" />
</ItemGroup>
<Message Text="Generated files to include: %(GeneratedIncludeFiles.Identity)"/>

A note about “DestinationRelativePath” – I have seen this specified as “%(RecursiveDir)%(Filename)%(Extension)” instead of reusing “%(CustomFiles.Identity)”, but I had trouble getting that variant to actually include the custom files.