A while back I wanted to set up database migrations on a .NET project I was working on. I had previously been using Roundhouse but, to be honest, I didn't like it.
Too much 'PowerShell-foo' and a reliance on script naming conventions, plus it didn't work flawlessly with a group of developers and source control. And don't even dare forget to mark your script as 'Build Action – Content', because the whole walls of Jericho come tumbling down if you don't.
I wanted a replacement that worked for me. After a bit of research I came across FluentMigrator.net. At first I couldn't grok it, though it felt a bit like some of the stuff I'd seen in Ruby Migrations demos. I'd also used Subsonic (which also has migrations), but there were a few niggly questions I had, namely:
How to work with an existing (mature) database?
How to deploy the migrations to production?
How to manage a dependency on .NetTiers (historical yuck)?
I posted the question on StackOverflow but it never got any love, nor any responses. Anyway, I managed to solve this problem, and this blog post documents my path through to the solution.
Most of my problems were down to a lack of understanding of how FluentMigrator.net actually works. The GitHub page outlines what FluentMigrator is quite well:
Fluent Migrator is a migration framework for .NET much like Ruby Migrations. Migrations are a structured way to alter your database schema and are an alternative to creating lots of sql scripts that have to be run manually by every developer involved. Migrations solve the problem of evolving a database schema for multiple databases (for example, the developer’s local database, the test database and the production database). Database schema changes are described in classes written in C# that can be checked into version control.
The wiki is missing a short overview of how it works though. So I’ll have a stab at outlining it here:
FluentMigrator allows developers to create up and down migration scripts using a 'fluent' interface in C#, which is a language most C# developers are familiar with! Most basic SQL commands, such as those to create or update schema, are supported. Examples would be the creation or alteration of a table, adding an index, or deleting a foreign key. It supports more complex schema and data changes through embedded or inline scripts.
The NuGet package includes an executable called migrate.exe, which runs against your compiled assembly. It scans the assembly for migrations to run, orders them by migration id, checks a table in the database to see which ones have already been run, and then runs each outstanding migration in turn until the database is upgraded or downgraded to the correct version (as required). Migrate.exe takes a number of command line parameters, which let you set things like the database connection string and the assembly to run against.
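To make the fluent interface concrete, here is a minimal sketch of a migration class (the migration id, class, table, and column names are all hypothetical; it assumes the FluentMigrator NuGet package is referenced):

```csharp
using FluentMigrator;

// The migration id (a timestamp here) determines the order migrations run in.
[Migration(20130101120000)]
public class AddCustomerTable : Migration
{
    public override void Up()
    {
        // Fluent schema creation: a table with an identity PK and an index.
        Create.Table("Customer")
            .WithColumn("Id").AsInt32().PrimaryKey().Identity()
            .WithColumn("Name").AsString(100).NotNullable();

        Create.Index("IX_Customer_Name").OnTable("Customer").OnColumn("Name");
    }

    public override void Down()
    {
        // The reverse of Up, so the database can be migrated back down.
        Delete.Table("Customer");
    }
}
```

You would then run migrate.exe against the compiled assembly, passing the connection string, provider, and assembly path; the exact switch names vary between FluentMigrator versions, so check the runner's help output for your version.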
What is the right way to import an existing database schema?
I couldn’t find a ‘right way’, but I did find a way that worked for me! I made the following decisions:
I set up my first 'iteration' as an empty database. The reasoning for this is that I can always migrate back down to nothing.
I scripted off the entire database as a baseline. I included all tables, procs, constraints, views, indexes, etc. I setup my first iteration as that baseline. I chose the CREATE option without DROP. This will be my migration up.
I ran the same script dump but chose the DROP-only option. This will be my migration down.
The baseline migration just has to use the EmbeddedScript method to execute the attached script (I organised the scripts into iteration folders as well).
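Putting those decisions together, the baseline migration is a sketch along these lines (the embedded resource names and iteration folder are hypothetical; the .sql files must be embedded resources in the assembly):

```csharp
using FluentMigrator;

// Iteration 1: the baseline of the existing (mature) database.
[Migration(1)]
public class Baseline : Migration
{
    public override void Up()
    {
        // The full CREATE-only dump of tables, procs, constraints, views, indexes, etc.
        Execute.EmbeddedScript("Iteration001.BaselineCreate.sql");
    }

    public override void Down()
    {
        // The same dump scripted as DROP only, taking the schema back to empty.
        Execute.EmbeddedScript("Iteration001.BaselineDrop.sql");
    }
}
```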
For each ‘sprint’ (Agile) I create a new iteration. It helps to keep track of which migrations are to be expected in the following or preceding releases.
Baseline database solved…
How to deal with .NetTiers
Ok, this was somewhat of a challenge. I created a specific .NetTiers database which I would use to run the .NetTiers code generation. In FluentMigrator you can ‘tag’ migrations. I decided to tag based on environments. Hence I have a ‘tiers’ tag as well as tags for ‘dev’, ‘test’, ‘uat’, ‘prod’, etc. How these get run will follow later.
When making schema changes I create the migration and use the tag 'tiers' to mark the .NetTiers schema change. I then run migrate.exe out of Visual Studio's external tools using that specific tag as a parameter. The app.config database connection that matches my machine name is the one used, so I point it at the tiers database. Now that my migrate up has run, my .NetTiers source database is ready, and I can run the .NetTiers CodeSmith code generation tool to produce the new DLLs.
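A tagged migration uses FluentMigrator's Tags attribute, something like this sketch (the id, class, table, and column names are hypothetical):

```csharp
using FluentMigrator;

// This migration only runs when migrate.exe is invoked with a matching tag,
// e.g. the 'tiers' tag used for the .NetTiers source database.
[Tags("tiers")]
[Migration(20130215093000)]
public class AddOrderTotalColumn : Migration
{
    public override void Up()
    {
        Alter.Table("Order").AddColumn("Total").AsDecimal(18, 2).Nullable();
    }

    public override void Down()
    {
        Delete.Column("Total").FromTable("Order");
    }
}
```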
Note: if you are using a build server such as TeamCity, you can simply check that migration change into your VCS and a build trigger can automatically run the .NetTiers build (you need CodeSmith on the build server, though).
You can then replace the current .NetTiers DLLs with the new ones, either automatically if you have the build server generating them, or manually if you run the CodeSmith generator yourself.
What is the right way to deploy migrations to a production environment?
I am using Octopus Deploy and, to be perfectly honest, if you are deploying .NET applications, especially to multiple servers, this should be your absolute go-to tool for doing so! The moving parts are:
Octopack, a program that wraps up your application as a NuGet package.
A TeamCity plugin that makes TeamCity build the NuGet package and offer it as an artifact exposed on a NuGet feed.
Octopus Deploy then consumes that NuGet feed and can deploy those packages to the endpoint servers. Part of this deployment process is running a PreDeploy and PostDeploy Powershell script. In here is where I am going to run the migrate.exe application with my specific tags:
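A PostDeploy.ps1 along those lines might look like this sketch (the Octopus variable name for the connection string and the exact migrate.exe switch names are assumptions; switches vary between FluentMigrator versions):

```powershell
# PostDeploy.ps1 - run the migrations that match the current environment.
# $OctopusEnvironmentName is provided by Octopus Deploy at deploy time,
# and my tags ('dev', 'test', 'uat', 'prod') are named to match it.
$connectionString = $OctopusParameters["DatabaseConnectionString"]  # hypothetical variable name

& .\migrate.exe `
    /connection $connectionString `
    /provider sqlserver2008 `
    /assembly ".\Database.Migrations.dll" `
    /tag $OctopusEnvironmentName
```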
Notably, my $OctopusEnvironmentName values match my tags, so each environment deployment runs against the correct database migration target. You can just run the database migrations project as a step in an Octopus Deploy project: you simply select the Database.Migrations package (which is the name of my project) from the NuGet feed server.
Most of the data we work with has geographic longitude/latitude information, so we have recently started exploring geospatial queries using different database technologies (MySQL, Oracle). We currently use MongoDB as our primary data store, and we were surprised to find out that MongoDB already has this functionality built in. After doing some reading we also realised that it handles each of our use cases. Even though we use MongoDB every day, we just never thought a document store would come with this functionality. For more information go to MongoDB Geospatial Indexing.
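As a rough illustration of what that looks like from .NET, here is a sketch using the official MongoDB C# driver (collection name, field name, and coordinates are all hypothetical, and the query shape assumes a GeoJSON 'location' field):

```csharp
using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Driver.GeoJsonObjectModel;

var client = new MongoClient("mongodb://localhost:27017");
var collection = client.GetDatabase("geo").GetCollection<BsonDocument>("places");

// A 2dsphere index enables geospatial queries on the 'location' field.
var keys = Builders<BsonDocument>.IndexKeys.Geo2DSphere("location");
collection.Indexes.CreateOne(new CreateIndexModel<BsonDocument>(keys));

// Find documents near a point (longitude first in GeoJSON), within ~5km.
var point = GeoJson.Point(GeoJson.Geographic(-73.97, 40.77));
var filter = Builders<BsonDocument>.Filter.NearSphere("location", point, maxDistance: 5000);
var nearby = collection.Find(filter).ToList();
```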