Bad developers plagiarise, Good developers steal

I’ve been reading the conversation that Scott Hanselman kicked off about Good developers vs Google developers and I have to say that my own feelings are pretty much summed up by those of Rick Strahl.

Except for one point.

Like many of these issues there is no black and white, but there is a lot of grey. Rick et al. have pointed out the advantages of using a reference library, such as the meta-library provided to us by the internet, but my point is that it's not only desirable, but actually necessary, in order to be a good developer.
Without the reference to see many examples, how can one determine what makes code better or worse?
Without somebody proposing good design, how do we identify bad?

In short, how do we avoid being an isolationist set of teams, with many local maxima, instead of a cohesive profession where we can all benefit?

And with that thought in mind I’m really glad I went to tonight’s LDNUG.

Using SpecFlow isn’t necessarily a good practice

There’s been a batch of questions on StackOverflow recently on the SpecFlow questions feed that follow a similar pattern.

All these questions have one thing in common: they are definitely not using BDD. I’m not sure what they are doing; in one case, SpecFlow is used to set up a test run and gather performance data, but there is never a comparison or an assertion. I’d struggle to even call it testing, as the test must be performed manually by looking at the data.
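
For contrast, the Then step is exactly where a comparison belongs. As a minimal sketch (the scenario wording, step text and names here are hypothetical, not taken from any of those questions), a SpecFlow binding that actually asserts might look like:

using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class AccountSteps
{
    private decimal _balance;

    [Given(@"my account balance is (.*)")]
    public void GivenMyAccountBalanceIs(decimal balance)
    {
        _balance = balance;
    }

    [When(@"I deposit (.*)")]
    public void WhenIDeposit(decimal amount)
    {
        _balance += amount;
    }

    [Then(@"my balance should be (.*)")]
    public void ThenMyBalanceShouldBe(decimal expected)
    {
        // Without an assertion like this, the scenario can never fail,
        // and a test that can never fail isn't testing anything.
        Assert.AreEqual(expected, _balance);
    }
}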

But all this is a micro-level abstraction of a bigger problem, which it looks like Liz Keogh is feeling at the moment. Her latest post Behaviour Driven Development – Shallow and Deep definitely indicates that this phenomenon is not isolated to technical individuals trying out new ways of doing things and getting it wrong, but that those habits seem to then pass on to the organisations that they work for.

I love the way that Liz talks about this problem in the context of having a conversation with somebody else. In fact I might just go for a little chat myself.  

The Deployment maturity model

A long time ago I came across the Personal Threading Maturity Model and I still keep referring back to it as a measure of how much further I have to go. Today, however, it struck me that I could probably do with some other yardsticks to show how far off nirvana some of the things I’m doing really are. With that in mind I present the Deployment Maturity Model: my take on how things often work, how I can see them working if I really push, and how things work in organisations that get it.

Unaware

Uses manual deployment strategies, such as xcopy/ad-hoc SQL, to change what they want where they want. Usually the production deployment is a bespoke process that is never practised in advance in other environments. This unfortunately makes deployments time consuming and error prone.

Authorisation/sign-off and security are totally incidental. Sometimes they are considered, but only the personal high standards of good developers prevent the process from being abused.

Post-release testing is achieved by manually using the production system and confirming the new functionality. Rollback is achieved by taking a backup copy at the start of the release, and/or providing SQL rollback scripts.

Casual

Introduction of packages, such as MSIs, ensures that some components of a deployment can be upgraded as a single unit, such as an application plus its shared dlls. However, for multi-tier applications the other tiers are commonly tightly coupled to these changes, e.g. database updates, so they require downtime and synchronisation between tiers. Packages can be deployed in dev, test and production, but often require additional manual interaction, such as switching back-end tier or service connections and making changes to the database.

Authorisation to release is always considered at this level, with external RFC/change management systems being the norm, but security/the ability to release to production is still often in the hands of the developer. Post-release testing is still manual. Rollback can now be achieved in part by redeploying the previous version’s packages, although a common problem is that ad-hoc changes have been made to configuration files, which need to be re-applied or manually backed up before the release occurs.

Rigid

Begins using deployment management software, which ensures that all components of a release are released at the same time. Releases happen much faster and more reliably, as the process is automated and a consistent process is used across development, test and production environments. Configuration should be a product of the deployment process, with values injected during release to associate tiers/services together. All ad-hoc production changes should be banned.

Security is now locked down, with no write access to production for developers being the norm. Tools, however, are still not joined up between tracking the changes included (e.g. Jira), gathering authorisation (e.g. Remedy) and performing the release (e.g. OctopusDeploy). Downtime is reduced, as releases can be staggered within tiers. Strategies such as load balancing are used to keep the application constantly available while individual services are rotated out of service for upgrade.

Rollback is by re-releasing the previous version. Testing should also change to use automated test packs that can be run as soon as new environments are available, primarily in test environments, but also with shakedown packs that can be run in production post-release.

Flexible

Additional opportunities to utilise infrastructure are employed due to the ease with which deployments can now be achieved. Opportunities to create multiple instances of a tier or service are used, so that current and previous versions are always available, e.g. directing certain clients to a new front end, or pointing certain servers to new back-end services.

Applications and services are written to support switching new features on/off for subsets of usage, e.g. per client, or redirecting to alternate service instances; i.e. switching on a new version is a runtime application setting change, not a deployment.
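
As a minimal sketch of what that kind of runtime switch might look like (the class, client ids and URLs here are hypothetical, not from any particular toggle library):

using System;
using System.Collections.Generic;

// A per-client toggle: which service version a client sees is a
// runtime lookup, so cutover and rollback need no deployment.
public class FeatureToggle
{
    private readonly HashSet<string> _pilotClients;

    public FeatureToggle(IEnumerable<string> pilotClients)
    {
        _pilotClients = new HashSet<string>(pilotClients, StringComparer.OrdinalIgnoreCase);
    }

    public string ServiceUrlFor(string clientId)
    {
        // Pilot clients are directed at the new service instance;
        // everyone else stays on the previous, still-live version.
        return _pilotClients.Contains(clientId)
            ? "https://service-v2.example.com"
            : "https://service-v1.example.com";
    }
}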

Deployment tools now start to link up, with orchestration of how/when to perform the release becoming an additional feature. Some systems may even allow you to define the future state of your environment, and let the software work out which pieces need releasing.

Release testing can now be performed in production with pilot subsets, in advance of migrating true production processing on to the new version, and releases are then staggered by moving subsets over a few at a time. Rollback is now trivial: use the instance of the service that is still on the previous version, which remains live until its later decommissioning.

Optimising

Support for changing services and tiers is embraced across the entire architecture. Infrastructure is fully utilised to support multiple versions of service instances and ensure balanced load. Since the release process is proven and trivial, continuous deployment should be used to automatically deploy every candidate release that meets the quality gates to production, so that change is incremental. Load balancing is used heavily, so that release downtime is zero and the impact from a release, e.g. on load performance, is negligible.

Deployment tools need to be completely joined up or face irrelevance, e.g. authorisation to release simply becomes one of the quality gates leading to automated release and, with a good testing framework, could be given automatically if the criteria for acceptance can be defined and verified electronically.

Rollback happens automatically should issues be detected. Rolling forward is as trivial as it can be.

Some links

If you read only one link, start with this one http://timothyfitz.com/2009/02/10/continuous-deployment-at-imvu-doing-the-impossible-fifty-times-a-day/

Using NuGet 2.5 to deliver unmanaged dlls

A little while ago I looked at a way to deliver unmanaged dlls via NuGet 2.0. Well, now there’s NuGet 2.5, and they’ve only gone and fixed all the problems.

The two problems left unsolved from before were:

  • How to package the set of dlls
  • How to inject our new build target into msbuild so that it copies over our dlls

Introducing native dlls

The first new feature in 2.5 is support for native projects. It’s quite simple: underneath your lib folder you can now add a new type, native. The first thing you will notice when you try this out is that you can’t just ship a set of lib\native files; you need something else.

Could not install package ‘MyPackage 1.0.1’. You are trying to install this package into a project that targets ‘.NETFramework,Version=v4.5’, but the package does not contain any assembly references or content files that are compatible with that framework. For more information, contact the package author.

You can simply add a content folder with a readMe.txt or similar to get around this issue.

Installing a package built like this reveals that, although the files end up copied into the packages\myPackage.vx.x.x\lib\native folder, nothing happens to them when you run a build.

Automatically including .props and .targets

The second new feature is automatic import of MSBuild .targets and .props files. In short, if you add a .props file in the build folder of your package, it gets added to the start of your .csproj, and if you add a .targets file, it gets added to the end. This is where I refer back to my earlier post: we can now copy over that little script.


<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="AfterBuild">
    <ItemGroup>
      <MyPackageSourceFiles Include="$(MSBuildProjectDirectory)\..\Packages\FakePackage\*.*" />
    </ItemGroup>
    <Copy SourceFiles="@(MyPackageSourceFiles)"
          DestinationFolder="$(OutputPath)" />
  </Target>
</Project>
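
Putting the pieces together, the package just needs the dummy content file, the native dlls and the auto-imported targets. A sketch of a .nuspec that produces that layout might look like this (the id, file names and paths are illustrative; note that NuGet only auto-imports a .targets file named after the package id):

<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyPackage</id>
    <version>1.0.1</version>
    <authors>al</authors>
    <description>Delivers an unmanaged dll via NuGet 2.5.</description>
  </metadata>
  <files>
    <!-- dummy content file, to satisfy the framework compatibility check -->
    <file src="readMe.txt" target="content" />
    <!-- the unmanaged dll itself -->
    <file src="fake.externalDll" target="lib\native" />
    <!-- auto-imported into the end of the consuming .csproj by NuGet 2.5 -->
    <file src="MyPackage.targets" target="build" />
  </files>
</package>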

Using NuGet to supply non managed dlls

NuGet is great for supplying managed dlls, but refuses to allow you to add references to unmanaged ones. Yet it would still be the perfect format for shipping them.

There are already some NuGet packages out there that do clever things; one of my favourites is OctoPack from OctopusDeploy. Instead of just adding some dlls, this NuGet package modifies your .csproj so that, if you are doing a release build, it will call extra commands to package up your build artefacts into a deployable package. Now this gives me an idea.

First, the test

First of all we’ll start with a very simple program, which will tell us if our wonderful external dll has been made available at run time or not.

using System;
using System.IO;

class Program
{
    static void Main(string[] args)
    {
        const string FakeFile = "fake.externalDll";

        var exists = File.Exists(FakeFile);
        var output = string.Format("{0} {1} found in {2}", FakeFile, exists ? "is" : "isn't", Environment.CurrentDirectory);
        Console.WriteLine(output);
        Console.ReadKey();
    }
}

Basically, if we F5 the solution, we will now get told whether we have successfully hacked something together.

Iteration 1

Now to start with, I’m not going to have a finished product. So I’m going to start by manually changing my .csproj. Later iterations will script this.

If you’ve ever unloaded and edited a .csproj then you’ve probably already seen the commented-out section at the bottom:

<!-- To modify your build process, add your task inside one of the targets below and uncomment it. 
       Other similar extension points exist, see Microsoft.Common.targets.
  <Target Name="BeforeBuild">
  </Target>
  <Target Name="AfterBuild">
  </Target>-->

After a little judicious digging into MSBuild, we find that this is a placeholder for some <Task> definitions, so we can add the equivalent of a post-build step:

<Target Name="AfterBuild">
  <ItemGroup>
    <MyPackageSourceFiles Include="$(MSBuildProjectDirectory)\..\Packages\FakePackage\*.*" />
  </ItemGroup>
  <Copy SourceFiles="@(MyPackageSourceFiles)"
        DestinationFolder="$(OutputPath)" />
</Target>

And the resulting output

fake.externalDll is found in c:\users\al\documents\visual studio 2012\Projects\DllReferencingDemo\DllReferencingDemo\bin\Debug

Reactive UI: Doing It Better

I thought I had the hang of ReactiveUI, but when I got told I was Doing It Wrong™ I took the opportunity to reconsider how I might start Doing It Better.

“Never write anything in the setter in ReactiveUI other than RaiseAndSetIfChanged – if you are, you’re definitely Doing It Wrong™” – Paul Betts

With that in mind I wondered how I might achieve that. Firstly, it’s going to make all my properties much simpler; in fact my initial thought was how to wire a ViewModel property back to its model property if I can’t use the setter. In the past I would have used:

// This is wrong
public bool IsSet 
{
    get { return _model.IsSet; }
    set 
    { 
        this.RaisePropertyChanging(x => x.IsSet);
        _model.IsSet = value;
        this.RaisePropertyChanged(x => x.IsSet); 
    }
}

However that’s not very observable. So if instead we stick with the idea of making the property only do the notification, then we need something to update the model property on the view model change. Well that sounds just like an observable to me.

public bool IsSet 
{ 
    get { return _IsSet; } 
    set { this.RaiseAndSetIfChanged(x => x.IsSet, value); } 
}

and in the constructor

this.ObservableForProperty(x => x.IsSet).Subscribe(x => _model.IsSet = x.Value);

Now we have a building block that enables us to build up the complicated functionality that we need.
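
And because the property now only raises notifications, the same building block composes further. As a quick sketch (the Status property here is hypothetical, using the same vintage of the ReactiveUI API as above), a derived read-only property can hang off the same stream:

// Backing helper for a derived, read-only property.
private readonly ObservableAsPropertyHelper<string> _status;
public string Status
{
    get { return _status.Value; }
}

// In the constructor: Status is recalculated whenever IsSet changes.
_status = this.WhenAny(x => x.IsSet, x => x.Value ? "Set" : "Not set")
              .ToProperty(this, x => x.Status);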

7 years of good development in 5 minutes

First, here’s the idea of TDD

http://www.jamesshore.com/Blog/Red-Green-Refactor.html


Now here’s the concept of strict TDD; this example uses NUnit, which is “A Good Thing”.

http://gojko.net/2009/02/27/thought-provoking-tdd-exercise-at-the-software-craftsmanship-conference/


Now you need something to try it out on, so here’s a kata

http://www.butunclebob.com/ArticleS.UncleBob.TheBowlingGameKata (you want to click on the word “Here” to get the slides)


So here’s an exercise:

1. Spend 20 minutes using the kata to write some code – literally set a timer. Don’t worry if you don’t get very far.

2. Delete your code.

3. Repeat every day for a week and compare your progress. (If you are keen, repeat more often, but it needs a whole week to distil.)


Watch this presentation

http://gojko.net/2011/02/04/tdd-breaking-the-mould/


Still happy with what you did?


Now, the point of the kata is that it removes the business analysis element from your coding to turn it into an exercise, which isn’t very real world, so…


Introducing BDD

http://dannorth.net/introducing-bdd/


Download SpecFlow http://www.specflow.org/ 


Try the kata again, but this time in BDD; again, try it out for a week or so…


Remember this slide from the presentation? Well, the outer circle implies business-level tests (think SpecFlow), and the inner circle implies technical-level tests (think something simpler, maybe NUnit). Again, try it out for a while.


And finally, talk to me about your experiences…