When it comes to running automated tests of my public Puppet code, TravisCI has long been my favourite solution. It’s essentially a zero-infrastructure second pair of eyes on all my changes. It also doesn’t have any of my local environment oddities, so it provides a more realistic view of how my changes will impact users. I’ve had two Puppet testing scenarios pop up recently that turned out to be the same technical issue once you start exploring them: running tests against the Puppet version I use and support, and against other versions I’m not so worried about. Read on →
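
As a rough sketch of the kind of build matrix I mean (the Ruby and Puppet versions below are purely illustrative, and PUPPET_GEM_VERSION assumes a Gemfile that reads that environment variable):

```yaml
# .travis.yml (sketch): test against the Puppet I support, allow newer ones to fail
language: ruby
rvm:
  - 2.3.1
env:
  - PUPPET_GEM_VERSION="~> 4.8.0"   # the version I use and support
  - PUPPET_GEM_VERSION="~> 4.10.0"  # newer, interesting but not blocking
matrix:
  allow_failures:
    - env: PUPPET_GEM_VERSION="~> 4.10.0"
script: bundle exec rake spec
```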

When it comes to rubygems that help with my testing, I’m a massive fan of the relatively unknown Timecop. It’s a well-written, highly focused gem that lets you control and manipulate the date and time returned by a number of Ruby methods. In specs where testing requires certainty about ‘now’ it’s become my favoured first stop. The Puppet deprecate function is a good example of where I’ve needed this functionality. Read on →
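
A minimal sketch of how I use it in an RSpec-style spec, freezing ‘now’ so anything derived from Time.now is predictable (the date itself is arbitrary):

```ruby
require 'timecop'

describe 'something time sensitive' do
  # Freeze the clock before each example and restore it afterwards so
  # Time.now returns a known, fixed value inside the spec.
  before { Timecop.freeze(Time.local(2016, 11, 1, 12, 0, 0)) }
  after  { Timecop.return }

  it 'sees the frozen time' do
    expect(Time.now.year).to eq(2016)
  end
end
```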

As part of refreshing my old Puppet modules I’ve started to convert some of my Puppet templates from the older ERB format to the newer, and hopefully safer, Embedded Puppet (EPP). While it’s been a simple conversion in most cases, I did quickly find myself lacking the ability to select a template based on a hierarchy of facts, something I’ve previously used multitemplate to address. So I wrote a Puppet 4 version of multitemplate that wraps the native EPP function and adds the matching lookup logic, and imaginatively called it multi_epp. Read on →
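
As a purely hypothetical usage sketch (I’m not reproducing the exact multi_epp signature here, just the shape of the fact-based fall-through), think of something along these lines:

```puppet
# Hypothetical sketch: try the most specific template first, fall back to
# a default, and pass a parameter hash through to whichever EPP wins.
file { '/etc/motd':
  ensure  => file,
  content => multi_epp([
    "mymodule/motd_${facts['os']['name']}.epp",
    "mymodule/motd_${facts['os']['family']}.epp",
    'mymodule/motd_default.epp',
  ], { 'greeting' => 'Managed by Puppet' }),
}
```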

One of the things you’ll often read in web operations books is the idea that while you’re experiencing downtime your customers are fleeing in droves and, out of frustration, taking their orders to your competitors. However, this isn’t always the truism people take it for. If your outages are rare, and your site is normally performant and easy to use (or has a monopoly), you’ll find this behaviour a lot less common than you’ve been told. Read on →

One of the hidden gems of GitHub is Jess Frazelle’s Dockerfiles repo, a collection of Dockerfiles for applications she runs in containers to keep her desktop clean and minimal. While reading the NMap Dockerfile I noticed a little bit of shell I’d not seen before; the full file is included in the post itself. The line in question is && rm -rf /var/lib/apt/lists/*, a tiny bit of shell that does some additional cleanup once apt has installed the required packages. Read on →
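
Stripped down to just the shape of the pattern (the base image and package list here are illustrative, not her actual file), it looks something like this:

```dockerfile
FROM debian:jessie

# Install in a single layer, then delete the apt package index files so the
# cached lists don't end up baked into the final image.
RUN apt-get update && apt-get install -y --no-install-recommends \
        nmap \
    && rm -rf /var/lib/apt/lists/*
```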

While migrating and upgrading an old install of Jenkins to version 2, the topic of adding some new views came up in conversation, and the quite shiny Jenkins CI Build Monitor Plugin was suggested as a pretty, and quick to deploy, option. Using some canned test jobs we did a manual deploy of the plugin and configured a view on our testing machine, and I have to say it looks as good, and is as easily readable from a few desks away, as we’d hoped. Read on →

A number of roles ago the operations and developer folk were blessed with a relatively inexperienced quality assurance department that were, to put it kindly, focused on manual exploratory testing. They were, to a person, incapable of writing any kind of automated test, running third-party tools or doing anything in a reproducible way. While we’ve all worked with people lacking in certain skills, what makes this story one of my favourites is that none of us knew they couldn’t handle the work. Read on →

One of the difficulties in technically mentoring juniors you don’t see on a near-daily basis is ensuring the right level of challenge and learning. It’s surprisingly easy for someone to get blocked on a project, or to keep themselves too deep inside their comfort zone, and essentially halt their progress for extended periods of time. An approach I use to help avoid this stagnation is keeping a “Development Diary”. Read on →

One of my favourite forthcoming Terraform 0.8 features is the ability to restrict which versions of Terraform a configuration can be run by. Terraform is a rapidly moving project that constantly introduces new functionality and providers. Unless you’re careful, read the change logs, and ensure everyone is running the same minor version (or you run Terraform from a central point such as Jenkins), you can easily find yourself staring at large screens of errors caused by using a resource that’s in Terraform master but not in the version you’re running locally. Read on →
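
The feature itself is just a setting inside the terraform block; the version constraint below is only an example:

```hcl
# Refuse to run this configuration with anything outside the 0.8 series.
terraform {
  required_version = ">= 0.8.0, < 0.9.0"
}
```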

In a large Puppet code base you’ll eventually end up with a scattering of time-based ‘magic numbers’ such as cache expiry values, zone file TTLs and recurring job schedules. You’ll typically find these dealt with in one of a few ways. The easiest is to ignore the problem and leave a hopefully guessable literal value (such as 3600). The other path often taken is the dreaded value-and-comment pairing that’s easily missed during updates: it starts off as 86400 # seconds in a day and over time drifts into 3600 # seconds in a day. Read on →
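
One simple alternative, and not necessarily the approach the post itself settles on, is to derive the value from a named expression so the intent can’t quietly drift away from the number:

```puppet
# Sketch: spell the arithmetic out once rather than repeating bare literals.
# The file path and setting name are hypothetical, for illustration only.
$seconds_per_hour = 60 * 60
$cache_ttl        = 6 * $seconds_per_hour

file { '/etc/myapp/cache.conf':
  ensure  => file,
  content => "ttl=${cache_ttl}\n",
}
```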