
Tue, 25 Mar 2014

Managing CloudFormation Stacks with Ansible

Constructing a large, multiple application, virtual datacenter with CloudFormation can quickly lead to a sprawl of different stacks. The desire to split things sensibly, delegate control of separate tiers and loosely couple as many components as possible can lead to a large number of stacks, lots of which need values from stacks created earlier in the run order. While it's possible to do this with the native AWS CloudFormation command line tools, or even some clever bash (or Cumulus), having a strong, higher level tool can make life a lot easier and more reproducible. In this post I'll show one possible way to manage interrelated stacks using Ansible.

We won't be delving into the individual templates used in this example. If you're having this kind of issue with CloudFormation then you probably have more than enough of your own to use as examples. Instead, I'll show a basic Ansible playbook for managing three related stacks.

- hosts: localhost
  connection: local
  gather_facts: False
  vars:
    stack_name: dswtest
    region: eu-west-1
    owner: dwilson
    ami_id: ami-n0tr34l
    keyname: key-64
    snsdest: test@example.org

The first part of our playbook should be familiar to most Ansible users. We set up where to run the playbook, how to connect and ensure we don't spend time gathering facts. We then define the variables that we'll be using as parameters to a number of stacks. The ability to specify literals in a single place was the first benefit I saw when converting a project to Ansible. This may not sound like a major win but being able to change the AMI ID in a single place, or even store it in an external file that our build system can automatically update, is something I'd find difficult to give up.
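As a quick sketch of that external-file idea, a small script could let the build system rewrite the AMI ID in a vars file. The file layout and function name here are my own invention, not part of the playbook above:

```python
# Sketch: let a build system update the AMI ID in an external vars file.
# The file contents and function name are assumptions for illustration.
import re

def update_ami_id(vars_text, new_ami):
    """Replace the value of the ami_id key in a simple YAML vars file."""
    return re.sub(r"(?m)^(ami_id:\s*).*$", r"\g<1>" + new_ami, vars_text)

vars_file = "ami_id: ami-n0tr34l\nkeyname: key-64\n"
print(update_ami_id(vars_file, "ami-0abc1234"))
```

The playbook then only ever reads the single, canonical value.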

Now we'll move to the first of our Ansible tasks, a CloudFormation stack represented as a single Ansible resource. The underlying template creates a basic SNS resource we'll later use in all our auto-scaling groups.

  - name: Add SNS topic
    action: cloudformation
      stack_name={{ stack_name }}-sns-email-topic
      state=present
      region={{ region }}
      template=sns-email-topic.json
    args:
      template_parameters:
        AutoScaleSNSTopic: "{{ snsdest }}"
    register: asgsns

The 'args:' section contains the values we want to pass in to the template. Here we're only passing a single value that we defined earlier in the 'vars:' section. We'll see more complicated examples of this later. We also register the output from the CloudFormation action. This includes any values we specify as "Outputs" in the template and provides a nice way to deliberately define what we're exposing from our template. The alternative is to pull out arbitrary values from a given resource created in a previous stack but that's a hefty breach of encapsulation and will often bite you later when the templates change.

The 'Create Security Groups' CloudFormation task doesn't really have anything new from an Ansible perspective: we run it, create the security groups and gather the outputs using 'register' for use in our next template.

  - name: Create Security Groups
    action: cloudformation
      stack_name={{ stack_name }}-security-groups
      state=present
      region={{ region }}
      template=security-groups.json
    register: secgrp

The 'Create Webapp' example below shows most of the basic CloudFormation resource features in a single task. We use variables defined at the start of the playbook to reduce duplication of literal strings. We prefix the stack names to allow multiple developers to each build full sets of stacks without duplicate stack name conflicts while keeping grouping simple in the AWS web dashboard.

  - name: Create Webapp
    action: cloudformation
      stack_name={{ stack_name }}-webapp
      state=present
      region={{ region }}
      template=webapp.json
    args:
      template_parameters:
        Owner: "{{ owner }}"
        AMIId: "{{ ami_id }}"
        KeyName: "{{ keyname }}"
        AppServerFleetSize: 1
        ASGSNSArn:             "{{ asgsns['stack_outputs']['EmailSNSTopicARN']      }}"
        WebappSGID:            "{{ secgrp['stack_outputs']['WebappSGID']            }}"
        ElasticacheClientSGID: "{{ secgrp['stack_outputs']['ElasticacheClientSGID'] }}"

In the args section we also use the return values from our previous stacks. The nested value access is a little verbose but it's easy to pick up, and being able to see all the possible values when running Ansible under debug mode makes things a lot easier. We also needed to pull down output values from stacks created outside of Ansible, so I wrote a simple Ansible CloudFormation lookup plugin.
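For illustration, the 'stack_outputs' hash that register exposes is just the template's Outputs flattened into a dictionary. A rough sketch of that shape, with invented data:

```python
# Sketch: the shape of the data 'register' gives us. The cloudformation module
# flattens the template's Outputs into a 'stack_outputs' dict; the records
# below are invented for illustration.
def outputs_to_dict(outputs):
    """Turn a list of OutputKey/OutputValue records into a flat dict."""
    return {o["OutputKey"]: o["OutputValue"] for o in outputs}

raw_outputs = [
    {"OutputKey": "EmailSNSTopicARN",
     "OutputValue": "arn:aws:sns:eu-west-1:123456789012:example-topic"},
]
asgsns = {"stack_outputs": outputs_to_dict(raw_outputs)}

# the equivalent of {{ asgsns['stack_outputs']['EmailSNSTopicARN'] }}
print(asgsns["stack_outputs"]["EmailSNSTopicARN"])
```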

So what does Ansible gain us as a stack management tool? In terms of raw CloudFormation it provides a nice way to remove boilerplate literals from each stack and define them once in the 'vars' section. The ability to register the output from a stack and then use it later on is essential for this kind of stack building, and retrieving existing values as a pythonish hash is much easier than doing it on the command line. As for added power, it should be easier to implement AWS functionality that's currently missing from CloudFormation as an Ansible module than as a CloudFormation external resource (although more on that when I actually write one), and performing other out of band tasks, letting your ticketing system know about a new stack for example, is a lot easier to integrate into Ansible than trying to wrap the CLI tools manually.

I've been using Ansible for stack management in a project that involves over a dozen separate moving parts for the last month and so far it's been working fine with minimal pain.

Posted: 2014/03/25 23:38 | /tools | Permanent link to this entry

Sat, 22 Mar 2014

Project Book Pages

I've been doing my usual quarterly sweep of the always too full bookshelves and hit the usual dilemma of what to keep, what to donate to charity and what to recycle. Among the technical books in this batch is the 'Sendmail Cookbook', something I've always kept as a good luck charm to ward off the evil of needing to work with mail servers with m4 based configuration languages.

Sendmail is one of those projects that I've not kept up with over the years. I have no idea how much has changed since the book was published over a decade ago, 2003 in this case, so I don't know if this is a useful book to pass on or if it's dangerously out of date and should be removed from circulation. It'd be handy if the larger projects maintained a page of books related to the project and a table of how relevant the material is in relation to different versions.

This would not only help me prune my shelves of older, now out of date books, but would help people new to a project pick books that were still relevant for the versions they need to learn.

Posted: 2014/03/22 15:30 | /books | Permanent link to this entry

Mon, 17 Mar 2014

Managing CloudFormation Stacks With Cumulus

Working with multiple, related CloudFormation stacks can become quite taxing if you only use the native AWS command line tools. Commands start off gently -

cfn-create-stack dwilson-megavpc-sns-emails --parameters "AutoScaleSNSTopic=testy@example.org" \
  --template-file location/sns-email-topic.json

- but they quickly become painful. The two commands below each create stacks that depend on values from resources that have been defined in a previous stack. You can spot these values by their unfriendly appearance, such as 'rtb-9n0tr34lac55' and 'subnet-e4n0tr34la'.

# Add the bastion hosts template
# (the parameter names below are illustrative)

cfn-create-stack dwilson-megavpc-bastionhosts --parameters "\
KeyName=dwilson;\
PublicSubnet=subnet-e4n0tr34la;\
RouteTable=rtb-9n0tr34lac55\
" --template-file bastion.json

# create the web/app servers
cfn-create-stack dwilson-megavpc-webapps --parameters "\
KeyName=dwilson;\
PublicSubnet=subnet-e4n0tr34la;\
RouteTable=rtb-9n0tr34lac55\
" --template-file location/webapps.json

When building a large, multi-tier VPC you'll often find yourself needing to extract output values from existing stacks and pass them in as parameters to dependent stacks. This results in a lot of repeated literal strings and boilerplate in your commands and will soon cause you to start doubting your approach.

The real pain came for us when we started adding extra availability zones for resilience. A couple of my co-workers were keeping their stuff running with bash and python + boto, but the code bases were starting to get a little creaky and complicated and this seemed like a problem that should have already been solved in a nice, declarative way. It was about the point when we decided to add an extra subnet to a number of tiers that I caved and went trawling through github for somebody else's solution. After some investigation I settled on Cumulus as the first project to experiment with as a replacement for our ever growing, hand hacked, creation scripts. To give Cumulus its proper respect, it did make life a lot easier at first.

The code snippets below show an example set of stacks that were converted over from raw command lines like the above to Cumulus yaml based configs. First up we have the base declaration and a simple stack definition.

locdsw:
  region: eu-west-1

  stacks:
    sns-email-topic:
      cf_template: sns-email-topic.json
      params:
        AutoScaleSNSTopic:
          value: testymctest@example.org

Each of the keys under 'stacks:' will be created as a separate CloudFormation stack by Cumulus. Their names will be prefixed with 'locdsw', taken from the first line of our example, and they'll be placed inside the 'eu-west-1' region. The configuration above will result in a stack called 'locdsw-sns-email-topic' appearing in the CloudFormation dashboard.

The stack's resources are defined in the template specified in cf_template. Our example does not depend on existing stacks and takes a single parameter, AutoScaleSNSTopic, with a value of 'testymctest@example.org'. Cumulus has no support for variables so you'll find yourself repeating certain parameters, like AMI ID and key ID, throughout the configuration.

For a while we had an internal branch that treated the CloudFormation templates as jinja2 templates. This enabled us to remove large amounts of duplication inside individual templates. These changes were submitted upstream but one of the goals of the Cumulus project is that the templates it manages can still be used by the native CloudFormation tools, so the patch was (quite fairly) rejected.
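As a sketch of that rejected pre-processing idea, here's roughly what expanding placeholders in a template body looks like, using Python's stdlib string.Template as a dependency-free stand-in for jinja2; the template fragment and values are invented:

```python
# Sketch of pre-processing a CloudFormation template to remove duplicated
# literals. string.Template stands in for jinja2 so this needs no extra gems
# or modules; the fragment below is invented for illustration.
from string import Template

template_body = '{ "ImageId": "$ami_id", "KeyName": "$keyname" }'
rendered = Template(template_body).substitute(ami_id="ami-n0tr34l", keyname="key-64")
print(rendered)
```

The downside, as noted above, is that the rendered output is what CloudFormation sees, so the files on disk are no longer directly usable by the native tools.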

Let's move on to the second stack defined in our config. The point of interest here is the addition of an explicit dependency on the sns-email-topic stack. Note that it's not referred to using the prefixed name, which can be a point of confusion for new users.

    security-groups:
      cf_template: security-groups.json
      depends:
        - sns-email-topic

Finally we move on to an example declaration of a larger stack, the interesting parts of which are in the params section.

    webapp:
      cf_template: webapp.json
      depends:
        - sns-email-topic
        - security-groups
      params:
        AppServerFleetSize:
          value: 1
        Owner:
          value: dwilson
        AMIId:
          value: ami-n0tr34l
        KeyName:
          value: dwilson
        ASGSNSArn:
          source: sns-email-topic
          type: output
          variable: EmailSNSTopicARN
        WebappSGID:
          source: security-groups
          type: output
          variable: WebappSGID

The webapp params section contains two different types of values. Simple ones we've seen before, 'Owner' and 'AMIId' for example, and composite ones that reference values that other stacks define as outputs. Let's look at ASGSNSArn in a little more detail.

  ASGSNSArn:
    source: sns-email-topic
    type: output
    variable: EmailSNSTopicARN

Here, inside the webapp stack declaration, we look up a value defined in the output of the previously executed sns-email-topic template. From the CloudFormation Outputs for that template we retrieve the value of EmailSNSTopicARN. We then pass this to the webapp.json template as the ASGSNSArn parameter on stack creation. If you need to pull a parameter in from an existing stack that was created in some other way you can specify it as 'source: -fullstackname'. The '-' makes it an absolute name lookup; Cumulus won't prefix the stack name with 'locdsw', for example.
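To make those lookup rules concrete, here's a small sketch of how such a parameter entry could be resolved, including the '-' absolute-name behaviour. The function and data structures are hypothetical, not Cumulus code:

```python
# Sketch of resolving a Cumulus-style parameter entry: either a literal
# 'value' or an output looked up from another stack. A leading '-' on the
# source means an absolute stack name; otherwise the group prefix is applied.
# The function and sample data are invented for illustration.
def resolve_param(spec, stack_outputs, prefix="locdsw"):
    if "value" in spec:
        return spec["value"]
    source = spec["source"]
    # '-fullstackname' bypasses the prefix; 'sns-email-topic' gets it added.
    name = source[1:] if source.startswith("-") else prefix + "-" + source
    return stack_outputs[name][spec["variable"]]

outputs = {"locdsw-sns-email-topic": {"EmailSNSTopicARN": "arn:aws:sns:eu-west-1:123:t"}}
spec = {"source": "sns-email-topic", "type": "output", "variable": "EmailSNSTopicARN"}
print(resolve_param(spec, outputs))
```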

Cumulus met a number of my stack management needs, and I'm still using it for older, longer lived stacks such as monitoring, but because of its narrow focus it began to feel restricting quite quickly. I've started to investigate Ansible as a possible replacement as it's a more generic tool and I'm in need of flexibility that'd feel quite out of place in Cumulus.

In terms of day to day operations the main issue we hit was the need to turn on ALL the debug output, in both Cumulus and boto, to see why stack creations failed. A lot of the errors returned by AWS were being caught and replaced by generic, unhelpful messages at any log level above debug, and running under debug produces a LOT of output, especially while boto idle-polls waiting for one stack creation to complete so the next can begin. The lack of any variables or looping was also an early constraint. One answer was to push the complexity down into the templates with large mapping sections, increasing the duplication of literals between templates and leading to a lot of Fn::FindInMap calls. The second was to maintain multiple configs, which was less than ideal given the number of permutations: environment (dev, stage, live), region and, in development, which developer was using it. The third option, a small pre-processor that expanded embedded jinja2 into a CloudFormation template, added another layer between writing and debugging and so didn't last very long.

If you're running a small number of simple templates then Cumulus might be the one tool you need. For us, Ansible seems to be a better fit, but more about that in the next post.

Posted: 2014/03/17 20:01 | /tools | Permanent link to this entry

Tue, 04 Mar 2014

Abstracting CloudFormation IAM with Nested Stacks

Once we started extracting applications into different logical CloudFormation stacks and physical templates, we began to notice quite a lot of duplication in our json when it came to declaring IAM rules. Some of our projects store their Puppet, Hiera and RPM files in restricted S3 buckets, so allowing stacks access to them based upon environment, region, stack name and other criteria quickly becomes quite long-winded. After looking at a couple of dozen application templates and finding that over 30% of the json was IAM related it was time to find a different approach.

One of the CloudFormation techniques I'd seen mentioned but never used before was nested CloudFormation stacks. This allows you to define an entire stack as just another resource in your template. Here's some example json that does this:

  "Resources" : {

    "IAMRolesStack" : {
      "Type" : "AWS::CloudFormation::Stack",
      "Properties" : {
        "TemplateURL" : "https://s3-eu-west-1.amazonaws.com/my-iam-rules/projectname/iam-roles-20140301.json",
        "Parameters" : {
          "Stack": "testy-webapp",
          "Type":  "webapp",
          "App":   "tinyess",
          "Env":   { "Ref" : "DeploymentEnvironment" }
        }
      }
    }
  },

You can see that a stack is declared in the same manner as all other resources. The 'TemplateURL' property must point to a URL that hosts a complete, valid CloudFormation template. This allows you to develop the nested stack in the same way as you'd progress your actual application templates and test it in isolation. For my experiments I found it easiest to store them in S3 under a basic hierarchy with a little versioning to allow multiple versions of the IAM rules to be in use at once across the stacks. The other property in the example, 'Parameters', is passed to the sub-stack at creation time as actual parameters and is what makes this approach so flexible and powerful.

Inside the nested stack template we define an AWS::IAM::Role, an AWS::IAM::InstanceProfile and a number of AWS::IAM::Policy types that are abstracted to only allow access for one app/environment combination at a time. We do this using the parameters we pass in as values at different levels of the hierarchy. This way we can ensure that every application using a specific version of the IAM roles gets exactly the same permissions, without bulk pasting them into each application's json template or hard coding any of the application specific values. It's also worth noting that, as stacks are given "CloudFormationed" IDs that include some randomness, you can have multiple versions of the nested stack in use at once with no overlap or conflicts between apps.

You can see a small extract from our sample IAM template, with the parameters interpolated into the path, here (the S3 actions and bucket name are illustrative) -

  "SecretPolicy": {
    "Type": "AWS::IAM::Policy",
    "Properties": {
      "PolicyDocument": {
        "Statement": [ {
            "Effect": "Allow",
            "Action": [ "s3:ListBucket" ],
            "Resource": [ "arn:aws:s3:::my-secret-bucket" ]
          }, {
            "Effect": "Allow",
            "Action": [ "s3:GetObject" ],
            "Resource": [
              { "Fn::Join" : [ "", [
                "arn:aws:s3:::my-secret-bucket/",
                { "Ref" : "App"   }, ".",
                { "Ref" : "Type"  }, ".",
                { "Ref" : "Env"   }, ".",
                { "Ref" : "Stack" }, ".yaml"
              ] ] },

              { "Fn::Join" : [ "", [
                "arn:aws:s3:::my-secret-bucket/",
                { "Ref" : "App"  }, ".",
                { "Ref" : "Type" }, ".yaml"
              ] ] }
            ]
        } ]
      }
    }
  },

Now that we've declared and created the nested stack let's use the IamInstanceProfile it created in the auto scaling launch configuration that lives in the containing stack.

    "AppServerFleetLaunchConfig" : {
      "Type" : "AWS::AutoScaling::LaunchConfiguration",
      "Properties" : {
        "IamInstanceProfile": { "Fn::GetAtt" : [ "IAMRolesStack", "Outputs.InstanceProfile" ] },

Accessing nested stack outputs is as simple as a call to Fn::GetAtt with the resource name of the nested stack as the first argument (IAMRolesStack in our first code snippet) and the output's name as part of the second.

So what did we get from this? A few very worthwhile things. We removed a LOT of boilerplate from all our application templates. This also makes CloudFormation application templates easier to create, as only a few people need in-depth knowledge of our IAM rules and bucketing scheme; application templates can focus on the application. And it's easier to confirm that applications have the same access rights based on the S3 bucket used, rather than diffing through lots of subtly different IAM resources.

I'm using this technique on a couple of medium size projects at the moment and so far it seems like a good way to overcome IAM json spaghetti with no large drawbacks.

Posted: 2014/03/04 22:38 | /tools | Permanent link to this entry

Sat, 01 Mar 2014

Structured Facts with Facter 2

Structured facts in facter had become the Puppet community's version of 'Duke Nukem Forever', something that's always just around the next corner. Now that the facter 2.0.1 release candidate is out you can finally get your hands on an early version and do some experimentation.

First we grab a version of facter 2 that supports structured facts from puppetlabs -

 # our play ground
 mkdir /tmp/facter && cd /tmp/facter

 # grab the code
 wget https://downloads.puppetlabs.com/facter/facter-2.0.1-rc1.tar.gz

 cd facter-2.0.1-rc1/

 # check facter runs from our expanded archive
 ruby -I lib bin/facter

This is the part where we can be underwhelmed: it's all still flat. Don't let the lack of nested facts dishearten you though. The Puppetlabs people have done all the hard work of implementing structured facts support, they've just not converted any showcase facts over yet. Instead of waiting for an official structured fact let's add our own and have a little play.

As we're experimenting with a throw away environment we'll drop the structured fact directly in to our expanded archive. In a real environment you'd never do this, you'd either use FACTERLIB or deploy your modules properly with puppet as Luke intended.

 # install the plugin
 wget https://raw.github.com/deanwilson/unixdaemon-puppet_facts/master/lib/facter/yumplugins.rb -O lib/facter/yumplugins.rb

 # and run it
 ruby -I lib bin/facter yumplugins


Well, our first TODO will be to determine how to show structured facts as strings, but we'll defer that for now as we really want to see some deep nesting. Assuming you're on a RedHat osfamily host you can run facter with the yaml output, otherwise you'll have to settle for the sample outputs below:

 $ ruby -I lib bin/facter yumplugins --yaml
 ---
 yumplugins:
   plugin:
   - blacklist
   - langpacks
   - presto
   - refresh-packagekit
   - whiteout
   enabled:
   - langpacks
   - presto
   - refresh-packagekit
   disabled:
   - blacklist
   - whiteout

  # and now try it as json

 $ ruby -I lib ./bin/facter yumplugins -j
 {
   "yumplugins": {
     "plugin": [
       "blacklist",
       "langpacks",
       "presto",
       "refresh-packagekit",
       "whiteout"
     ],
     "enabled": [
       "langpacks",
       "presto",
       "refresh-packagekit"
     ],
     "disabled": [
       "blacklist",
       "whiteout"
     ]
   }
 }

Success! Structured fact output! From (nearly) Puppet! Of course, this is only a release candidate for Facter 2 so we're not production ready yet but as a taster of what's coming and a way to get ahead and start converting your own facts it's a lovely, and amazingly overdue, gift.

As for writing structured facts: as you can see from my yumplugins fact example, there's no difference between a structured and an unstructured fact apart from the value it returns.

Posted: 2014/03/01 16:04 | /tools/puppet | Permanent link to this entry

Automatic CloudFormation Template Validation with Guard::CloudFormation

One of the nice little conveniences I've started to use in my daily work with Amazon Web Services CloudFormation is the Guard::CloudFormation ruby gem.

The Guard gem "is a command line tool to easily handle events on file system modifications" which, simply put, means "run a command when a file changes". While I've used a number of different little tools to do this in the past, Guard presents a promising base to build more specific test executors on so I've started to integrate it in to more aspects of my work flow. In this example I'm going to show you how to validate a CloudFormation template each time you save a change to it.

The example below assumes that you already have the AWS CloudFormation command line tools installed, configured and available on your path.

# our example, one resource, template.

$ cat example-sns-email-topic.json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description" : "Creates an example SNS topic",

  "Parameters" : {
    "AutoScaleSNSTopic": {
      "Description": "Email address for notifications.",
      "Type": "String",
      "Default": "testymctest@example.org"
    }
  },

  "Resources" : {
    "EmailSNSTopic": {
      "Type" : "AWS::SNS::Topic",
      "Properties" : {
        "DisplayName" : "Autoscaling notifications for Location service",
        "Subscription": [ {
          "Endpoint" : { "Ref" : "AutoScaleSNSTopic" },
          "Protocol" : "email"
        } ]
      }
    }
  }
}

# now, do a manual run to ensure all the basics are working

$ cfn-validate-template --template-file example-sns-email-topic.json
PARAMETERS  AutoScaleSNSTopic  testymctest@example.org  false  Email address for notifications.

Now that we have confirmed our CloudFormation tools are working we can sprinkle some automation all over it.

# install the required gems

gem install guard guard-cloudformation

# and then create a basic Guardfile
# in this case we watch all .json files in the current directory

cat << 'EOC' > Guardfile
guard "cloudformation", :templates_path => ".", :all_on_start => false do
  watch(/.+\.json$/)
end
EOC

# run guard

$ guard 
10:07:49 - INFO - Guard is using NotifySend to send notifications.
10:07:49 - INFO - Guard is using TerminalTitle to send notifications.
10:07:49 - INFO - Guard is now watching at '/home/cfntest/.../cloudformations/location'
[1] guard(main)> 

Now that guard is up and running open up a second terminal to the directory you've been working in. We'll now make a couple of changes and watch Guard in action. First we'll make a small change to the text that shouldn't break anything.

# run the sed command to change the email address - shouldn't break

$ sed -i -e 's/testymctest@example.org/test@example.org/' example-sns-email-topic.json

# in the term running Guard we see -

10:12:31 - INFO - Validating: example-sns-email-topic.json
Validating example-sns-email-topic.json...
PARAMETERS  AutoScaleSNSTopic  test@example.org  false  Email address for notifications.

On my desktop the validation output is a lovely terminal green and I also get a little pop-up in the corner telling me the validate was successful. Leaving Guard open, we'll run a breaking change.

# run the sed command to remove needed quotes - this will not end well

$ sed -i -e 's/"Ref" :/Ref :/' example-sns-email-topic.json

# in the term running Guard we see -

10:13:51 - INFO - Validating: example-sns-email-topic.json
Validating example-sns-email-topic.json...
cfn-validate-template:  Malformed input-Template format error: JSON not well-formed. (line 19, column
       [--template-file  value ] [--template-url  value ]  [General Options]
For more information and a full list of options, run "cfn-validate-template --help"
FAILED: example-sns-email-topic.json

The 'FAILED: example-sns-email-topic.json' line is displayed in less welcome red, the dialog box pops up again and we know that our last change was incorrect. While this isn't quite as nice as having vim run the validation in the background and take you directly to the erroring line, it's a lot easier to plumb in to your tool chain and gives you 80% of the benefit for very little effort. For completeness we'll reverse our last edit to fix the template.

# sed command to fix the template

$ sed -i -e 's/Ref :/"Ref" :/' example-sns-email-topic.json

# in the term running Guard we see -

10:22:42 - INFO - Validating: example-sns-email-topic.json
Validating example-sns-email-topic.json...
PARAMETERS  AutoScaleSNSTopic  test@example.org  false  Email address for notifications.

One last config option that's worth noting is ':all_on_start => false' from the Guardfile. If this is set to true then, as you'd expect from the name, all CloudFormation templates that match the watch will be validated when Guard starts. I find the validations quite slow, and I often only dip in to a couple of templates, so I leave this off. If you spend more focused time working on nothing but templates then setting it to 'true' gives you a nice early warning in case someone checked in a broken template. Although your git hooks shouldn't allow this anyway. But that's a different post.

After reading through the validation errors from a couple of days' work it seems my most common issue is continuation commas. It's just a shame that CloudFormation doesn't allow trailing commas everywhere.
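One cheap way to catch those comma errors without a round trip to AWS is a plain JSON parse; a sketch using only Python's stdlib:

```python
# Sketch: catch trailing-comma and similar JSON syntax errors locally before
# running cfn-validate-template, using only the stdlib json module.
import json

def check_template(body):
    """Return None if the JSON parses, or a short error description."""
    try:
        json.loads(body)
        return None
    except ValueError as err:
        return str(err)

print(check_template('{ "Description": "ok", }'))  # trailing comma: an error message
print(check_template('{ "Description": "ok" }'))   # valid JSON: None
```

This won't catch CloudFormation-specific mistakes, but it turns the most common class of typo into an instant local failure.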

Posted: 2014/03/01 10:44 | /tools | Permanent link to this entry

Wed, 12 Feb 2014

Config Management Camp: Juju Surprises

One of the biggest surprises of Config Management Camp 2014 for me was how interesting Canonical's orchestration management tool, Juju, has become. Although I much preferred the name 'Ensemble'.

I attended the Juju session in an attempt to keep myself out of the Puppet room and was pleasantly surprised at how much Juju had progressed since I last looked at it. Rather than being another config management solution it allows you to model your systems using "charms", which can be implemented using anything from a bash script to a set of chef/puppet cookbooks/modules. Instead it focuses on ensuring that they run across your fleet in a predictable way while enforcing dependencies, even over multiple tiers, no matter how many tools you choose to use underneath.

Listening to the presentations it seems Juju has some very well thought out parts. Multiple callback hooks, triggered on state changes, are used as an orchestration back channel between different hosts, and the services they provide flowed nicely in the demos. The web dashboard was very polished and had some very shiny canvas magic that could be borrowed by other tools. I also liked the command line interface for linking different tiers and associating supporting roles, such as tying wordpress instances to a mysql back end. There is also some cloud provider performance abstraction at work, where you can request a certain amount of resources and Juju will map that to the closest instance type in whichever provider you're currently using.

I was only in the room for a couple of the talks but both the Canonical staff were a credit to their company. The material was well presented, they managed to answer all the audience's questions and you get the impression that they'd be a nice project to work with. Hopefully I'll have a chance to play with the platform some more in the future.

Posted: 2014/02/12 23:52 | /tools | Permanent link to this entry

Mon, 13 Jan 2014

Ansible Configuration Management Book - Short Review

I'm still new to Ansible and while it's been interesting seeing how people are starting to use the tool, picking up bits and pieces from different blog posts is a little too hit and miss for my learning needs. When I spotted Ansible Configuration Management (PacktPub) I decided to take the plunge and see if it could provide me with a more consistent introduction. And it did.

This book makes an ideal first stop for anyone wanting to learn Ansible. While it's a short book (92 pages, and even less than that of actual content) it provides a very good introduction and overview of at least your first month's experience of Ansible. While no chapter is the only coverage of its subject you'll ever need, the book introduces enough of the concepts and features to be the best starting guide for Ansible I've seen so far. I found that each chapter filled in a number of gaps in my understanding of how Ansible should be used.

If you're looking at introducing Ansible to your team this book is far from the worst way to do it. Its coverage is broad enough that you'll probably get a few re-reads out of it as you bring more of your infrastructure under Ansible control and start to evolve your needs from basic playbooks to more advanced role composition. It's worth noting that this isn't a cookbook, it's not going to hand hold you through using each of the built in modules. For the more experienced sysadmins looking for a quick way to learn Ansible this is a boon as it keeps the page count down.

I'd have liked to see more coverage of extending Ansible; the last chapter provides a basic introduction but it's not enough for what I need. That'd be a good subject for a second book once the testing tool chain and the like has progressed to a more mature place. Score - 7/10

Posted: 2014/01/13 21:50 | /books | Permanent link to this entry

Sat, 11 Jan 2014

Ansible CloudFormation Lookup Plugin

As the Ansible/AWS investigations continue I had the need to lookup outputs from existing CloudFormation stacks. I spent ten minutes reading through the existing lookup plugins and came up with the Ansible CloudFormation Lookup Plugin.

I'm not sure this is going to be our final solution. Michael DeHaan suggested that moving to a fact plugin might be better in terms of cleaner usage and easier testing, so I'm at the least going to implement a trial version of that. I was quite surprised at how easy writing an Ansible lookup plugin was though, even for someone with my limited python skills.

Once you've downloaded and installed the plugin, using it in your templates is as simple as

    {{ lookup('cloudformation', 'region/stackname/output/outputname') }}

    # and an actual example:
    {{ lookup('cloudformation', 'eu-west-1/unixdaemon-natinstance/output/nat_group_id') }}

It uses boto under the covers and expects to find your credentials as environment variables. This is only a tiny chunk of code but it's allowed us to continue on with the evaluations while gaining a little more comfort in our ability to extend Ansible to suit our needs.
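For illustration, the 'region/stackname/output/outputname' key can be split apart before any boto call is made. This parsing sketch is hypothetical and only mirrors the usage shown above, not the plugin's actual code:

```python
# Sketch: splitting the lookup key format shown above into its parts. The
# real plugin would then use boto, with credentials from the environment,
# to fetch the named output from the named stack in the named region.
def parse_lookup_key(key):
    region, stack, kind, name = key.split("/")
    return {"region": region, "stack": stack, "type": kind, "name": name}

parts = parse_lookup_key("eu-west-1/unixdaemon-natinstance/output/nat_group_id")
print(parts["stack"])
```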

Posted: 2014/01/11 12:49 | /tools/ansible | Permanent link to this entry

Mon, 06 Jan 2014

Learning AWS OpsWorks - Short Book Review

I picked up a copy of Learning AWS OpsWorks during the PacktPub holiday sale. It was cheap, short and covered an AWS product that I'd never had need to dig in to and knew very little about.

The book takes you through creating a basic stack, the layers inside it and deploying an application to managed instances. Its coverage is very high level and doesn't really go beyond a cursory explanation of the services used. As you'd expect from the page count, it doesn't delve into either the Amazon services you use or how to make Chef do your bidding, instead sticking to its focus and giving you just enough information to get the example working and not much else. It's worth mentioning that the console screenshots are already out of date, so you need to do a little exploring on your own as you follow the steps.

Learning AWS OpsWorks is a brief but informative high level overview of AWS OpsWorks and how you'd use it to create and manage basic stacks. I don't think it's worth the full price; a Safari account would be quite useful here. It's also very unlikely you'll need to read it more than once, so it's not great value for money. It does however present the concepts in an easy to understand way, so if you're looking to pick up basic OpsWorks in a big rush it's the only real competition to the official docs.

Posted: 2014/01/06 10:26 | /books | Permanent link to this entry

Fri, 03 Jan 2014

Introducing CloudFormation Conditionals

Back in November 2013 Amazon added a much requested feature to CloudFormation: the ability to conditionally include resources or their properties in a stack. As an example, I'm currently using this as a small cost saving measure to ensure only my production RDS instances have PIOPS applied to them while still being able to build each environment from a single template.

CloudFormation Conditionals live in their own section of a CloudFormation template. The example below shows two ways of setting values for use in a condition: a simple string comparison, and a longer composite comparison that includes an or. Each of these is based on a provided parameter.

  "AWSTemplateFormatVersion": "2010-09-09",

  "Parameters" : {
    "DeploymentEnvironment" : {
      "Default": "dev",
      "Description" : "The environment this stack is being deployed to.",
      "Type" : "String",
      "AllowedValues" : [ "dev", "stage", "prod" ],
      "ConstraintDescription" : "Must be one of the following environments - dev | stage | prod"
    }
  },

  "Conditions": {
    "InProd": { "Fn::Equals" : [ { "Ref" : "DeploymentEnvironment" }, "prod" ] },

    "NotInProd": {
      "Fn::Or" : [
        { "Fn::Equals" : [ { "Ref" : "DeploymentEnvironment"}, "stage" ] },
        { "Fn::Equals" : [ { "Ref" : "DeploymentEnvironment"}, "dev" ] }
      ]
    }
  },

Our first example is the conditional inclusion of an entire resource -

  "Resources" : {

    "WebappSG": {
      "Type": "AWS::EC2::SecurityGroup",
      "Condition" : "NotInProd",
      "Properties": {
        "GroupDescription": "Location webapp servers",
        "SecurityGroupIngress": [ {
          ... snip ...
        } ]
      }
    }
  }
The key part of this snippet is the 'Condition' line. If the named condition evaluates to true the resource is created when the template runs. If the condition is false the entire resource is skipped. As a second example we'll show how to conditionally include a single property value, in this case the 'Iops' property of an AWS::RDS::DBInstance resource.

  "Resources" : {

    "MySQL" : { 
      "Type" : "AWS::RDS::DBInstance",
      "DeletionPolicy" : "Snapshot",
      "Properties" : { 
        "DBName"               : "testy",
        "AllocatedStorage"     : "5",
        "DBInstanceClass"      : "db.m2.4xlarge",
        "MasterUsername"       : "testy",
        "MasterUserPassword"   : "testy",
        "DBSecurityGroups"     : [ "rds" ],
        "Engine"               : "MySQL",
        "EngineVersion"        : "5.6",
        "MultiAZ"              : "true",
        "Iops" : {
          "Fn::If" : [ "InProd",
            "1000",
            { "Ref" : "AWS::NoValue" }
          ]
        }
      }
    }
  }

If InProd is true the Iops property is included and set to 1000. If the value of InProd is false then the special value of 'AWS::NoValue' is returned. This causes the property itself to be completely excluded.
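For anyone who finds the JSON intrinsic functions hard to scan, the semantics of those conditions can be mirrored in a few lines of plain Python. This is purely an illustration of the evaluation logic, not anything CloudFormation itself runs:

```python
# Plain-Python illustration of how the InProd / NotInProd conditions
# from the template above resolve for each DeploymentEnvironment value.

def in_prod(environment):
    # "InProd": Fn::Equals [ DeploymentEnvironment, "prod" ]
    return environment == 'prod'

def not_in_prod(environment):
    # "NotInProd": Fn::Or of two Fn::Equals checks
    return environment == 'stage' or environment == 'dev'

def iops_property(environment):
    # "Iops": Fn::If [ "InProd", "1000", AWS::NoValue ]
    # None stands in for AWS::NoValue - the property is dropped entirely.
    return '1000' if in_prod(environment) else None
```

So a 'prod' deployment gets the PIOPS setting and skips the WebappSG security group, while 'dev' and 'stage' do the reverse - all from one template.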

CloudFormation Conditions are quite a new feature, and I was a little late in discovering it, so we've only just started to use them in our templates. They are however worth learning about as they provide a flexible new way to structure your templates.

Posted: 2014/01/03 12:51 | /cloud | Permanent link to this entry

Wed, 01 Jan 2014

Pragmatic Investment Plan - 2013 / 2014

I changed jobs midway through 2013 and quite quickly discovered that I'd been a little too buried in my role and not been keeping up other parts of my technical interests. As a first step I decided to put a very basic Pragmatic Investment Plan in place. Mostly as a simple way to ensure I actually started to get involved in non-work things again.

Firstly I set myself the task of recording which books I actually read. I only counted books I finished and still ended up reading nearly 90 books in the last year, totalling over 22000 pages. In 2014 I'm going to set myself a minimum goal of 15000 pages to ensure I actually make the time to both finish books and read non-technical material.

In terms of technical reading I wanted to ensure I read an average of one a month so I set myself a goal of 12 books. I managed to finish 15 in the end so I'm happy with that. In the last six months I've also tech reviewed 4 books, all of which are due out in 2014. I've not done a proper tech review of a book for years and it made a nice change of pace.

I decided to try and add some content to my websites. And failed. Over the year I managed to publish a measly 29 blog posts to Unixdaemon.net, less than one a week on average. This year I'm hoping to make at least 52. As for Puppet Cookbook there's been a set of issues with the hosting and site generation that I've not had the time or energy to resolve so it's been sitting idle. Something I really need to fix in 2014 as parts of the sample code are not puppet-lint compliant, which annoys me. I did manage to get a mention in Ansible Weekly from one post so I'm slightly happier than I should be considering my output.

I think that having nothing to talk about for months at a time is a sign that I'm approaching some tasks wrongly - especially considering I'm now working for a company that is a lot more open about discussing our code, techniques and projects. I didn't even mention a number of tools that I've published that might be useful to other people. At the very least a monthly release announcement would have kept better track of what I've been working on and where it lives.

In terms of getting out and about I had a very mixed bag last year. My attendance at user group events was very poor. A London Devops and the CloudCamp London SDN special were it for the year. I only managed to get to the latter as Greg Ferro was speaking and as a big fan of the Packet Pushers podcast the opportunity was too good to miss. My conference attendance was much better -

I was very fortunate to make it to both Devops Days in Mountain View and Velocity SF. I had a chance to meet a lot of the people I'd only spoken to online and meet some of my new co-workers, as well as catch up with old friends. The travel was a pain but it worked out to be well worth it. I was impressed by a number of the Netflix team I had a chance to speak with and finally got to see Andrew Shafer host the ignites.

Due to health issues I missed both Puppet Camps in London, for which I felt very slack, and donated my Monitorama EU ticket to the CentOS Project. I also had to cancel a number of other conference visits. In 2014 I need to be a lot pickier about my travel, so I'm resigned to missing a lot of great content. My first conferences of the year will be FOSDEM and Config Management Camp as I've been to Belgium enough times to feel safe(ish) with the travel.

I set myself the goal of getting at least two of my internal tools cleaned up and pushed to github. I managed to get a lot more code than this public but large parts of it were either very beta tools or small examples. I've started to deprecate a lot of my own code and adopt the functionally closest open source project instead to cut down my maintenance costs. I've sent a number of patches upstream and I assume I'll continue if not increase this in 2014.

I also had the lack of certifications on my CV pointed out to me a couple of times this year. I don't have a strong academic background (understatement warning!) and it was mentioned that I could balance this with some relevant certs. I'm not a keen test taker so I only managed to bag a couple of basic ones, the very intro level VCA-DCV and the newish CompTIA Cloud+. I'm unsure of the best direction to take with this in the future - maybe another RedHat specialisation would complement my role - so for now I'll just say I'd like another one or two as CV fodder.

In general I'll probably start the year with the similar basic categories in my 2014 PiP and refine them in to more focused monthly / quarterly batches as I discover what 2014 has planned for me. I don't feel comfortable trying to plan for a whole 12 months ahead but if I leave just the section headings I'll end up bouncing around too much. I've had some interesting chats with Paul Nasrat about this and while our approaches are quite different it's been great to have someone to talk this over with.

Well, that's my basic review of my 2013. I hope you all have a wonderful 2014 and thank ye kindly for visiting my site. Dean Wilson.

Posted: 2014/01/01 20:51 | /career | Permanent link to this entry


Copyright © 2000-2013 Dean Wilson :: RSS Feed