
Wed, 26 Nov 2014

Use Ansible to Expand CloudFormation Templates

After a previous comment about "templating CloudFormation JSON from a tool higher up in your stack" I had a couple of queries about how I'm doing this. In this post I'll show a small example that explains the workflow. We're going to create a small CloudFormation template, with a single embedded Jinja2 directive, and call it from an example playbook.

This template creates an S3 bucket resource and dynamically sets the "DeletionPolicy" attribute based on a value in the playbook. We use a file extension of '.json.j2' to distinguish our pre-expanded templates from those that need no extra work. The line of interest in the template itself is "DeletionPolicy": "{{ deletion_policy }}". This is a Jinja2 directive that Ansible will interpolate and replace with a literal value from the playbook, helping us move past a CloudFormation Annoyance, Deletion Policy as a Parameter. Note that this template has no parameters; we're doing the work in Ansible itself.

    $ cat templates/deletion-policy.json.j2
    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Description": "Retain on delete jinja2 template",

      "Resources": {

        "TestBucket": {
          "DeletionPolicy": "{{ deletion_policy }}",
          "Type": "AWS::S3::Bucket",
          "Properties": {
            "BucketName": "my-test-bucket-of-54321-semi-random-naming"
          }
        }
      }
    }

Now we move on to the playbook. The important part of the preamble is the deletion_policy variable, where we set the value for later use in the template. We then move on to the two essential tasks and one housekeeping task.

    $ cat playbooks/deletion-policy.playbook
    - hosts: localhost
      connection: local
      gather_facts: False
      vars:
        template_dir: "../templates"
        deletion_policy: "Retain" # also takes "Delete" or "Snapshot"

Because the Ansible CloudFormation module doesn't have an inbuilt option to process Jinja2 we create the stack in two stages. First we process the raw Jinja2 document and write an intermediate file with the directives expanded. We then run the CloudFormation module using the newly generated file.

  - name: Expand the CloudFormation template for future use.
    local_action: template src={{ template_dir }}/deletion-policy.json.j2 dest={{ template_dir }}/deletion-policy.json

  - name: Create a simple stack
    # stack_name and region are required by the module; the values here are examples
    cloudformation: >
      stack_name=deletion-policy-example
      region=eu-west-1
      template={{ template_dir }}/deletion-policy.json

The final task is an optional little bit of housekeeping. We remove the file we generated earlier.

  - name: Clean up the local, generated, file
    file: name={{ template_dir }}/deletion-policy.json state=absent
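Outside Ansible, the whole expand-then-create flow can be sketched in a few lines of Python. This is a toy stand-in for the template module, using a regex rather than real Jinja2, purely to show the shape of the two-stage process; the template fragment is illustrative.

```python
import json
import re

def expand(raw, variables):
    """Toy '{{ var }}' expander standing in for Ansible's template module."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda match: variables[match.group(1)], raw)

raw_template = '{"TestBucket": {"DeletionPolicy": "{{ deletion_policy }}"}}'
expanded = expand(raw_template, {"deletion_policy": "Retain"})
parsed = json.loads(expanded)  # the intermediate file is now plain JSON
```

Once the intermediate document parses as JSON it can be handed to any CloudFormation tool that expects a normal template.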

We've only covered a simple example here, but if you're willing to commit to preprocessing your templates you can add a lot of flexibility, and heavily reduce the line count, using techniques like this. Creating multiple subnets in a VPC, adding their route associations and such, is another good place to introduce them.

Posted: 2014/11/26 13:39 | /cloud | Permanent link to this entry

Mon, 24 Nov 2014

CloudFormation Annoyance: Deletion Policy as a Parameter

You can create some high value resources using CloudFormation that you'd like to ensure exist even after a stack has been removed. Imagine being the admin who accidentally deletes the wrong stack and has to watch as the RDS master, and all the prod data, slowly vanishes into the void of AWS reclaimed volumes. Luckily AWS provides a way to reduce this risk, the DeletionPolicy attribute. By specifying this on a resource you can ensure that if your stack is deleted then certain resources survive and function as usual. This also helps keep down the number of stacks stuck in the "DELETE_FAILED" state if you try to remove a shared security group or such.

    {
      "Resources": {

        "TestBucket": {
          "DeletionPolicy": "Retain",
          "Type": "AWS::S3::Bucket",
          "Properties": {
            "BucketName": "MyTestBucketOf54321SemiRandomName"
          }
        }
      }
    }

Once you start sprinkling this attribute through your templates you'll probably feel the need to have it vary between staging and prod. While it's a lovely warm feeling to have your RDS masters in prod be a little harder to accidentally kill, you'll want a clean tear down of any frequently created staging or developer stacks, for example. The easiest way to do this is to make the DeletionPolicy take its value from a parameter, probably using code like that below.

    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Description" : "Retain on delete test template",

      "Parameters" : {

        "RetainParam": {
          "Type": "String",
          "AllowedValues": [ "Retain", "Delete", "Snapshot" ],
          "Default": "Delete"
        }
      },

      "Resources": {

        "TestBucket": {
          "DeletionPolicy": { "Ref" : "RetainParam" },
          "Type": "AWS::S3::Bucket",
          "Properties": {
            "BucketName": "MyTestBucketOf54321SemiRandomName"
          }
        }
      }
    }

Unfortunately this doesn't work. You'll get an error that looks something like cfn-validate-template: Malformed input-Template format error: Every DeletionPolicy member must be a string. if you try to validate your template (and we always do that, right?).
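If you want to catch this before a template ever reaches AWS, the rule is mechanical enough to check locally. The helper below is hypothetical, not part of any AWS tooling; it simply asserts that every DeletionPolicy member is a literal string.

```python
import json

def deletion_policies_are_strings(template_text):
    """True only if every resource's DeletionPolicy is a literal string."""
    resources = json.loads(template_text).get("Resources", {})
    return all(isinstance(res.get("DeletionPolicy", ""), str)
               for res in resources.values())

literal = '{"Resources": {"TestBucket": {"DeletionPolicy": "Retain", "Type": "AWS::S3::Bucket"}}}'
via_ref = '{"Resources": {"TestBucket": {"DeletionPolicy": {"Ref": "RetainParam"}, "Type": "AWS::S3::Bucket"}}}'
```

Run over the two fragments above, the first passes and the second, the { "Ref" } version, fails, mirroring the validator's complaint.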

There are a couple of ways around this; the two I've used are below. The first is templating your CloudFormation JSON from a tool higher up in your stack, Ansible for example. The downside is that your templates are unrunnable without expansion. A second approach is to double up on some resource declarations and use CloudFormation Conditionals. You can then create the same resource, with the DeletionPolicy set to the appropriate value, based on the value of a parameter. I'm uncomfortable with this in case resources are removed on stack updates when the wrong parameters are ever passed to your stack, so I prefer the first option.

Even though there are ways to work around this limitation it really feels like something that 'Should Just Work', and as a CloudFormation user I'll be a lot happier when it does.

Posted: 2014/11/24 13:22 | /cloud | Permanent link to this entry

Sat, 22 Nov 2014

AWS CloudFormation Parameters Tips: Size and AWS Types

While AWS CloudFormation is one of the best ways to ensure your AWS environments are reproducible it can also be a bit of an awkward beast to use. Here are a couple of simple time saving tips for refining your CFN template parameters.

The first one is also the simplest: always define at least a MinLength property on your parameters, and ideally an AllowedValues or AllowedPattern. This ensures that your stack will fail early if no value is provided. Once you start using other tools, like Ansible, to glue your stacks together it becomes very easy to create a stack parameter with an undefined value. Without one of the above properties CloudFormation will happily use the null and you'll either get an awkward failure later in the stack creation or a stack that doesn't quite work.
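This convention is easy to enforce mechanically before you ever create a stack. Below is a sketch of a hypothetical lint, not an AWS or CloudFormation tool, that names every parameter missing all three guard properties.

```python
import json

def unconstrained_parameters(template_text):
    """Name the parameters lacking MinLength, AllowedValues and AllowedPattern."""
    params = json.loads(template_text).get("Parameters", {})
    guards = ("MinLength", "AllowedValues", "AllowedPattern")
    return [name for name, spec in params.items()
            if not any(guard in spec for guard in guards)]

template = """{"Parameters": {
    "KeyName": {"Type": "String"},
    "RetainParam": {"Type": "String", "AllowedValues": ["Retain", "Delete", "Snapshot"]}
}}"""
```

Here only KeyName would be flagged; RetainParam is already constrained by its AllowedValues.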

The second tip is for the parameter's Type property. While it's possible to use a 'Type' of 'String' and an 'AllowedPattern' to ensure a value looks like an AWS resource, such as a subnet ID, the addition of AWS-specific types, available from November 2014, allows you to get a lot more specific:

  # note the value of "Type"
  "Parameters" : {

    "KeyName" : {
      "Description" : "Name of an existing EC2 KeyPair",
      "Type" : "AWS::EC2::KeyPair::KeyName",
      "Default" : "i-am-the-gate-keeper"
    }
  }


This goes one step beyond 'Allowed*' and actually verifies that the resource exists in the user's account. It doesn't do this at the template validation stage, which would be -really- nice, but it does it early in the stack creation so you don't have a long wait and a failed, rolled back, set of resources.

    # a parameter with a default key name that does not exist in aws
    "KeyName" : {
      "Description" : "Name of an existing EC2 KeyPair",
      "Type" : "AWS::EC2::KeyPair::KeyName",
      "MinLength": "1",
      "Default" : "non-existent-key"
    }

    # validate shows no errors
    $ aws cloudformation validate-template --template-body file://constraint-tester.json
    {
        "Description": "Test an AWS-specific type constraint",
        "Parameters": [
            {
                "NoEcho": false,
                "Description": "Name of an existing EC2 KeyPair",
                "ParameterKey": "KeyName"
            }
        ],
        "Capabilities": []
    }

    # but after we start stack creation and check the dashboard
    # CloudFormation shows an error as the second line in events
    ROLLBACK_IN_PROGRESS    AWS::CloudFormation::Stack      dsw-test-sg
    Parameter value non-existent-key for parameter name KeyName
    does not exist. . Rollback requested by user.

Neither of these tips will prevent you from making such errors or, unfortunately, catch them at validation time. They will, however, surface the issues much more quickly during actual stack creation and make your templates more robust. Here's a list of the available AWS-Specific Parameter Types, in the table under the 'Type' property, and you can find more details in the 'AWS-Specific Parameter Types' section.

Posted: 2014/11/22 16:21 | /cloud | Permanent link to this entry

Thu, 09 Oct 2014

Facter: Ansible facts in Puppet

Have you ever needed to access Ansible facts from inside Puppet? Well, if you ever do, you can use the basic ansible_facts custom fact.

    # make sure you have ansible installed
    $ sudo puppet resource package ansible ensure=present

    # clone my experimental puppet fact repo
    $ git clone https://github.com/deanwilson/unixdaemon-puppet_facts.git

    # Try running the fact
    $ ruby -I unixdaemon-puppet_facts/lib/ `which facter` ansible_facts -j
    {
      "ansible_facts": {
        "ansible_architecture": "x86_64",
        "ansible_bios_date": "04/25/2013",
        "ansible_bios_version": "RKPPT%$DSFH.86A.0457.20DF3.0425.1251",
        "ansible_cmdline": {
        ... snip ...

While it's nice to see the output in facter, you need to make a small change to your config file to use the facts in Puppet. Set stringify_facts = false in the [main] section of your puppet.conf and you can then use these new facts inside your manifests.

    $ puppet apply -e "notify { \"Form factor: \${::ansible_facts['ansible_form_factor']}\": }"
    Notice: Form factor: Desktop

Would I use this in general production? No, never again, but it's a nice reminder of how easy facter is to extend. A couple of notes if you decide to play with this fact: I deliberately filter out non-Ansible facts, as there was something odd about seeing facter facts nested inside Ansible ones inside facter. If you foolishly decide to use this heavily, and you're running puppet frequently, adding a simple cache for the Ansible results might be worth looking at to help your performance.
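The caching idea is small enough to sketch. The real fact is Ruby; this is just the shape of the approach in Python, with a hypothetical TTL and cache path, not code from the repo.

```python
import json
import os
import time

def cached_facts(path, ttl):
    """Only re-run the expensive fact gathering when the on-disk copy is stale."""
    def wrap(gather):
        def inner():
            if os.path.exists(path) and time.time() - os.path.getmtime(path) < ttl:
                with open(path) as fh:
                    return json.load(fh)
            facts = gather()          # the slow call, e.g. shelling out to ansible
            with open(path, "w") as fh:
                json.dump(facts, fh)
            return facts
        return inner
    return wrap
```

Wrap the gathering function once and repeated puppet runs inside the TTL read the cached JSON instead of re-running Ansible.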

Posted: 2014/10/09 20:34 | /tools/puppet | Permanent link to this entry

Puppet 3.7 File Function Improvements

Puppet's always had a couple of little inconsistencies between the file and template functions. The file function has always been able to search for multiple files and return the contents of the first one found, but it required absolute paths. The template function accepts module-based paths but doesn't allow matching on the first found file, although this can be fixed with the Puppet Multiple Template Source Function.

One of the little niceties that came with Puppet 3.7 is an easily missed improvement to the file function that makes using it easier and more consistent with the template function. In earlier puppet versions you called file with absolute paths, like this:

  file { '/tmp/fakefile':
    content => file('/etc/puppet/modules/yourmodulename/files/fakefile'),
  }

Thanks to a code submission from Daniel Thornton (which fixes an issue that's been logged since at least 2009) you can now call the file function in the same way as you'd use template, while retaining support for matching the first found file.

  file { '/tmp/fakefile':
    content => file('yourmodulename/fakefile'),
  }

  # or

  file { '/tmp/fakefile':
    content => file("yourmodulename/fakefile.${::hostname}", 'yourmodulename/fakefile'),
  }
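The first-found behaviour is easy to picture outside Puppet. A rough Python equivalent, illustrative only and not Puppet's implementation:

```python
import os

def first_found(*paths):
    """Return the contents of the first path that exists, like Puppet's file()."""
    for path in paths:
        if os.path.exists(path):
            with open(path) as fh:
                return fh.read()
    raise IOError("could not find any of: " + ", ".join(paths))
```

Given a host-specific file and a generic fallback, the host-specific one wins when it exists, exactly as in the second Puppet example above.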

Although most puppet releases come with a couple of 'wow' features, sometimes it's the little ones like this, which add consistency to the platform and help clean up and abstract your modules, that you appreciate more in the long term.

Posted: 2014/10/09 17:07 | /tools/puppet | Permanent link to this entry

Sat, 04 Oct 2014

Puppet Lint Custom Checks

In the past if you wanted to run your own puppet-lint checks there was no official, really clean way to distribute them outside of the core code. Now, with the 1.0 release of puppet-lint you can write your own, external, puppet-lint checks and make them easily distributable.

I spent a little bit of time this morning reading through the existing 3rd party community plugins and, after porting a private absolute template path check over to the new system, I have to say that rodjek has done an excellent job with both the ease of writing your own checks and the quality of the developer tutorial. If you have any local style rules then now's a great time to get them represented in your puppet-lint runs.

Posted: 2014/10/04 10:29 | /tools/puppet | Permanent link to this entry

Thu, 11 Sep 2014

Puppet Certified Professional 2014 Exam

A little while ago, in a Twitter conversation, a few of us discussed the Puppet Certified Professional exam and its topic coverage. Specifically, how much of it is focused on Puppet Enterprise (PE) and whether it would either dissuade users of purely FOSS Puppet or heavily impact their chance of passing if they'd never used PE.

While I stand by my views I began to worry that my knowledge of the syllabus was based only on hearsay and the practice exam questions, and that I was being overly harsh and possibly spreading misinformation through my own ignorance. So I booked a place and took the exam a couple of days later.

The exam is multiple choice and most questions are quite direct. While there were tricky questions I only encountered one that could be either a very subtle trick or a mistake, and I've reported that upstream and received a positive response about it being investigated. The questions I had heavily pointed towards topics that you'd have to use puppet on a semi-regular basis to know the answers to.

In terms of candidate preparation, other than the obvious choice of taking PuppetLabs training courses, I think that being comfortable with all the material in Pro Puppet and having a decent six to twelve months of hands-on experience with Puppet, MCollective and PuppetDB will cover most of the scope. This also requires knowing how puppet fits together and understanding how it works, not just being able to write modules and work with the DSL. In hindsight I'd have scored higher by downloading the Puppet Enterprise VM and spending a few hours working through the GUI features. Instead I went in having never used PE and still had a decent pass. I'd also note that the practice questions mentioned above are an accurate illustration of the real exam questions' format and difficulty.

As I've only just taken the exam, and I have more than enough puppet experience on my CV already, I don't think the cert will add much to my employability, but for people with fewer years of puppet who are looking to validate their skills it's not a bad way to spend an hour. Doubly so if you can take the test for free at a local puppetcamp, in case you needed any more reasons to attend one.

Posted: 2014/09/11 12:36 | /tools/puppet | Permanent link to this entry

Simplifying an Online Presence

It is amazing how many small commitments and fragments of an online presence you can collect over years of being involved in different projects and user groups. I've ended up hosting planets, user group sites, submission forms (and other scripts), managing twitter announcement accounts, pushing tarballs (don't ask) and running (and owning) more domains than I could ever really want or do anything useful with. After an initial audit of how difficult it'd be to move some of my public servers I've realised that something has to change.

I've decided to take a deliberate step back and reduce my involvement in a number of projects, and my general online footprint, to levels that are comfortable and maintainable while leaving me enough time to get involved in some newer projects, technology and groups that are relevant to me. Although I slowly began the cleaning process a few months ago, initially by transferring domains and in some cases even deleting websites and removing their DNS, there's still quite a lot of cruft to trim.

Like most full time sysadmins, my personal systems, which thanks to Debian and Bytemark have survived many years of in-place release upgrades, are a lot more disorderly, and manual, than I'd accept at work or even in my home lab. A clean up like this seems the perfect time to move to newer, more appropriate platforms like nginx and puppet modules (yes, I have puppet code that predates modules) and to replace custom nagios wrapping with serverspec and such. Some of the evolved configurations with dozens of complicated edge cases are going to be difficult to migrate and I'm trying to bring myself to just kill a number of them, even if it leaves certain links dead. This site (unixdaemon.net) will probably be one of the biggest victims of this.

What have I learned from this audit and clean up? First, don't make open-ended commitments. As an example, I run one site for a group whose meetings I've not attended for over 6 years. Secondly, I no longer have the free time I once did, so it has to count for more. I need to get more proactive about handing off things I'm no longer passionate about.

Posted: 2014/09/11 00:24 | /unixdaemon | Permanent link to this entry

Wed, 23 Jul 2014

Ansible AWS Lookup Plugins

Once we started linking multiple CloudFormation stacks together with Ansible we started to feel the need to query Amazon Web Services for both the output values from existing CloudFormation stacks and certain other values, such as security group IDs and Elasticache Replication Group Endpoints. We found that the quickest and easiest way to gather this information was with a handful of Ansible Lookup Plugins.

I've put the code for the more generic Ansible AWS Lookup Plugins on github and even if you're an Ansible user who's not using AWS they are worth a look just to see how easy it is to write one.

In order to use these lookup plugins you'll want to configure both your default AWS credentials and, unless you want to keep the plugins alongside your playbooks, your lookup plugins path in your Ansible config.

First we configure the credentials for boto, the underlying AWS library used by Ansible.

$ cat ~/.aws/credentials
[default]
aws_access_key_id = 
aws_secret_access_key =

Then we can tell ansible where to find the plugins themselves.

$ cat ~/.ansible.cfg
[defaults]
lookup_plugins = /path/to/git/checkout/cloudformations/ansible-plugins/lookup_plugins

And lastly we can test that everything is working correctly:

$ cat region-test.playbook 
- hosts: localhost
  connection: local
  gather_facts: False

  tasks:
    - shell: echo region is =={{ item }}==
      with_items: lookup('aws_regions').split(',')

# and then run the playbook
$ ansible-playbook -i hosts region-test.playbook

Now you've seen how easy it is, go write your own!
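For reference, here's roughly what a plugin skeleton looks like in the 2014-era (pre-2.0) lookup API. The hard-coded region list is a stand-in for the boto call the real aws_regions plugin makes.

```python
# lookup_plugins/aws_regions.py -- illustrative skeleton only
class LookupModule(object):
    def __init__(self, basedir=None, **kwargs):
        # Ansible hands the plugin its base directory on construction
        self.basedir = basedir

    def run(self, terms, inject=None, **kwargs):
        # lookup() joins the returned list with commas, hence the
        # .split(',') in the playbook above; the real plugin asks boto
        return ["eu-west-1", "us-east-1", "us-west-1"]
```

Drop a file like this into your lookup_plugins path and `lookup('aws_regions')` becomes available in playbooks.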

Posted: 2014/07/23 17:55 | /tools/ansible | Permanent link to this entry

Tue, 25 Mar 2014

Managing CloudFormation Stacks with Ansible

Constructing a large, multiple application, virtual datacenter with CloudFormation can quickly lead to a sprawl of different stacks. The desire to split things sensibly, delegate control of separate tiers and loosely couple as many components as possible can lead to a large number of stacks, many of which need values from stacks created earlier in the run order. While it's possible to do this with the native AWS CloudFormation command line tools, or even some clever bash (or Cumulus), having a strong, higher level tool can make life a lot easier and more reproducible. In this post I'll show one possible way to manage interrelated stacks using Ansible.

We won't be delving into the individual templates used in this example. If you're having this kind of issue with CloudFormation then you probably have more than enough of your own to use as examples. Instead, I'll show a basic Ansible playbook for managing three related stacks.

- hosts: localhost
  connection: local
  gather_facts: False
  vars:
    stack_name: dswtest
    region: eu-west-1
    owner: dwilson
    ami_id: ami-n0tr34l
    keyname: key-64
    snsdest: test@example.org

The first part of our playbook should be familiar to most Ansible users. We set up where to run the playbook, how to connect and ensure we don't spend time gathering facts. We then define the variables that we'll be using as parameters to a number of stacks. The ability to specify literals in a single place was the first benefit I saw when converting a project to Ansible. This may not sound like a major win but being able to change the AMI ID in a single place, or even store it in an external file that our build system can automatically update, is something I'd find difficult to give up.

Now we'll move to the first of our Ansible tasks, a CloudFormation stack represented as a single Ansible resource. The underlying template creates a basic SNS resource we'll later use in all our auto-scaling groups.

  - name: Add SNS topic
    # the template path here is illustrative
    action: cloudformation
      stack_name={{ stack_name }}-sns-email-topic
      region={{ region }}
      template=templates/sns-email-topic.json
    args:
      template_parameters:
        AutoScaleSNSTopic: "{{ snsdest }}"
    register: asgsns

The 'args:' section contains the values we want to pass in to the template. Here we're only passing a single value that we defined earlier in the 'vars:' section. We'll see more complicated examples of this later. We also register the output from the CloudFormation action. This includes any values we specify as "Outputs" in the template and provides a nice way to deliberately define what we're exposing from our template. The alternative is to pull out arbitrary values from a given resource created in a previous stack but that's a hefty breach of encapsulation and will often bite you later when the templates change.
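When running under debug you can see that a registered result is just a nested structure; in Python terms the later lookups are plain dict indexing. The values below are made up for illustration.

```python
# shape of a registered cloudformation result (illustrative values only)
asgsns = {
    "stack_outputs": {
        "EmailSNSTopicARN": "arn:aws:sns:eu-west-1:123456789012:asg-events",
    },
}

# the same access a later task performs with {{ asgsns['stack_outputs'][...] }}
arn = asgsns["stack_outputs"]["EmailSNSTopicARN"]
```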

The Create Security Groups task doesn't really have anything interesting from an Ansible perspective; we run it, create the groups and gather the outputs using 'register' for use in our next template.

  - name: Create Security Groups
    action: cloudformation
      stack_name={{ stack_name }}-security-groups
      region={{ region }}
      template=templates/security-groups.json
    register: secgrp

The 'Create Webapp' example below shows most of the basic CloudFormation resource features in a single task. We use variables defined at the start of the playbook to reduce duplication of literal strings. We prefix the stack names to allow multiple developers to each build full sets of stacks without duplicate stack name conflicts while keeping grouping simple in the AWS web dashboard.

  - name: Create Webapp
    action: cloudformation
      stack_name={{ stack_name }}-webapp
      region={{ region }}
      template=templates/webapp.json
    args:
      template_parameters:
        Owner: "{{ owner }}"
        AMIId: "{{ ami_id }}"
        KeyName: "{{ keyname }}"
        AppServerFleetSize: 1
        ASGSNSArn:             "{{ asgsns['stack_outputs']['EmailSNSTopicARN']       }}"
        WebappSGID:            "{{ secgrp['stack_outputs']['WebappSGID']             }}"
        ElasticacheClientSGID: "{{ secgrp['stack_outputs']['ElasticacheClientSGID'] }}"

In the args section we also use the return values from our previous stacks. The nested value access is a little verbose but it's easy to pick up, and being able to see all the possible values when running Ansible under debug mode makes things a lot easier. We also had the need to pull down output values from stacks created outside of Ansible, so I wrote a simple Ansible CloudFormation lookup plugin.

So what does Ansible gain us as a stack management tool? In terms of raw CloudFormation it provides a nice way to remove boilerplate literals from each stack and define them once in the 'vars' section. The ability to register the output from a stack and then use it later on is essential for this kind of stack building, and retrieving existing values as a pythonish hash is much easier than doing it on the command line. As for added power, it should be easier to implement AWS functionality that's currently missing from CloudFormation as an Ansible module than as a CloudFormation external resource (although more on that when I actually write one), and performing other out of band tasks, letting your ticketing system know about a new stack for example, is a lot easier from Ansible than by wrapping the CLI tools manually.

I've been using Ansible for stack management in a project that involves over a dozen separate moving parts for the last month and so far it's been working fine with minimal pain.

Posted: 2014/03/25 23:38 | /tools | Permanent link to this entry

Sat, 22 Mar 2014

Project Book Pages

I've been doing my usual quarterly sweep of the always too full bookshelves and hit the usual dilemma of what to keep, what to donate to charity and what to recycle. Among the technical books in this batch is the 'Sendmail Cookbook', something I've always kept as a good luck charm to ward off the evil of needing to work with mail servers with m4 based configuration languages.

Sendmail is one of those projects that I've not kept up with over the years. I have no idea how much has changed since the book was published over a decade ago, 2003 in this case, so I don't know if this is a useful book to pass on or if it's dangerously out of date and should be removed from circulation. It'd be handy if the larger projects maintained a page of books related to the project and a table of how relevant the material is in relation to different versions.

This would not only help me prune my shelves of older, now out of date books, but would help people new to a project pick books that were still relevant for the versions they need to learn.

Posted: 2014/03/22 15:30 | /books | Permanent link to this entry

Mon, 17 Mar 2014

Managing CloudFormation Stacks With Cumulus

Working with multiple, related CloudFormation stacks can become quite taxing if you only use the native AWS command line tools. Commands start off gently -

cfn-create-stack dwilson-megavpc-sns-emails --parameters "AutoScaleSNSTopic=testy@example.org" \ 
  --template-file location/sns-email-topic.json

- but they quickly become painful. The two commands below each create stacks that depend on values from resources that have been defined in a previous stack. You can spot these values by their unfriendly appearance, such as 'rtb-9n0tr34lac55' and 'subnet-e4n0tr34la'.

# Add the bastion hosts template. The parameter names are illustrative;
# the resource ids come from stacks created earlier.
cfn-create-stack dwilson-megavpc-bastionhosts --parameters "\
KeyName=dwilson;\
PublicSubnetId=subnet-e4n0tr34la;\
RouteTableId=rtb-9n0tr34lac55\
" --template-file bastion.json

# create the web/app servers
cfn-create-stack dwilson-megavpc-webapps --parameters "\
KeyName=dwilson;\
WebappSubnetId=subnet-e4n0tr34la;\
RouteTableId=rtb-9n0tr34lac55\
" --template-file location/webapps.json

When building a large, multi-tier VPC you'll often find yourself needing to extract output values from existing stacks and pass them in as parameters to dependent stacks. This results in a lot of repeated literal strings and boilerplate in your commands and will soon cause you to start doubting your approach.
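The extract-and-reuse step those commands do by hand can be scripted against boto, the 2014-era AWS library. The stack name and region below are illustrative, and the network calls are only sketched in comments; the flattening helper itself is plain Python.

```python
def outputs_as_dict(stack):
    """Flatten a boto CloudFormation stack's Outputs into a plain dict."""
    return dict((output.key, output.value) for output in stack.outputs)

# Driven from boto it would look something like:
#   import boto.cloudformation
#   conn = boto.cloudformation.connect_to_region("eu-west-1")
#   stack = conn.describe_stacks("dwilson-megavpc-sns-emails")[0]
#   params = outputs_as_dict(stack)
```

Once flattened, the dict can be interpolated straight into the next stack's --parameters string instead of copying ids around by hand.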

The real pain came for us when we started adding extra availability zones for resilience. A couple of my co-workers were keeping their stuff running with bash and python + boto but the code bases were starting to get a little creaky and complicated and this seemed like a problem that should have already been solved in a nice, declarative way. It was about the point when we decided to add an extra subnet to a number of tiers that I caved and went trawling through github for somebody else's solution. After some investigation I settled on Cumulus as the first project to experiment with as a replacement for our ever growing, hand hacked, creation scripts. To pay Cumulus the proper respect it did make life a lot easier at first.

The code snippets below show an example set of stacks that were converted over from raw command lines like the above to Cumulus yaml based configs. First up we have the base declaration and a simple stack definition.

  locdsw:
    region: eu-west-1

    stacks:
      sns-email-topic:
        cf_template: sns-email-topic.json
        params:
          AutoScaleSNSTopic:
            value: testymctest@example.org

Each of the keys under 'stacks:' will be created as a separate CloudFormation stack by Cumulus. Their names will be prefixed with 'locdsw', taken from the first line of our example, and they'll be placed in the 'eu-west-1' region. The configuration above will result in the creation of a stack called 'locdsw-sns-email-topic' appearing in the CloudFormation dashboard.

The stack's resources are defined in the template specified in cf_template. Our example does not depend on existing stacks and takes a single parameter, AutoScaleSNSTopic, with a value of 'testymctest@example.org'. Cumulus has no support for variables so you'll find yourself repeating certain parameters, like the AMI ID and key ID, throughout the configuration.

For a while we had an internal branch that treated the CloudFormation templates as jinja2 templates. This enabled us to remove large amounts of duplication inside individual templates. These changes were submitted upstream but one of the goals of the Cumulus project is that the templates it manages can still be used by the native CloudFormation tools, so the patch was (quite fairly) rejected.

Let's move on to the second stack defined in our config. The point of interest here is the addition of an explicit dependency on the sns-email-topic stack. Note that it's not referred to using the prefixed name, which can be a point of confusion for new users.

      security-groups:
        cf_template: security-groups.json
        depends:
          - sns-email-topic

Finally we move on to an example declaration of a larger stack. The interesting parts of which are in the params section.

      webapp:
        cf_template: webapp.json
        depends:
          - sns-email-topic
          - security-groups
        params:
          AppServerFleetSize:
            value: 1
          Owner:
            value: dwilson
          AMIId:
            value: ami-n0tr34l
          KeyName:
            value: dwilson
          ASGSNSArn:
            source: sns-email-topic
            type: output
            variable: EmailSNSTopicARN
          WebappSGID:
            source: security-groups
            type: output
            variable: WebappSGID

The webapp params section contains two different types of values. Simple ones we've seen before, 'Owner' and 'AMIId' for example, and composite ones that reference values that other stacks define as outputs. Let's look at ASGSNSArn in a little more detail.

    ASGSNSArn:
      source: sns-email-topic
      type: output
      variable: EmailSNSTopicARN

Here, inside the webapp stack declaration, we look up a value defined in the output of the previously executed sns-email-topic template. From the CloudFormation Outputs for that template we retrieve the value of EmailSNSTopicARN. We then pass this to the webapp.json template as the ASGSNSArn parameter on stack creation. If you need to pull a parameter in from an existing stack that was created in some other way you can specify it as 'source: -fullstackname'. The '-' makes it an absolute name lookup, cumulus won't prefix the stackname with locdsw for example.
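That naming rule is small enough to state exactly. A sketch of the behaviour described above, not Cumulus' own code:

```python
def full_stack_name(prefix, source):
    """Resolve a Cumulus 'source:' reference: a leading '-' means absolute."""
    if source.startswith("-"):
        return source[1:]                   # absolute: use the name as-is
    return "%s-%s" % (prefix, source)       # relative: prefix with the group name
```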

Cumulus met a number of my stack management needs, and I'm still using it for older, longer lived stacks such as monitoring, but because of its narrow focus it began to feel restricting quite quickly. I've started to investigate Ansible as a possible replacement as it's a more generic tool and I'm in need of flexibility that'd feel quite out of place in cumulus.

In terms of day to day operations the main issue we hit was the need to turn on ALL the debug output, both Cumulus and boto, to see why stack creations failed; a lot of the AWS-returned errors were being caught and replaced by generic, unhelpful messages at any log level above debug. Running under debug produces a LOT of output, especially when boto is idle polling, waiting for a stack creation to complete so it can begin the next one. The lack of any variables or looping was also an early constraint, and none of the possible answers appealed. The first was to push the complexity down into the templates with large mapping sections, increasing the duplication of literals between templates and requiring a lot of Fn::FindInMap calls. The second was to have multiple configs, which was less than ideal due to the number of permutations: environment (dev, stage, live), region and, in development, which developer was using it. The third, a small pre-processor that expanded embedded Jinja2 into a CloudFormation template, added another layer between writing and debugging and so didn't last very long.

If you're running a small number of simple templates then Cumulus might be the one tool you need. For us, Ansible seems to be a better fit, but more about that in the next post.

Posted: 2014/03/17 20:01 | /tools | Permanent link to this entry


Copyright © 2000-2014 Dean Wilson :: RSS Feed