UNIXDAEMON Small Mosaic



Thu, 09 Oct 2014

Facter: Ansible facts in Puppet

Have you ever needed to access Ansible facts from inside Puppet? If you ever do, you can use this basic ansible_facts custom fact.



    # make sure you have ansible installed
    $ sudo puppet resource package ansible ensure=present

    # clone my experimental puppet fact repo
    $ git clone https://github.com/deanwilson/unixdaemon-puppet_facts.git

    # Try running the fact
    $ ruby -I unixdaemon-puppet_facts/lib/ `which facter` ansible_facts -j
    {
      "ansible_facts": {
        "ansible_architecture": "x86_64",
        "ansible_bios_date": "04/25/2013",
        "ansible_bios_version": "RKPPT%$DSFH.86A.0457.20DF3.0425.1251",
        "ansible_cmdline": {
        ... snip ...


While it's nice to see the output in facter, you need to make a small change to your config file before you can use these facts in puppet. Set stringify_facts = false in the [main] section of your puppet.conf file and you can then use the new facts inside your manifests.
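The relevant puppet.conf fragment is small:

```ini
# /etc/puppet/puppet.conf
[main]
stringify_facts = false
```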



    $ puppet apply -e 'notify { "Form factor: ${::ansible_facts['ansible_form_factor']}": }'
    Notice: Form factor: Desktop


Would I use this in general production? No, never again, but it's a nice reminder of how easy facter is to extend. A couple of notes if you decide to play with this fact - I deliberately filter out non-ansible facts. There was something odd about seeing facter facts nested inside Ansible ones inside facter. If you foolishly decide to use this heavily, and you're running puppet frequently, adding a simple cache for the ansible results might be worth looking at to help your performance.
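If you did want to experiment with that caching idea, a minimal sketch might look like the following. The cached_run helper, the file location and the TTL are my own inventions for illustration, not part of the fact in the repo:

```ruby
require 'json'

# Run `command` and cache its JSON output in `cache_file`.
# Only re-run the command when the cache is older than `ttl` seconds.
def cached_run(command, cache_file, ttl = 60)
  if File.exist?(cache_file) && (Time.now - File.mtime(cache_file)) < ttl
    return JSON.parse(File.read(cache_file))
  end

  output = `#{command}`
  File.write(cache_file, output)
  JSON.parse(output)
end

# In the real fact you'd wrap whatever command produces the facts JSON, e.g.:
#   facts = cached_run('some-ansible-facts-command', '/tmp/ansible_facts.json', 300)
```

With a TTL of a few minutes, frequent puppet runs would re-use the cached results rather than paying the cost of an ansible invocation every time.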

Posted: 2014/10/09 20:34 | /tools/puppet | Permanent link to this entry


Puppet 3.7 File Function Improvements

Puppet's always had a couple of little inconsistencies when it comes to the file and template functions. The file function has always been able to search for multiple files and return the contents of the first file found, but it required absolute paths. The template function accepts module based paths but doesn't allow for matching on the first found file, although this can be worked around with the Puppet Multiple Template Source Function.

One of the little niceties that came with Puppet 3.7 is an easily missed improvement to the file function that makes using it easier and more consistent with the template function. In earlier puppet versions you called file with absolute paths, like this:



  file { '/tmp/fakefile':
    content => file('/etc/puppet/modules/yourmodulename/files/fakefile')
  }


Thanks to a code submission from Daniel Thornton (which fixes an issue that's been logged since at least 2009) you can now call the file function in the same way as you'd use template, while retaining support for matching the first found file.



  file { '/tmp/fakefile':
    content => file('yourmodulename/fakefile')
  }

  # or

  file { '/tmp/fakefile':
    content => file("yourmodulename/fakefile.${::hostname}", 'yourmodulename/fakefile')
  }


Although most puppet releases come with a couple of 'wow' features, sometimes it's the little ones like this, which add consistency to the platform and help clean up and abstract your modules, that you appreciate more in the long term.

Posted: 2014/10/09 17:07 | /tools/puppet | Permanent link to this entry


Sat, 04 Oct 2014

Puppet Lint Custom Checks

In the past, if you wanted to run your own puppet-lint checks, there was no official, really clean way to distribute them outside of the core code. Now, with the 1.0 release of puppet-lint, you can write your own external puppet-lint checks and make them easily distributable.

I spent a little bit of time this morning reading through the existing 3rd party community plugins and, after porting a private absolute template path check over to the new system, I have to say that rodjek has done an excellent job with both the ease of writing your own checks and the quality of the developer tutorial. If you have any local style rules then now's a great time to get them represented in your puppet-lint runs.
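For a flavour of what goes into such a check, here's the core detection logic of a hypothetical 'no absolute template paths' rule as a plain Ruby method. A real plugin would wrap this in puppet-lint's check DSL and work on the lexer's token stream rather than raw text; the method name and regex below are purely illustrative:

```ruby
# Return the (1-indexed) line numbers of template() calls that use an
# absolute filesystem path rather than a 'module/file.erb' style path.
def absolute_template_path_warnings(manifest_text)
  manifest_text.each_line.with_index(1).select do |line, _|
    line =~ /template\(\s*['"]\//
  end.map { |_, lineno| lineno }
end
```

Once the detection logic works, registering it as a proper check and emitting warnings at the offending tokens is mostly boilerplate covered by the developer tutorial.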

Posted: 2014/10/04 10:29 | /tools/puppet | Permanent link to this entry


Thu, 11 Sep 2014

Puppet Certified Professional 2014 Exam

A little while ago, in a twitter conversation spread over many hops, a few of us discussed the Puppet Certified Professional exam and its topic coverage: specifically, how much of it is focused on Puppet Enterprise (PE), and whether that would either dissuade users of purely FOSS Puppet or heavily impact their chances of passing if they'd never used PE.

While I stand by my views, I began to worry that my knowledge of the syllabus was based only on hearsay and the practice exam questions, and that I was being overly harsh and possibly spreading misinformation through my own ignorance. So I booked a place and took the exam a couple of days later.

The exam is multiple choice and most questions are quite direct. While there were tricky questions I only encountered one that could be either a very subtle trick or a mistake, and I've reported that upstream and received a positive response about it being investigated. The questions I had heavily pointed towards topics that you'd have to use puppet on a semi-regular basis to know the answers to.

In terms of candidate preparation, other than the obvious choice of taking PuppetLabs training courses, I think that being comfortable with all the material in Pro Puppet and having a decent six to twelve months of hands-on experience with Puppet, MCollective and PuppetDB will cover most of the scope. This also requires knowing how puppet fits together and understanding how it works, not just being able to write modules and work with the DSL. In hindsight I'd have scored higher by downloading the Puppet Enterprise VM and spending a few hours working through the GUI features. Instead I went in having never used PE and still had a decent pass. I'd also note that the practice questions mentioned above are an accurate illustration of the real exam questions' format and difficulty.

As I've only just taken the exam, and I already have more than enough puppet experience on my CV, I don't think the cert will add much to my employability; but for people with fewer years of puppet who are looking to validate their skills, it's not a bad way to spend an hour. Doubly so if you can take the test for free at a local puppetcamp, in case you needed any more reasons to attend one.

Posted: 2014/09/11 12:36 | /tools/puppet | Permanent link to this entry


Simplifying an Online Presence

It is amazing how many small commitments and fragments of an online presence you can collect over years of being involved in different projects and user groups. I've ended up hosting planets, user group sites, submission forms (and other scripts), managing twitter announcement accounts, pushing tar balls (don't ask) and running (and owning) more domains than I could ever really want or do anything useful with. After an initial audit of how difficult it'd be to move some of my public servers I've realised that something has to change.

I've decided to take a deliberate step back and reduce my involvement in a number of projects, and my general online footprint, to levels that are comfortable and maintainable while leaving me enough time to get involved in some newer projects, technology and groups that are relevant to me. Although I slowly began the cleaning process a few months ago, initially by transferring domains and in some cases even deleting websites and removing their DNS, there's still quite a lot of cruft to trim.

Like most full time sysadmins, my personal systems, which thanks to Debian and Bytemark have survived many years and in-place release upgrades, are a lot more disorderly, and manual, than I'd accept at work or even in my home lab. A clean up like this seems to be the perfect time to move to newer, more appropriate platforms like nginx and puppet modules (yes, I have puppet code that predates modules) and to replace custom nagios wrapping with serverspec and the like. Some of the evolved configurations with dozens of complicated edge cases are going to be difficult to migrate, and I'm trying to bring myself to just kill a number of them, even if it leaves certain links dead. This site (unixdaemon.net) will probably be one of the biggest victims of this.

What have I learned from this audit and clean up? First, don't make open-ended commitments. As an example, I run one site for a group I've not even attended for over 6 years. Second, I no longer have the free time I once did, so what I have has to count for more. I need to get more proactive about handing off the things I'm no longer passionate about.

Posted: 2014/09/11 00:24 | /unixdaemon | Permanent link to this entry


Wed, 23 Jul 2014

Ansible AWS Lookup Plugins

Once we started linking multiple CloudFormation stacks together with Ansible we started to feel the need to query Amazon Web Services for both the output values from existing CloudFormation stacks and certain other values, such as security group IDs and Elasticache Replication Group Endpoints. We found that the quickest and easiest way to gather this information was with a handful of Ansible Lookup Plugins.

I've put the code for the more generic Ansible AWS Lookup Plugins on github and even if you're an Ansible user who's not using AWS they are worth a look just to see how easy it is to write one.

In order to use these lookup plugins you'll want to configure both your default AWS credentials and, unless you want to keep the plugins alongside your playbooks, your lookup plugins path in your Ansible config.

First we configure the credentials for boto, the underlying AWS library used by Ansible.



cat ~/.aws/credentials
[default]
aws_access_key_id = 
aws_secret_access_key =


Then we can tell ansible where to find the plugins themselves.



cat ~/.ansible.cfg

[defaults]
...
lookup_plugins = /path/to/git/checkout/cloudformations/ansible-plugins/lookup_plugins


And lastly, we can test that everything is working correctly.



$ cat region-test.playbook 
---
- hosts: localhost
  connection: local
  gather_facts: False

  tasks:
  - shell: echo region is =={{ item }}==
    with_items: "{{ lookup('aws_regions').split(',') }}"

# and then run the playbook
$ ansible-playbook -i hosts region-test.playbook

Now you've seen how easy it is, go write your own!

Posted: 2014/07/23 17:55 | /tools/ansible | Permanent link to this entry


Tue, 25 Mar 2014

Managing CloudFormation Stacks with Ansible

Constructing a large, multiple application, virtual datacenter with CloudFormation can quickly lead to a sprawl of different stacks. The desire to split things sensibly, delegate control of separate tiers and loosely couple as many components as possible can lead to a large number of stacks, lots of which need values from stacks created earlier in the run order. While it's possible to do this with the native AWS CloudFormation command line tools, or even some clever bash (or Cumulus), having a strong, higher level tool can make life a lot easier and reproducible. In this post I'll show one possible way to manage interrelated stacks using Ansible.

We won't be delving into the individual templates used in this example. If you're having this kind of issue with CloudFormation then you probably have more than enough of your own to use as examples. Instead, I'll show a basic Ansible playbook for managing three related stacks.


---
- hosts: localhost
  connection: local
  gather_facts: False
  vars:
    stack_name: dswtest
    region: eu-west-1
    owner: dwilson
    ami_id: ami-n0tr34l
    keyname: key-64
    snsdest: test@example.org

The first part of our playbook should be familiar to most Ansible users. We set up where to run the playbook, how to connect and ensure we don't spend time gathering facts. We then define the variables that we'll be using as parameters to a number of stacks. The ability to specify literals in a single place was the first benefit I saw when converting a project to Ansible. This may not sound like a major win but being able to change the AMI ID in a single place, or even store it in an external file that our build system can automatically update, is something I'd find difficult to give up.
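If you do move a value like the AMI ID out to an external file, the playbook can pull it in with vars_files; the file name and comment below are hypothetical:

```yaml
# in the playbook header, alongside 'vars:'
  vars_files:
    - amis.yml   # e.g. written by the build system, containing: ami_id: ami-n0tr34l
```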

Now we'll move to the first of our Ansible tasks, a CloudFormation stack represented as a single Ansible resource. The underlying template creates a basic SNS resource we'll later use in all our auto-scaling groups.



  tasks:
  - name: Add SNS topic
    action: cloudformation
      stack_name={{ stack_name }}-sns-email-topic
      state=present
      region="{{region}}"
      template=sns-email-topic.json
    args:
      template_parameters:
        AutoScaleSNSTopic: "{{snsdest}}"
    register: asgsns

The 'args:' section contains the values we want to pass in to the template. Here we're only passing a single value that we defined earlier in the 'vars:' section. We'll see more complicated examples of this later. We also register the output from the CloudFormation action. This includes any values we specify as "Outputs" in the template and provides a nice way to deliberately define what we're exposing from our template. The alternative is to pull out arbitrary values from a given resource created in a previous stack but that's a hefty breach of encapsulation and will often bite you later when the templates change.

The Create Security Groups CloudFormation task doesn't really have anything interesting from an Ansible perspective: we run it, create the security groups and gather the outputs using 'register' for use in our next template.



  - name: Create Security Groups
    action: cloudformation
      stack_name={{ stack_name }}-security-groups
      state=present
      region="{{region}}"
      template=security-groups.json
    register: secgrp

The 'Create Webapp' example below shows most of the basic CloudFormation resource features in a single task. We use variables defined at the start of the playbook to reduce duplication of literal strings. We prefix the stack names to allow multiple developers to each build full sets of stacks without duplicate stack name conflicts while keeping grouping simple in the AWS web dashboard.



  - name: Create Webapp
    action: cloudformation
      stack_name={{ stack_name }}-webapp
      state=present
      region="{{region}}"
      template=webapp.json
    args:
      template_parameters:
        Owner: "{{ owner }}"
        AMIId: "{{ ami_id }}"
        KeyName: "{{ keyname }}"
        AppServerFleetSize: 1
        ASGSNSArn:             "{{ asgsns['stack_outputs']['EmailSNSTopicARN']      }}"
        WebappSGID:            "{{ secgrp['stack_outputs']['WebappSGID']            }}"
        ElasticacheClientSGID: "{{ secgrp['stack_outputs']['ElasticacheClientSGID'] }}"

In the args section we also use the return values from our previous stacks. The nested value access is a little verbose, but it's easy to pick up, and being able to see all the possible values when running Ansible under debug mode makes things a lot easier. We also had the need to pull down output values from stacks created outside of Ansible, so I wrote a simple Ansible CloudFormation lookup plugin.

So what does Ansible gain us as a stack management tool? In terms of raw CloudFormation it provides a nice way to remove boilerplate literals from each stack and define them once in the 'vars' section. The ability to register the output from a stack and then use it later on is essential for this kind of stack building, and retrieving existing values as a pythonish hash is much easier than doing it on the command line. As for added power, it should be easier to implement AWS functionality that's currently missing from CloudFormation as an Ansible module than as a CloudFormation external resource (although more on that when I actually write one), and performing other out of band tasks, such as letting your ticketing system know about a new stack, is a lot easier to integrate into Ansible than by trying to wrap the cli tools manually.

I've been using Ansible for stack management in a project that involves over a dozen separate moving parts for the last month and so far it's been working fine with minimal pain.

Posted: 2014/03/25 23:38 | /tools | Permanent link to this entry


Sat, 22 Mar 2014

Project Book Pages

I've been doing my usual quarterly sweep of the always too full bookshelves and hit the usual dilemma of what to keep, what to donate to charity and what to recycle. Among the technical books in this batch is the 'Sendmail Cookbook', something I've always kept as a good luck charm to ward off the evil of needing to work with mail servers with m4 based configuration languages.

Sendmail is one of those projects that I've not kept up with over the years. I have no idea how much has changed since the book was published over a decade ago, 2003 in this case, so I don't know if this is a useful book to pass on or if it's dangerously out of date and should be removed from circulation. It'd be handy if the larger projects maintained a page of books related to the project and a table of how relevant the material is in relation to different versions.

This would not only help me prune my shelves of older, now out of date books, but would help people new to a project pick books that were still relevant for the versions they need to learn.

Posted: 2014/03/22 15:30 | /books | Permanent link to this entry


Mon, 17 Mar 2014

Managing CloudFormation Stacks With Cumulus

Working with multiple, related CloudFormation stacks can become quite taxing if you only use the native AWS command line tools. Commands start off gently -



cfn-create-stack dwilson-megavpc-sns-emails --parameters "AutoScaleSNSTopic=testy@example.org" \ 
  --template-file location/sns-email-topic.json


- but they quickly become painful. The two commands below each create stacks that depend on values from resources that have been defined in a previous stack. You can spot these values by their unfriendly appearance, such as 'rtb-9n0tr34lac55' and 'subnet-e4n0tr34la'.



# Add the bastion hosts template

cfn-create-stack dwilson-megavpc-bastionhosts --parameters \ 
"
VPC=vpc-n0tr34l;BastionServerSNSArn=arn:aws:sns:us-east-1:14204989:fooo;
PrivateNATRouteTableAZ1=rtb-9n0tr34lac55;PrivateNATRouteTableAZ2=rtb-6n0tr34l5e0a;
PublicSubnetAZ1=subnet-3n0tr34l5c;PublicSubnetAZ2=subnet-e4n0tr34la;
KeyName=dwilson; \
" --template-file bastion.json


# create the web/app servers
cfn-create-stack dwilson-megavpc-webapps --parameters \
"
VPC=vpc-65n0tr34lb;BastionserverSG=sg-n0tr34l;PrivateNATRouteTableAZ1=rtb-n0tr34l;
PrivateNATRouteTableAZ2=rtb-n0tr34l;PublicSubnetAZ1=subnet-n0tr34l;
PublicSubnetAZ2=subnet-n0tr34l;KeyName=dwilson;WebServerSNSArn=arn:aws:sns:us-east-1:14:fooo
" --template-file location/webapps.json


When building a large, multi-tier VPC you'll often find yourself needing to extract output values from existing stacks and pass them in as parameters to dependent stacks. This results in a lot of repeated literal strings and boilerplate in your commands and will soon cause you to start doubting your approach.

The real pain came for us when we started adding extra availability zones for resilience. A couple of my co-workers were keeping their stuff running with bash and python + boto, but the code bases were starting to get a little creaky and complicated, and this seemed like a problem that should already have been solved in a nice, declarative way. It was about the point when we decided to add an extra subnet to a number of tiers that I caved and went trawling through github for somebody else's solution. After some investigation I settled on Cumulus as the first project to experiment with as a replacement for our ever growing, hand hacked creation scripts. To give Cumulus its due, it did make life a lot easier at first.

The code snippets below show an example set of stacks that were converted over from raw command lines like the above to Cumulus yaml based configs. First up we have the base declaration and a simple stack definition.



locdsw:
  region: eu-west-1

  stacks:
    sns-email-topic:
      cf_template: sns-email-topic.json
      depends:
      params:
        AutoScaleSNSTopic:
          value: testymctest@example.org

Each of the keys under 'stacks:' will be created as a separate CloudFormation stack by cumulus. Their names will be prefixed with 'locdsw', taken from the first line of our example, and they'll be placed in the 'eu-west-1' region. The configuration above will result in the creation of a stack called 'locdsw-sns-email-topic' in the CloudFormation dashboard.

The stack's resources are defined in the template specified in cf_template. Our example does not depend on any existing stacks and takes a single parameter, AutoScaleSNSTopic, with a value of 'testymctest@example.org'. Cumulus has no support for variables, so you'll find yourself repeating certain parameters, such as AMI and key IDs, throughout the configuration.

For a while we had an internal branch that treated the CloudFormation templates as jinja2 templates. This enabled us to remove large amounts of duplication inside individual templates. These changes were submitted upstream but one of the goals of the Cumulus project is that the templates it manages can still be used by the native CloudFormation tools, so the patch was (quite fairly) rejected.

Let's move on to the second stack defined in our config. The point of interest here is the addition of an explicit dependency on the sns-email-topic stack. Note that it's not referred to using the prefixed name, which can be a point of confusion for new users.



    security-groups:
      cf_template: security-groups.json
      depends:
        - sns-email-topic

Finally we move on to an example declaration of a larger stack. The interesting parts of which are in the params section.



    webapp:
      cf_template: webapp.json
      depends:
        - sns-email-topic
        - security-groups
      params:
        AppServerFleetSize:
          value: 1
        Owner:
          value: dwilson
        AMIId:
          value: ami-n0tr34l
        KeyName:
          value: dwilson
        ASGSNSArn:
          source: sns-email-topic
          type: output
          variable: EmailSNSTopicARN
        WebappSGID:
          source: security-groups
          type: output
          variable: WebappSGID

The webapp params section contains two different types of values. Simple ones we've seen before, 'Owner' and 'AMIId' for example, and composite ones that reference values that other stacks define as outputs. Let's look at ASGSNSArn in a little more detail.



  ASGSNSArn:
    source: sns-email-topic
    type: output
    variable: EmailSNSTopicARN

Here, inside the webapp stack declaration, we look up a value defined in the output of the previously executed sns-email-topic template. From the CloudFormation Outputs for that template we retrieve the value of EmailSNSTopicARN. We then pass this to the webapp.json template as the ASGSNSArn parameter on stack creation. If you need to pull a parameter in from an existing stack that was created in some other way you can specify it as 'source: -fullstackname'. The '-' makes it an absolute name lookup, cumulus won't prefix the stackname with locdsw for example.
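For example, pulling an output from a stack created entirely outside this config might look like the following; the 'shared-infra' stack name and 'BaseSGID' output are hypothetical:

```yaml
  BaseSGID:
    source: -shared-infra
    type: output
    variable: BaseSGID
```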

Cumulus met a number of my stack management needs, and I'm still using it for older, longer lived stacks such as monitoring, but because of its narrow focus it began to feel restricting quite quickly. I've started to investigate Ansible as a possible replacement as it's a more generic tool and I'm in need of flexibility that'd feel quite out of place in cumulus.

In terms of day to day operations, the main issue we hit was the need to turn on ALL the debug, both cumulus and boto, to see why stack creations failed. A lot of the AWS returned errors were being caught and replaced by generic, unhelpful error messages at any filter level greater than debug, and running under debug results in a LOT of output, especially while boto idle-polls waiting for one stack creation to complete so it can begin the next. The lack of any variables or looping was also an early constraint. One answer was to push the complexity down into the templates and write large mapping sections, which increases duplication of literals between templates and means a lot of Fn::FindInMap lookups. The second approach was to have multiple configs; this was less than ideal due to the number of permutations: environment (dev, stage, live), region and, in development, which developer was using it. The third option, a small pre-processor that expanded embedded jinja2 into a CloudFormation template, added another layer between writing and debugging and so didn't last very long.

If you're running a small number of simple templates then Cumulus might be the one tool you need. For us, Ansible seems to be a better fit, but more about that in the next post.

Posted: 2014/03/17 20:01 | /tools | Permanent link to this entry


Tue, 04 Mar 2014

Abstracting CloudFormation IAM with Nested Stacks

Once we started extracting applications into different logical CloudFormation stacks and physical templates, we began to notice quite a lot of duplication in our json when it came to declaring IAM rules. Some of our projects store their puppet, hiera and rpm files in restricted S3 buckets, so allowing stacks access to them based upon environment, region, stack name and other criteria quickly becomes quite long-winded. After looking at a couple of dozen application templates and finding that over 30% of the json was IAM related, it was time to find a different approach.

One of the CloudFormation techniques I'd seen mentioned but never used before was nested CloudFormation stacks. This allows you to define an entire stack as just another resource in your template. Here's some example json that does this:



  "Resources" : {

    "IAMRolesStack" : {
      "Type" : "AWS::CloudFormation::Stack",
      "Properties" : {
        "TemplateURL" : "https://s3-eu-west-1.amazonaws.com/my-iam-rules/projectname/iam-roles-20140301.json",
        "Parameters" : {
          "Stack": "testy-webapp",
          "Type":  "webapp",
          "App":   "tinyess",
          "Env":   { "Ref" : "DeploymentEnvironment" }
        }
      }
    }

  }

You can see that a stack is declared in the same manner as all other resources. The 'TemplateURL' property must point to a URL that hosts a complete, valid CloudFormation template. This allows you to develop the nested stack in the same way as you'd progress your actual application templates and test it in isolation. For my experiments I found it easiest to store them in S3 under a basic hierarchy with a little versioning, allowing multiple versions of the IAM rules to be in use at once across the stacks. The other notable property in the example is 'Parameters'. These values are passed to the sub-stack at creation time as actual parameters and are what makes this approach so flexible and powerful.

Inside the nested stack template we define an AWS::IAM::Role, an AWS::IAM::InstanceProfile and a number of AWS::IAM::Policy types that are abstracted to only allow access for one app/environment combination at a time. We do this using the parameters we pass in as values at different levels of the hierarchy. This way we can ensure that every application using a specific version of the IAM roles gets exactly the same permissions, while not bulk pasting the rules into each application's json template or hard coding any of the application specific values. It's also worth noting that, as stacks are given "CloudFormationed" IDs that include some randomness, you can have multiple versions of the nested stack at once with no overlap or conflicts between apps.

You can see a small extract from our sample IAM template, with the parameters interpolated into the path, here -



  "SecretPolicy": {
    "Type": "AWS::IAM::Policy",
    "Properties": {
      "PolicyDocument": {
        "Statement": [ {
            "Effect": "Allow",
            "Action": [
              "S3:ListBucket"
            ],
            "Resource": [
              "arn:aws:s3:::org.example.test.secrets"
            ]
          },
          {
            "Effect": "Allow",
            "Action": [
              "S3:GetObject"
            ],
            "Resource": [
              "arn:aws:s3:::org.example.test.secrets/common.yaml",
              { "Fn::Join" : [ "", [
                "arn:aws:s3:::org.example.test.secrets/type/",
                { "Ref" : "App"   }, ".",
                { "Ref" : "Type"  }, ".",
                { "Ref" : "Env"   }, ".",
                { "Ref" : "Stack" }, ".yaml"
              ] ] },

              { "Fn::Join" : [ "", [
                "arn:aws:s3:::org.example.test.secrets/type/",
                { "Ref" : "App"  }, ".",
                { "Ref" : "Type" }, ".yaml"
              ] ] },

Now that we've declared and created the nested stack let's use the IamInstanceProfile it created in the auto scaling launch configuration that lives in the containing stack.



    "AppServerFleetLaunchConfig" : {
      "Type" : "AWS::AutoScaling::LaunchConfiguration",
      "Properties" : {
        ...
        "IamInstanceProfile": { "Fn::GetAtt" : [ "IAMRolesStack", "Outputs.InstanceProfile" ] },
        ...
      }
    }

Accessing nested stack outputs is as simple as a call to Fn::GetAtt with the resource name of the nested stack as the first argument (IAMRolesStack, as seen in our first code snippet) and the output's name as part of the second.


So what did we get from this? A few very worthwhile things. We removed a LOT of boilerplate from all our application templates. This also makes CloudFormation application templates easier to create, as only a few people need in-depth knowledge of our IAM rules and bucketing scheme; application templates can focus on the application. And it's easier to confirm that applications have the same access rights, based on the S3 bucket used, rather than diffing through lots of subtly different IAM resources.

I'm using this technique on a couple of medium size projects at the moment and so far it seems like a good way to overcome IAM json spaghetti with no large drawbacks.

Posted: 2014/03/04 22:38 | /tools | Permanent link to this entry


Sat, 01 Mar 2014

Structured Facts with Facter 2

Structured facts in facter had become the Puppet community's version of 'Duke Nukem Forever', something that's always just around the next corner. Now that the facter 2.0.1 release candidate is out you can finally get your hands on an early version and do some experimentation.

First we grab a version of facter 2 that supports structured facts from puppetlabs -



 # our play ground
 mkdir /tmp/facter && cd /tmp/facter

 # grab the code
 wget https://downloads.puppetlabs.com/facter/facter-2.0.1-rc1.tar.gz

 cd facter-2.0.1-rc1/

 # check facter runs from our expanded archive
 ruby -I lib bin/facter
  

This is the part where we can be underwhelmed: it's all still flat. Don't let the lack of nested facts dishearten you though. The Puppetlabs people have done all the hard work of implementing structured facts support, they've just not converted any showcase facts over yet. Instead of waiting for an official structured fact, let's add our own and have a little play.

As we're experimenting with a throw away environment, we'll drop the structured fact directly into our expanded archive. In a real environment you'd never do this; you'd either use FACTERLIB or deploy your modules properly with puppet as Luke intended.



 # install the plugin
 wget https://raw.github.com/deanwilson/unixdaemon-puppet_facts/master/lib/facter/yumplugins.rb -O lib/facter/yumplugins.rb

 # and run it
 ruby -I lib bin/facter yumplugins

 pluginblacklistlangpacksprestorefresh-packagekitwhiteoutdisabledblacklistwhiteoutenabledlangpacksprestorefresh-packagekit

Well, our first TODO will be to determine how to show structured facts as strings, but we'll defer that for now as we really want to see some deep nesting. Assuming you're on a RedHat osfamily host you can run facter with the yaml output yourself; otherwise you'll have to settle for the sample outputs below:



 $ ruby -I lib bin/facter yumplugins --yaml

--- 
yumplugins: 
  plugin: 
  - blacklist
  - langpacks
  - presto
  - refresh-packagekit
  - whiteout
  enabled: 
  - langpacks
  - presto
  - refresh-packagekit
  disabled: 
  - blacklist
  - whiteout

  # and now try it as json

 $ ruby -I lib ./bin/facter yumplugins -j
{
  "yumplugins": {
    "plugin": [
      "blacklist",
      "langpacks",
      "presto",
      "refresh-packagekit",
      "whiteout"
    ],
    "enabled": [
      "langpacks",
      "presto",
      "refresh-packagekit"
    ],
    "disabled": [
      "blacklist",
      "whiteout"
    ]
  }
}

Success! Structured fact output! From (nearly) Puppet! Of course, this is only a release candidate for Facter 2, so we're not production ready yet, but as a taster of what's coming, and a way to get ahead and start converting your own facts, it's a lovely, and amazingly overdue, gift.

As for writing structured facts: as you can see from my structured yumplugins fact example, there's no difference between a structured fact and an unstructured one apart from the value it returns.
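To sketch what that means in practice, the value-building half of a yumplugins-style fact is plain Ruby that happens to return a Hash. The method name and the hardcoded plugin data below are illustrative, not taken from the real fact:

```ruby
# Sketch of building a structured (Hash) fact value in plain Ruby.
# In the real fact this value would be returned from a setcode block,
# roughly: Facter.add(:yumplugins) { setcode { build_plugin_fact(state) } }

def build_plugin_fact(plugin_state)
  # plugin_state maps plugin name => enabled? (true/false)
  {
    'plugin'   => plugin_state.keys.sort,
    'enabled'  => plugin_state.select { |_, on| on }.keys.sort,
    'disabled' => plugin_state.reject { |_, on| on }.keys.sort,
  }
end

# hardcoded sample data - the real fact reads the yum plugin configs
state = {
  'blacklist' => false, 'langpacks' => true, 'presto' => true,
  'refresh-packagekit' => true, 'whiteout' => false,
}

puts build_plugin_fact(state).inspect
```

Because the whole structure is just the return value, converting an existing flat fact is usually a matter of changing what the setcode block returns.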

Posted: 2014/03/01 16:04 | /tools/puppet | Permanent link to this entry


Automatic CloudFormation Template Validation with Guard::CloudFormation

One of the nice little conveniences I've started to use in my daily work with Amazon Web Services CloudFormation is the Guard::CloudFormation ruby gem.

The Guard gem "is a command line tool to easily handle events on file system modifications" which, simply put, means "run a command when a file changes". While I've used a number of different little tools to do this in the past, Guard presents a promising base to build more specific test executors on, so I've started to integrate it into more aspects of my workflow. In this example I'm going to show you how to validate a CloudFormation template each time you save a change to it.

The example below assumes that you already have the AWS CloudFormation command line tools installed, configured and available on your path.



# our example template - a single SNS resource

$ cat example-sns-email-topic.json 
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description" : "Creates an example SNS topic",

  "Parameters" : {
    "AutoScaleSNSTopic": {
      "Description": "Email address for notifications.",
      "Type": "String",
      "Default": "testymctest@example.org"
    }
  },

  "Resources" : {
    "EmailSNSTopic": {
      "Type" : "AWS::SNS::Topic",
      "Properties" : {
        "DisplayName" : "Autoscaling notifications for Location service",
        "Subscription": [ {
          "Endpoint" : { "Ref" : "AutoScaleSNSTopic" },
          "Protocol" : "email"
        } ]
      }
    }
  }
}

# now, do a manual run to ensure all the basics are working

$ cfn-validate-template --template-file example-sns-email-topic.json
PARAMETERS  AutoScaleSNSTopic  testymctest@example.org  false  Email address for notifications.


Now that we have confirmed our CloudFormation tools are working we can sprinkle some automation all over it.



# install the required gems

gem install guard guard-cloudformation

# and then create a basic Guardfile
# in this case we watch all .json files in the current directory

cat << 'EOC' > Guardfile

guard "cloudformation", :templates_path => ".", :all_on_start => false do
  watch(%r{.+\.json$})
end

EOC

# run guard

$ guard 
10:07:49 - INFO - Guard is using NotifySend to send notifications.
10:07:49 - INFO - Guard is using TerminalTitle to send notifications.
10:07:49 - INFO - Guard is now watching at '/home/cfntest/.../cloudformations/location'
[1] guard(main)> 


Now that Guard is up and running, open up a second terminal in the directory you've been working in. We'll now make a couple of changes and watch Guard in action. First we'll make a small change to the text that shouldn't break anything.



# run the sed command to change the email address - shouldn't break

$ sed -i -e 's/testymctest@example.org/test@example.org/' example-sns-email-topic.json

# in the term running Guard we see -

10:12:31 - INFO - Validating: example-sns-email-topic.json
Validating example-sns-email-topic.json...
PARAMETERS  AutoScaleSNSTopic  test@example.org  false  Email address for notifications.


On my desktop the validation output is a lovely terminal green and I also get a little pop-up in the corner telling me the validate was successful. Leaving Guard open, we'll run a breaking change.



# run the sed command to remove needed quotes - this will not end well

$ sed -i -e 's/"Ref" :/Ref :/' example-sns-email-topic.json

# in the term running Guard we see -

10:13:51 - INFO - Validating: example-sns-email-topic.json
Validating example-sns-email-topic.json...
cfn-validate-template:  Malformed input-Template format error: JSON not well-formed. (line 19, column
 27)
Usage:
cfn-validate-template
       [--template-file  value ] [--template-url  value ]  [General Options]
For more information and a full list of options, run "cfn-validate-template --help"
FAILED: example-sns-email-topic.json


The 'FAILED: example-sns-email-topic.json' line is displayed in a less welcome red, the dialog box pops up again and we know that our last change was incorrect. While this isn't quite as nice as having vim run the validate in the background and take you directly to the erroring line, it's a lot easier to plumb into your tool chain and gives you 80% of the benefit for very little effort. For completeness we'll reverse our last edit to fix the template.



# sed command to fix the template

$ sed -i -e 's/Ref :/"Ref" :/' example-sns-email-topic.json

# in the term running Guard we see -

10:22:42 - INFO - Validating: example-sns-email-topic.json
Validating example-sns-email-topic.json...
PARAMETERS  AutoScaleSNSTopic  test@example.org  false  Email address for notifications.


One last config option worth noting is ':all_on_start => false' from the Guardfile. If this is set to true then, as you'd expect from the name, all CloudFormation templates that match the watch will be validated when Guard starts. I find the validates quite slow, and I often only dip into a couple of templates, so I leave this off. If you spend more focused time working on nothing but templates then setting it to 'true' gives you a nice early warning in case someone checked in a broken template. Although your git hooks shouldn't allow this anyway. But that's a different post.
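As a taster of that different post, the hook logic is small enough to sketch here. Everything in this snippet is hypothetical (names, structure) - it just filters staged files down to JSON templates and collects the ones a validator rejects; in a real pre-commit hook the validator would shell out to cfn-validate-template:

```ruby
# Hypothetical pre-commit helper: given a list of staged file names
# and a validator callable, return the JSON templates that fail.
def invalid_templates(staged_files, validator)
  staged_files
    .select { |f| f.end_with?('.json') }
    .reject { |f| validator.call(f) }
end

# In a real .git/hooks/pre-commit you might wire it up like:
#   staged    = `git diff --cached --name-only --diff-filter=ACM`.split("\n")
#   validator = ->(f) { system('cfn-validate-template', '--template-file', f) }
#   exit 1 unless invalid_templates(staged, validator).empty?
```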

After reading through the validate errors from a couple of days' work, it seems my most common issue is continuation commas. It's just a shame that CloudFormation doesn't allow trailing commas everywhere.
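You can see the failure mode with any standards-compliant JSON parser; Ruby's stdlib one rejects the trailing comma outright (this snippet is mine, not from the cfn tools):

```ruby
require 'json'

# A trailing comma after the last element is legal in many languages
# but not in JSON - the class of typo behind most of my validate errors.
valid   = '{ "Protocol": "email" }'
invalid = '{ "Protocol": "email", }'

JSON.parse(valid) # parses fine

begin
  JSON.parse(invalid)
  puts 'parsed'
rescue JSON::ParserError
  puts 'rejected: trailing comma'
end
```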

Posted: 2014/03/01 10:44 | /tools | Permanent link to this entry



Copyright © 2000-2013 Dean Wilson :: RSS Feed