
Tue, 01 Sep 2009

Ubuntu Developer Week

All this week there are Ubuntu Developer IRC workshops. While I don't actually use Ubuntu at work it's always a good idea to keep up with the new and shiny, and as an extra incentive a lot of the technical details mentioned also apply to Debian, which I do have to admin on a daily basis.

While the IRC logs don't go into huge detail, the two sessions I've looked at (getting started and packaging perl modules) each contain enough useful links to make them worth my time.

Posted: 2009/09/01 04:23 | /linux | Permanent link to this entry

Thu, 27 Aug 2009

@reboot - explaining simple cron magic

In a conversation with Stuart the subject of cron timings came up, and after a brief discussion @reboot reared its ugly head. While most people know that you can use the special 'event' syntax to trigger cronjobs at specific times I'd guess a very small number of them actually know how it works. For example, does cron rerun @reboot jobs when the service is restarted? (Hint - no it doesn't.)
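For reference, here's the full set of special schedule strings from crontab(5), as a comment fragment you could keep near your crontabs (the /etc/cron.d/ example line is just illustrative):

```shell
# Special schedules from crontab(5) and their fixed-time equivalents:
#   @reboot            run once, after cron starts at boot
#   @yearly/@annually  0 0 1 1 *
#   @monthly           0 0 1 * *
#   @weekly            0 0 * * 0
#   @daily/@midnight   0 0 * * *
#   @hourly            0 * * * *
#
# e.g. in a system crontab under /etc/cron.d/ (these take a user field):
#   @reboot  dwilson  /home/dwilson/log-cron
```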

After a quick discussion on how cron knew the machine hadn't really rebooted we had a short list of possible mechanisms - tracking uptime, watching run level states, calling the init script only on certain levels... the only problem is that all of those had obvious issues that stopped them being a good choice. So I dug a little deeper.

First I needed a canary cronjob that would show me when @reboot was actually triggered successfully and a cronjob to run it -

$ vi /home/dwilson/log-cron
#!/bin/sh
logger "Cron ran me"

$ chmod a+rx /home/dwilson/log-cron

# and then the crontab
$ sudo vi /etc/cron.d/logme
@reboot dwilson /home/dwilson/log-cron

Once I had this in place I ran through the possible triggers - changing run levels, stopping and starting the service and changing the uptime were the big three - and none of them worked. Instead syslog just gained lines like '(CRON) INFO (Skipping @reboot jobs -- not system startup)' each time I restarted cron.

After a quick run under strace I gave up under the sheer weight of output and decided to look at the code. As my test machine was Debian I added a source line for apt and pulled down the package's source.

echo "deb-src http://ftp.uk.debian.org/debian etch main contrib non-free " > /etc/apt/sources.list.d/source.list
apt-get update

mkdir cron-src
cd cron-src

apt-get source cron

Now it was time to do some digging and get some line numbers to look at. In the cron directory I ran some greps to get an overview of possible code locations:

cd cron-3.0pl1/

grep -n -i reboot -r .
grep -i -r WHEN_REBOOT *
grep -n -i '@reboot' -r .

// ... snip ... //
cron.c:284:#define REBOOT_FILE "/var/run/crond.reboot"
cron.c:286:     if (access(REBOOT_FILE, F_OK) == 0) {
cron.c:293:     if ((rbfd = creat(REBOOT_FILE, S_IRUSR&S_IWUSR)) < 0) {
// ... snip ... //

# ls -alh /var/run/crond.reboot
---------- 1 root root 0 2008-11-07 11:07 /var/run/crond.reboot

Looking at the three interesting lines above we see how cron, on Debian at least, knows if it's been a real reboot. It uses the access function to check REBOOT_FILE. Nosing around a little more I also found the creat line and saw that the file had no permissions. The delving was nearly over but there was one thing I didn't get - how did the file get removed?
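The logic boils down to something like this shell sketch - a simplification of the C above, using a temp path instead of the real /var/run/crond.reboot:

```shell
#!/bin/sh
# Simplified model of cron's startup check: the flag file survives
# daemon restarts but is cleaned from /var/run at boot.
REBOOT_FILE=$(mktemp -u)   # stand-in for /var/run/crond.reboot

check_reboot() {
    if [ -e "$REBOOT_FILE" ]; then
        echo "skip"                # flag present: just a restart
    else
        : > "$REBOOT_FILE"         # create the flag...
        chmod 000 "$REBOOT_FILE"   # ...with no permissions, like the creat()
        echo "run"                 # first start since boot: run @reboot jobs
    fi
}

first=$(check_reboot)    # fresh boot: flag absent, @reboot jobs run
second=$(check_reboot)   # daemon restart: flag present, jobs skipped
echo "$first $second"    # -> run skip
rm -f "$REBOOT_FILE"
```

Run it twice in a row yourself and you get the same behaviour the real daemon shows: one "run", then "skip" forever after until the flag file is cleaned away.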

A quick look at the /var/run Filesystem Hierarchy Standard page cleared this up - 'Files under this directory must be cleared (removed or truncated as appropriate) at the beginning of the boot process.' - which Debian does in /etc/init.d/bootclean. Why is it done at boot rather than shutdown? So the file still gets cleaned out even if the system crashed.

With a much better idea how this should work, and just to double check, I stopped crond, deleted the /var/run/crond.reboot file by hand and turned crond back on. And my cronjob logged a little line. Not much feedback for all those commands but it was oddly worth it.

Posted: 2009/08/27 21:52 | /linux | Permanent link to this entry

Fri, 11 Jul 2008

Bootstrapping Kickstart for Free

Having spent a (very) little time over the last month fiddling with an existing FAI setup (which is used to install Debian machines) one amazingly insightful feature of Kickstart (a provisioning tool for Redhat and Fedora) has earned a place in my heart - /root/anaconda-ks.cfg.

It might not seem like much, but by having the interactive installer produce a working config that can be reused, the barrier to entry is seriously lowered and makes experimentation much easier. If you want to add a feature to your new machines then just add it to the test install and crib from the config file. Excellent.

Posted: 2008/07/11 22:22 | /linux | Permanent link to this entry

Wed, 09 Jul 2008

Debian and Monolithic Networking Configs - Why?

When it comes to config files the Debian people and I agree on basic principles - we're both keen on applications having a directory where you drop multiple config files to allow for easier deployment and management. Even if they do sometimes seem a little... over zealous (Debian developers? Never!) and you end up with the split Exim4 configs.

So one of the little quirks that I'd like an answer to is, why does Debian have a single big interfaces file and no support for a directory of files? Something like the eth0-cfg files Red Hat uses. It'd make adding additional interfaces a lot easier and would let me use puppet to manage them without breaking down in tears every time I try to write a native type for it.
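For comparison, here's roughly what the two approaches look like (the addresses are made up):

```shell
# Red Hat: one self-contained file per interface, e.g.
# /etc/sysconfig/network-scripts/ifcfg-eth0:
#   DEVICE=eth0
#   BOOTPROTO=static
#   IPADDR=192.168.0.10
#   NETMASK=255.255.255.0
#   ONBOOT=yes
#
# Debian: every interface stanza lives in the single /etc/network/interfaces:
#   auto eth0
#   iface eth0 inet static
#       address 192.168.0.10
#       netmask 255.255.255.0
```

Dropping a new ifcfg-eth1 file in place is trivially scriptable; safely editing a shared stanza file from automation is not.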

Posted: 2008/07/09 18:43 | /linux | Permanent link to this entry

Tue, 15 Apr 2008

Install Package Dependencies After a DPKG Package Install

I've had a couple of people mail asking about my "frigging apt" comment in a previous post (the last paragraph). It's actually as simple as the comment implies. Here's an example -

  wget http://ftp.dk.debian.org/debian/pool/contrib/v/vmware-package/vmware-package_0.22_i386.deb
  dpkg -i vmware-package_0.22_i386.deb
  apt-get install -f

  # get prompted about installing lots of packages

I don't have any really well thought out reasons to not like this approach - in the few cases I've tried it I've found it to work; it just feels a little... icky.

Posted: 2008/04/15 16:44 | /linux | Permanent link to this entry

Rebuilding Debian Packages - Debian Delvings

Ever wanted a Debian package to be just a -little- bit different? Here's how. While most of the software we're pulling in from Debian is fine for our uses there are a couple of applications that we'd like to differ slightly from the stock versions. Rather than go away and package them ourselves (which would require a lot more packaging skill and time than I currently have - improving those skills is one of the reasons I'm doing this series) it's possible to download a source version of a Debian package, make a small amendment and then repackage it for personal use. In this example we'll have a look at how to do this with Nginx, a very fast and stripped down webserver I'm evaluating for a couple of services.

Firstly let's be nosey and have a look at the Nginx source package. Assuming you have a deb-src line in your /etc/apt/sources.list file, getting a copy of the source package is as simple as apt-get source nginx; it's worth noting that this command doesn't need to be run as root. While we're here let's pull down the dependencies required to rebuild nginx once we've made our modifications - sudo apt-get build-dep nginx (note the build-dep argument) - and then the other packages we need that aren't specific to this package - sudo apt-get install build-essential devscripts fakeroot (sudo or as root).

It's a bit of a tangent (and an annoyance) but the package of Nginx in Debian stable (nginx 0.4.13-2) has a Nginx package removal bug that stops you from removing the package without hacking the init script. I only point this out as when I was trying to learn how to repackage I couldn't remove it and so assumed I'd broken something.

Before we make any amendments to the package let's try and rebuild it. cd into the versioned source directory (nginx-0.4.13 in our example) and run debuild -us -uc. A few screens of text later and we've got a package - ../nginx_0.4.13-2_i386.deb.

Now we know we can round trip let's be intrusive and make some amendments to the package. We'll add an optional module (http_realip_module) to the nginx package (you should be able to do something like this to add SSL support, for example). In this case it's a small, one line, change -

 $ cd nginx-0.4.13 # this is the etch version when I wrote this

 $ vi debian/rules # add '--with-http_realip_module' to the ./configure line.

 # log the change - DEBFULLNAME/DEBEMAIL tell dch who made it
 $ DEBFULLNAME="Dean Wilson" DEBEMAIL=dwilson@example.com dch --nmu "Rebuild of package and addition of the http_realip_module module"

 $ vi debian/changelog # tweak the entry as required.

 # build the new package with the extra module
 $ debuild -us -uc

 $ ls -alh ../nginx_0.4.13-2.1_i386.deb

  # if you have a repo drop it in now and regenerate the index for clients to see it.

dch simplifies adding changelog entries to debian/changelog. The two variables before the command tell it who's made the change. In our case we're logging that we added the new module and that this package is a "Non maintainer upload" (NMU). This is important for future users of the package - in essence we're taking responsibility for the repack away from the original maintainer, who knows nothing of the changes we've made, and making ourselves the correct contact point. And if you don't do this lintian will complain when you build the package.
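For the record, the entry dch --nmu produces looks roughly like this (the exact date stamp will obviously differ):

```shell
# debian/changelog after the dch --nmu run (illustrative):
#   nginx (0.4.13-2.1) unstable; urgency=low
#
#     * Non-maintainer upload.
#     * Rebuild of package and addition of the http_realip_module module
#
#    -- Dean Wilson <dwilson@example.com>  Tue, 15 Apr 2008 08:11:00 +0100
```

Note the version bump from 0.4.13-2 to 0.4.13-2.1 - the extra .1 component is the NMU convention, which is why the built package in the next command has that name.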

Closing notes - you may need to pin or hold the package to stop it getting overwritten when future (maintainer) upgrades become available - but that's outside the scope of this post.

Bonus Debian Nginx package Evil. Bad idea but useful for some testing I was doing.

Posted: 2008/04/15 08:11 | /linux | Permanent link to this entry

Creating Virtual Debian Packages - Debian Delving

Now that we have a local apt repository we can start to fill it with our own custom goodness. One of the first things I'm going to need is virtual packages. At work we'll be using them to pull reusable components together into a number of full applications (and using the apt mechanisms to force upgrades when a component changes) and to group Nagios plugins (we'll be packaging some of those in a later blog post) into sensible sections; we're going to have a lot of Nagios plugins.

Unfortunately I can't talk about those in detail on a public website that not enough people read, so we'll need a more contrived example. Although I understand the reasons behind it (and appreciate them on servers) the fact that Debian doesn't have a perl package that INCLUDES THE CORE PERLDOC bites me every now and again so we'll make our own perl package that pulls in a handful of the existing packages.

Tangent - I think the three packages should be perl-base, perl-modules and perl-doc which anal people like me can pick for their systems. The perl package should be a virtual one that includes (and forces the install of) all three of those so you get a more intuitive setup for people that don't need huge amounts of customisation. This could also be done with Ruby to hide some of the bat shit splitting of its libraries under Debian. But that's all personal opinion; and now I've got virtual packages and an apt repo I can do what I want. Bwhahahaha.

While I was trying to work out how to do this I found the equivs package and it's got one of the best descriptions I've ever read:

$ apt-cache show equivs | less
... snip ...
Description: Circumvent Debian package dependencies
 This package provides a tool to create Debian packages that only
 contain dependency information.
 One use for this is to create a meta-package: a package whose sole
 purpose is to declare dependencies and conflicts on other packages so
 that these will be automatically installed, upgraded, or removed.
... snip ...

In addition to being a very clear statement of intent this is obviously the command for us. So how do we use it?

# on the build machine, which also hosts our sample repo.
$ apt-get install equivs

$ equivs-control perl-full

# edit the file to suit - example linked to from the next paragraph
vi perl-full

# build the meta package
equivs-build perl-full

# copy it into our repo, which lives under $repo_base
cp perl-full_1.0_all.deb $repo_base/dists/pool/main/

# rebuild the index
pushd $repo_base && apt-ftparchive generate $repo_base/apt-ftparchive.conf

Here's an inline version of the perl-full example control file that I used.

Section: misc
Priority: optional
Standards-Version: 3.6.2

Package: perl-full
Version: 1.0
Maintainer: Dean Wilson <dean.wilson@example.com>
Depends: perl,perl-doc,libdbi-perl
Architecture: all
Description: Perl install that includes perl-doc and libdbi-perl packages.
 I'm fed up of installing perl and not getting perl-doc. This virtual
 package fixes that for my local systems and any other that can reach my
 apt repo.
 I don't recommend anyone else using this as it was written as an
 experiment and to voice how much having to install perl-doc bugs me.
 Oh, and this isn't a good example of a description. Look at real Debian
 packages for much better ones.

On the client you can do an apt-get update ; apt-get install -s perl-full and you should get a list of the packages that will be installed, which on a system that's not already got a perl install should now include perl-doc.

While this example is a bit forced (and very opinionated) the technique and technology behind it are useful tools that are worth knowing.

Posted: 2008/04/15 08:05 | /linux | Permanent link to this entry

Creating a Personal Apt Repository - Debian Delving

Ever wanted your own apt-repo? If not hit the back button about.... now.

My new employers are going to be very Debian heavy on the systems side of the project I'm on so I'm currently in the process of sharpening my Debian specific skills (I've always tried to avoid Unix solutions that were tied to a single OS or distro but in this case we might as well do it The Debian Way).

One of the first things to come up was the need for a local apt repository - for internal packages, third party ones we wanted to use, some backported from other Debian releases and even some rebuilt ones. In this, the first of who knows how many Debian focused blog posts, I'll be describing my first pass at setting up a repo for holding these and how I'm using it. I should probably say that this is all from bits and pieces I've picked up across the web so be careful before you blindly follow my advice.

First of all let's set a victory condition - I want to be able to apt-get install puppet and have it install the version currently from testing (Lenny as I write this) under a stable system (Etch) without any pinning, apt magic etc. Before we start, run apt-get install -s puppet | grep 'Inst puppet' and see what version you get back - I get 'Inst puppet (0.20.1-1 Debian:4.0r3/stable)' so it'll be pretty easy for me to see if this works; when it does the version number will change.

We're going to serve packages over http so you'll need a webserver (apt-get install apache2 in my experiments) and a couple of other debian packages (apt-utils and bzip2). Rather than go in to a line by line description grab my simple make-repo.sh Debian Repository creation script and my sample apt-ftparchive.conf.

Once you've got local copies of both, had a quick look at them (you wouldn't run arbitrary shell scripts you've downloaded from the net would you?) and made any config adjustments run make-repo.sh and watch as it adds directories to your system. You now have a skeleton repo on your machine; but without any packages it's about as much use as a West Ham fan at a cup final. So let's add a package.

$ cd dists/pool/main/ # this is under your repo base

# grab the puppet version from lenny (at time of writing)
$ wget http://ftp.de.debian.org/debian/pool/main/p/puppet/puppet_0.24.4-3_all.deb

# generate the index files
$ cd /var/www/debian # my repo base
$ apt-ftparchive generate /var/www/debian/apt-ftparchive.conf

On the client side edit /etc/apt/sources.list and add a line that looks like deb http://apt-repo.example.com/debian/ etch main - replacing apt-repo.example.com with your web host. Do an apt-get update and you should get a couple of 'Ign' warnings but all should work. If not, then get debugging. Now for the moment of truth, on the client -

$ apt-get install -s puppet | grep 'Inst puppet'
Inst puppet (0.24.4-3 apt-repo.example.com)

You now have a local apt repository with a sensible version of puppet ready for use by all your Etch hosts. It's also a good building block for fulfilling some of our other requirements, but we'll get to those in other blog posts. Now go and add the process and website checks to Nagios.

Notes - while you can pull the packages down and dpkg -i them one by one on each machine this requires you to copy them to each host and (this is the annoying part) install the dependencies by hand (yes, you can frig this with apt-get -f and let it try to do this for you but that's horrible). I should also say that if I'm doing any of this horribly wrong then feel free to mail me with corrections. I'd love to know how the Debian pros do it. (don't send me emails with the word "Alone").

Posted: 2008/04/15 08:01 | /linux | Permanent link to this entry

Wed, 21 Mar 2007

The Wonderful World of Kernel Module Removing

All I wanted to do was stop the IPv6 kernel module from starting on boot. It shouldn't be hard, it shouldn't be difficult and despite the early hour of the day, it shouldn't require me to google.

But it seems that it does. As a starting point, the Planete Beranger Disable IPv6 post shows the many different ways to solve the problem. Unfortunately it seems that the Debian Etch install I'm testing on doesn't like:

  # /etc/modprobe.d/00local
  alias net-pf-10 off
  alias ipv6 off

But it has no problems with a blacklist ipv6 - apart from a number of cases where that might not work, and you'll then have to rely on an install ipv6 /bin/true. GAH! This hasn't only bitten me; Planete Beranger rants in more detail in the Messy modprobe.conf post.

Amusingly I discovered this while writing a small check to show which of our servers have IPv6 enabled (we don't use it), but rather than a one off run it'll now have to be a periodic check. It's going to be one of THOSE days.
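The check itself can stay simple - something along these lines (assumes a Linux /proc; note that on kernels with IPv6 built in rather than modular, module blacklisting won't help and this file will still be present):

```shell
#!/bin/sh
# Report whether this host currently has IPv6 enabled by looking for
# the per-interface IPv6 address file the kernel exposes.
if [ -e /proc/net/if_inet6 ]; then
    state="enabled"
else
    state="disabled"
fi
echo "IPv6 $state"
```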

Posted: 2007/03/21 10:09 | /linux | Permanent link to this entry

Wed, 14 Mar 2007

Linux Laptop Mode and /proc block_dump

Over at the top-like command for disk io thread on GLLUG Kostas Georgiou mentioned a Linux /proc file entry I'd never heard of before, and after some digging it looks like it could be useful when debugging certain IO problems. Assuming you have 2.6.6 or above - or a vendor patched kernel.

When you activate the option with an echo 1 > /proc/sys/vm/block_dump as root (read the article and consider turning syslog off first) the kernel starts to log which processes are accessing which disk blocks and inodes. Below is a small chunk of its output on my test VMware system:

Mar 14 19:16:44 localhost kernel: sshd(2659): dirtied inode 388836 (sshd) on sda1
Mar 14 19:16:44 localhost kernel: sshd(2659): dirtied inode 533395 (libwrap.so.0) on sda1
Mar 14 19:17:23 localhost kernel: cat(2672): dirtied inode 888805 (.bash_history) on sda1
Mar 14 19:17:46 localhost kernel: kjournald(913): WRITE block 14016 on sda1
Mar 14 19:17:48 localhost kernel: pdflush(104): WRITE block 12487672 on sda1

The short version is: 'dirtied' means changed but not yet written to disk, pdflush will write the rows out later, and READs are what you'd expect. A brute force way to trace an inode to a file path is with find: find / -inum num. The longer explanation can be found in LJ's Extending Battery Life with Laptop Mode under the "Spinup Debugging" heading.
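That find trick is easy to sanity check on a scratch file before you point it at a real block_dump entry (the temp directory below is just for the demo):

```shell
#!/bin/sh
# Create a scratch file, grab its inode number, then locate it by
# inode alone - the same way you'd chase a block_dump line to a path.
tmpdir=$(mktemp -d)
touch "$tmpdir/canary"
inum=$(stat -c %i "$tmpdir/canary")   # GNU stat: print the inode number
found=$(find "$tmpdir" -inum "$inum")
echo "$found"                         # prints the path to the canary file
```

On a real system you'd scope the find to the filesystem the log line names (sda1 above) rather than searching from /, as inode numbers are only unique per filesystem.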

It's no DTrace (and no, SystemTap isn't as good as DTrace) but it is neat and a decent addition to the debugging toolbox.

Posted: 2007/03/14 18:18 | /linux | Permanent link to this entry

Wed, 16 Mar 2005

New Debian Installer

I've spent a lot of time today installing Debian boxes and testing build documents. After using both the old and new installers (Sarge installer RC2) I've come to a bundle of conclusions. The new installer hides a lot of the complexity from most users (use expert mode to get it back). It has a better screen for per-partition options (although it does make you do each one on a separate screen) and it flows a lot better.

On the downside we have the need to set each partition's options on a separate screen (I mentioned it twice as it's a pain), the lack of a "wipe all data but save the partition table" option and, most importantly, in expert mode IT PROMPTS FOR PCCARD EVERY SECOND BLOODY SCREEN. And it drove me nuts. After asking me almost a dozen times about PCCARD it then picked up that the machine wasn't a laptop and asked if it could remove the PCMCIA packages. GAH!

Posted: 2005/03/16 00:27 | /linux | Permanent link to this entry

Thu, 03 Feb 2005

dpkg-statoverride -- Debian Delvings (1)

While I've spent a fair amount of time running around on Linux it's typically been in a mixed Unix environment (Linux, Solaris and HP-UX mostly) so my tool-set was composed of portable applications and scripts. In my current job I'm working with an almost entirely Debian server environment; the few Redhat machines are living on borrowed time as the bosses want them gone.

While this may put a crimp on my cross-platform skills it does give me the chance to delve deeper into the "Debian way", and to be fair it looks like it's got a lot of neat tools.

The first of these is dpkg-statoverride. This simple little tool allows you to modify the permissions and ownership of files as they are installed via the package system. At the back-end the /var/lib/dpkg/statoverride file contains a set of mappings, each one holding <user> <group> <mode> and <file>. When the given file is installed the permissions are changed as per the line. You can add and remove entries either through the dpkg-statoverride command or by editing the file by hand (not recommended).

To get the maximum benefit from this you'd want to keep a centralised list of files you want changed and then distribute it to all the machines; I'll be covering some of the details on the hows and whys of doing this in future posts about cfengine and pkgsync. All I need to do now is work out how to run the command against already existing files that don't live in the packaging system...
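A quick sketch of the mapping format and the commands that manage it (the sendmail entry is purely illustrative; and dpkg-statoverride's --update flag, which applies the change to a file that already exists on disk, may well be the answer to that last question):

```shell
#!/bin/sh
# Each statoverride mapping is a "<user> <group> <mode> <file>" line;
# pick the fields apart the way dpkg does (the sample entry is made up).
line='root mail 2755 /usr/sbin/sendmail'
set -- $line
echo "user=$1 group=$2 mode=$3 file=$4"
# -> user=root group=mail mode=2755 file=/usr/sbin/sendmail

# Managed via the tool rather than editing /var/lib/dpkg/statoverride:
#   dpkg-statoverride --add root mail 2755 /usr/sbin/sendmail
#   dpkg-statoverride --list
#   dpkg-statoverride --update --add ...   # also chowns/chmods the file now
```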

Posted: 2005/02/03 22:57 | /linux | Permanent link to this entry


Copyright © 2000-2014 Dean Wilson