Ansible is a tool for managing servers en masse, much like Puppet or CFEngine. Ansible has a shallower learning curve than either of those systems, however, and it’s idempotent. How do I know it has a shallower learning curve? Because I learned enough of it to do actual useful work in only a couple of hours.
And before you reach for a dictionary, “idempotent” means that you can run the same script against your servers any number of times and the end result will be the same. You can run an Ansible script (or playbook) against a group of servers, take note of those that fail, modify the script, and run it again against the same group of servers; Ansible will verify that a server actually needs the change before making it. Only the servers that need the change will get it.
Why would this ever happen? Maybe a datacenter is cut off by a network issue, or an LDAP server chokes, or gremlins invade a server, or a script fails because an intruder has hacked the server and this is your early warning. Your management tools need to deal with all of these.
For example, I have an Ansible playbook that uploads a new PF configuration file and reloads the PF rules. Ansible compares the existing PF configuration to the one being distributed, and if the file hasn’t changed, it doesn’t reload the rules. This isn’t a huge deal for PF, but for some applications it’s vital.
Another nice feature of Ansible is that it uses only SSH and Python. Most Unixes acquire Python as an application dependency somewhere along the way, and it’s small enough that I have no real objection to installing it on the servers that lack it. Both Puppet and CFEngine require dedicated agent software, so with those tools some kind of extra software winds up on the managed machine anyway.
The biggest problem I had with Ansible was with playbooks. There’s a whole bunch of playbook documentation, and Ansible ships with sample playbooks, but they’re written somewhat like man pages: for people who already have some clue about the topic. So here are a couple of really rudimentary Ansible playbooks, with explanations.
---
- hosts: pf
  user: ansible
  sudo: yes
  tasks:
    - name: copy pf.mgmt.conf to servers
      action: copy src=/home/ansible/freebsd/etc/pf.mgmt.conf dest=/etc/pf.mgmt.conf owner=root group=wheel mode=0644
      notify:
        - reload pf
  handlers:
    - name: reload pf
      action: shell /sbin/pfctl -f /etc/pf.conf
Ansible playbooks are written in YAML, Yet Another Markup Language. I concede that my first thought on hearing the words “yet another markup language” is “that statement needs some obscenities between the second and third word.” But XML would be way overkill for Ansible. (And the YAML folks have since changed what the acronym stands for to YAML Ain’t Markup Language, trying to escape the stigma of being yet another bleeping bleepety-bleep markup language.)
All YAML files start with a triple dash. They are space-sensitive — don’t use tabs, only spaces.
At the top level (no indents), we have the triple dash and a dash:
---
- hosts: pf
The leading hyphen means “new list entry starts here.” YAML marks each item in a list with a dash, and each play in a playbook is one such item.
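For example, a playbook with two plays is just a YAML list with two entries at the top level (this second play is hypothetical, shown only to illustrate the structure):

---
- hosts: pf
  user: ansible
- hosts: linux-internal
  user: ansible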
At the second level of configuration, indented two spaces, we have five sections: hosts, user, sudo, tasks, and handlers.
The hosts statement gives the name of a group of hosts. Ansible has an easily understood hosts file. This playbook applies to a group of hosts called “pf.”
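For example, the hosts file (/etc/ansible/hosts by default) defines groups with a bracketed header followed by the machines in that group. The hostnames here are made up; my real file lists my actual servers:

[pf]
wwwtest.example.com
mail.example.com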
The user definition tells ansible which user to use. Ansible should SSH into the target servers as the user “ansible.”
The sudo statement tells ansible to use sudo to perform this command. My ansible user on each host has sudo privileges, but needs a password. We’ll get an opportunity to enter the password when we run the playbook. (Ansible could log into the target server as root and avoid the need for sudo, or let the ansible user have sudo without a password, but the thought of doing either makes my spleen threaten to leap up my gullet and block my windpipe, so I don’t.)
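For reference, the sudoers entry behind this is nothing special; the key point is the absence of a NOPASSWD tag, so sudo still demands the ansible user’s password. A sketch (tighten the command list to taste):

# ansible may run anything via sudo, but must give its password
ansible ALL=(ALL) ALL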
The tasks section is where things get interesting. We actually do stuff here. I define a third level of indentation (four spaces) after the tasks statement, and start each task with a dash.
Our first task has a name, “copy pf.mgmt.conf to servers.”
The action that follows uses the Ansible copy module. I define a source file, a destination file, and set the owner and permissions.
The notify statement tells the task to activate the handler named “reload pf.” If the action changes a target system, the action triggers the handler. If the action doesn’t change anything, the handler is not triggered.
We then have the handlers section. It’s at the same indent level as tasks, sudo, user, and hosts, so it’s a major section. There’s one handler, “reload pf.” It performs one action: it fires up a shell and runs a command.
Taken as a whole, this playbook copies a file to all the servers in the pf group and, if the file changed, reloads the PF rules. The file pf.mgmt.conf contains the IP addresses of my management hosts, as I discussed elsewhere.
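I won’t reproduce my real configuration here, but one way to wire up a file like this is PF’s table-from-file feature: pf.mgmt.conf holds one address per line, and pf.conf loads it into a table. A sketch, with example addresses and a hypothetical rule:

# in pf.conf
table <mgmt> persist file "/etc/pf.mgmt.conf"
pass in proto tcp from <mgmt> to any port ssh

# in /etc/pf.mgmt.conf
192.0.2.10
192.0.2.11

Reloading pf.conf re-reads the file, so the handler above picks up new addresses automatically.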
Now let’s look at a slightly more complex playbook that does the same sort of thing on Linux.
---
- hosts: linux-internal
  user: ansible
  sudo: yes
  tasks:
    - name: copy iptables.mgmt.conf to servers
      action: copy src=/home/ansible/linux/etc/iptables.mgmt.conf dest=/etc/iptables.mgmt.conf owner=root group=root mode=0644
      notify:
        - reload ipset mgmt
    - name: copy iptables.rules to servers
      action: copy src=/home/ansible/linux/etc/solus.iptables.rules dest=/etc/iptables.rules owner=root group=root mode=0644
      notify:
        - reload iptables
  handlers:
    - name: reload ipset mgmt
      action: shell /usr/sbin/ipset restore -! < /etc/iptables.mgmt.conf
    - name: reload iptables
      action: shell /sbin/iptables-restore < /etc/iptables.rules
This playbook updates the firewall rules on my Linux hosts. These CentOS hosts are a little simpler in that they all share a common function (virtualization). They can have a common iptables ruleset as well as a common list of management addresses. I talk about how I use ipsets, and why the rules are set up this way, elsewhere. But the important thing is:
if the iptables.rules file changes, Ansible runs iptables-restore, and if iptables.mgmt.conf changes, Ansible runs ipset restore.
To use these playbooks, I log in as the ansible user on the ansible server and run:
$ ansible-playbook -K playbook-file-name.yml
The -K tells ansible to ask for the sudo password. If your ansible user doesn’t need a sudo password, skip it. (But beware your spleen.)
Ansible will log onto every host in the group, check the files, update them if needed, and run the handler commands if it updates the files.
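If you want to test a playbook on one machine before unleashing it on the whole group, ansible-playbook can restrict the run to a subset of the hosts (the hostname here is an example):

$ ansible-playbook -K --limit wwwtest.example.com playbook-file-name.yml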
Ansible has many more modules than just copying files and running commands. It can assemble files from variables, install packages, and more. But a few small playbooks will get you started, and even the basic step of managing servers’ firewall rules en masse will save you enough time to figure out the other modules.
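For instance, here’s a minimal sketch of package installation with the yum module, in the same style as the playbooks above (the package is just an example):

---
- hosts: linux-internal
  user: ansible
  sudo: yes
  tasks:
    - name: make sure rsync is installed
      action: yum name=rsync state=present

Like the copy module, this is idempotent: if the package is already installed, Ansible changes nothing.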
I have no doubt that Puppet and CFEngine have serious use cases and environments where they're the best choice. What my network is most short on is sysadmin brainpower, however, and Ansible is a good fit for my feeble brain.
Nice discussion Michael. Thanks. The “Do”s and “Don’t”s put a nice emphasis on the differences that lowers the learning curve a little further.
Got here via duck duck go for ansible playbooks and I am very glad I did.
Thank you!
I got really excited to give this a try for automating some tasks in our very hybrid unix environment. (We support HP-UX, Tru64, Solaris, and AIX systems.) I was very happy to see that this tool “does not require any agents to be installed on the target machines.” Unfortunately, that’s not completely true. It does not require any “agent” programs to be installed, but it DOES require that every target has python available. Since python is not provided as a standard piece of software by most major unix vendors, this suddenly presents a need to install software on those targets (and it doesn’t matter whether it is an agent or a programming language, it is still software that must be installed.) I can not do this as we have multiple customers and I can’t just push whatever software I like to their machines. This looks pretty cool, but the fact it has this requirement means I’ll be looking for something else to fit my needs.
In your example, what happens if your pf.conf has a typo that would render the host unreachable? I don’t see how (if?) the playbook is validating the pf rules before loading the new set.
A validation step would be useful, certainly. I wrote this before I really understood ansible validation.
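Today I’d look at the copy module’s validate option, which runs a check command against the new file before installing it. A sketch, assuming you’re distributing a complete pf.conf (pfctl -nf parses a ruleset without loading it):

    - name: copy pf.conf to servers
      action: copy src=/home/ansible/freebsd/etc/pf.conf dest=/etc/pf.conf owner=root group=wheel mode=0644 validate='/sbin/pfctl -nf %s'
      notify:
        - reload pf

That catches syntax errors, though not rules that are valid but lock you out.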
It’s really hard to write a packet filter validation system. I don’t think any management system can really prevent a well-aimed bullet from hitting your foot. Even experienced people make that mistake.
I work around that a different way. All of my filter rules, on all operating systems, have an “allow from ansiblehost” rule right before the final “drop/block all,” so even a botched ruleset still lets the Ansible host in to push a fix.