It seems that ntpd has turned into the latest DDoS amplifier. I run a lot of servers, and most of them use the standard NTP client. I need to verify that none of my servers can be used for DDoS amplification. To do this, I need to give all the clients a standard NTP configuration, pointing at my personal NTP servers.
While my internal addresses need access to port 123 on my servers, the public doesn’t. And I occasionally add internal addresses. Automating PF and NTP configuration via Ansible will simplify my life in the future.
I’ve used Ansible templates to configure services before, but packet filters are a little different. Packet filtering rules involve lots of information about the local host, such as interface names and the various system roles. It’s entirely possible to write an Ansible template that expresses your PF ruleset; it just took a little work.
First, I define an Ansible group for the NTP servers (I have similar groups for other server duties). The time servers run FreeBSD 9.2.
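A trimmed-down sketch of the relevant part of the hosts file (the hostnames here are placeholders, but the group names are the ones the PF template checks later):

[ntp-servers]
ntp1.example.com
ntp2.example.com

[dns]
ns1.example.com

[tftpd]
tftp.example.com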
Here’s the playbook, with added NTP.
---
- hosts: ntp-servers
  user: ansible
  sudo: yes
  tasks:
  - name: enable ntpdate
    action: command sysrc ntpdate_enable=yes
  - name: enable ntpdate server
    action: command sysrc ntpdate_hosts=pool.ntp.org
  - name: enable ntpd
    action: command sysrc ntpd_enable=yes
  - name: copy ntp.server.conf to servers
    action: copy src=/home/ansible/freebsd/etc/ntp.server.conf dest=/etc/ntp.conf owner=root group=wheel mode=0644
    notify:
    - restart ntpd
  - include: tasks/pf-compile.yml

  handlers:
  - include: handlers/restarts.yml
Simple enough, no?
Except there’s nothing in here about the packet filter. Or restarting ntp.
These functions are pretty common, so I’ve moved them to separate files. I might need to rebuild the packet filter rules for any number of playbooks, after all.
The file tasks/pf-compile.yml looks like this.
---
#build pf.conf from template
- name: configure firewall
  template: src=/home/ansible/freebsd/etc/pf.conf.j2 dest=/etc/pf.conf owner=0 group=0 mode=0444 validate='/sbin/pfctl -nf %s'
  notify:
  - reload pf
This task uses a Jinja2 template to build a pf.conf specifically for this host, copies it to the host, validates its syntax, puts it in place, and triggers a PF reload. Always validate your files before deploying them. Ansible doesn’t prevent mistakes, but rather allows you to deploy mistakes faster than ever.
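The validate option hands Ansible the same check you can run by hand: %s is replaced with a temporary copy of the templated file, and the file only lands in /etc/pf.conf if the check passes. The manual equivalent is simply:

# parse the ruleset without loading it; any syntax error aborts the deployment
/sbin/pfctl -nf /etc/pf.conf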
Similarly, I’ve split the “restart services” handlers off into the file handlers/restarts.yml. Here are the relevant bits.
---
#restart assorted services
- name: restart ntpd
  service: name=ntpd state=restarted

- name: reload pf
  action: shell /sbin/pfctl -f /etc/pf.conf
So, where is this firewall template? That’s probably what dragged you here.
#{{ ansible_managed }}
#$Id: pf.conf.j2,v 1.2 2014/01/16 16:10:54 mwlucas Exp $

ext_if="{{ ansible_default_ipv4.device }}"

include "/etc/pf.mgmt.conf"
include "/etc/pf.ournets.conf"

set block-policy return
set loginterface $ext_if
set skip on lo0

scrub in all
block in all
pass in on $ext_if proto icmp all
pass in on $ext_if proto icmp6 all

#this host may initiate anything
pass out on $ext_if from any to any

#mgmt networks can talk to this host on any service
pass in on $ext_if from <mgmt> to any

#Allowed services, in port order
{% if inventory_hostname in groups['dns'] %}
#DNS access
pass in on $ext_if proto {tcp, udp} from any to any port 53
{% endif %}

{% if inventory_hostname in groups['tftpd'] %}
#allow tftp from the world
pass in on $ext_if proto udp from any to any port 69
{% endif %}

{% if inventory_hostname in groups['ntp-servers'] %}
#allow time from our networks
pass in on $ext_if proto udp from <ournets> to any port 123
{% endif %}

#end of services
The first bit of new (to me) trickery in this is getting the interface name. I use an Ansible-provided variable for this. Get the complete list of Ansible-provided variables for a host by running ansible -m setup hostname. The variable ansible_default_ipv4.device contains the network interface name. (If your host has multiple network-facing interfaces, you’ll need to modify this.)
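If you only want that one fact, the setup module also takes a filter argument. A quick sketch, with a placeholder hostname:

# dump only the default-IPv4 facts for one host
ansible ntp1.example.com -m setup -a 'filter=ansible_default_ipv4'

The device key in the resulting dictionary is the value the template plugs into ext_if.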
This PF ruleset pulls in two external files, one containing a list of management addresses and one containing the complete list of my internal networks.
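I won’t reproduce those files here, but the shape is simple: each defines a PF table that the rules above reference. A sketch with placeholder addresses (the real files list my actual networks):

# /etc/pf.mgmt.conf
table <mgmt> persist { 192.0.2.0/24 }

# /etc/pf.ournets.conf
table <ournets> persist { 192.0.2.0/24, 198.51.100.0/24 }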
I allow access from my management networks, allow ICMP, default block, all the routine packet filter stuff. The next interesting bit is the allowed services. I check for the host’s presence in a group, and if it’s there, I add a rule to permit the access that service needs.
One detail gave me trouble: I had to use inventory_hostname rather than ansible_fqdn or ansible_hostname to check group membership. I manage systems in several domains, and many of them have one name in our management systems and another in DNS. I put machines in the Ansible hosts file by their fully qualified domain name. ansible_fqdn returns the hostname given in reverse DNS, while inventory_hostname returns the name exactly as it appears in the hosts file. If ansible_fqdn doesn’t match the name in the hosts file, the group comparison fails. Using inventory_hostname gave me one consistent set of hostnames for comparisons.
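If you want to see what each variable holds for a given machine, a throwaway debug task (my own sketch, not part of the real playbook) makes the difference obvious:

- name: show the hostname variables
  debug: msg="inventory={{ inventory_hostname }} fqdn={{ ansible_fqdn }} hostname={{ ansible_hostname }}"

Drop it into any play that gathers facts and compare the three values.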
So now I can easily deploy a secure NTP configuration to my servers. When I have to deploy some other service that requires updating the packet filter, I can include the same task file. And the handlers are now similarly reusable.
Configuring the clients across several different operating systems will probably require Ansible roles, however. I’d best get on that next…