FreeBSD-update vs bind99-base

My master nameserver runs BIND 9.9, so I can do DNSSEC easily. I’ve installed from ports, but used the REPLACE_BASE option so that it overwrites the BIND 9.8.3 install included in the base system. That way I don’t have to worry about having multiple versions of the same command on different systems.

I patch this system via freebsd-update. After applying the latest security patches, I got the following email:

The following files will be updated as part of updating to 9.1-RELEASE-p3:
/usr/bin/dig
/usr/bin/host
/usr/bin/nslookup
/usr/bin/nsupdate
/usr/sbin/ddns-confgen
/usr/sbin/dnssec-dsfromkey
/usr/sbin/dnssec-keyfromlabel
/usr/sbin/dnssec-keygen
/usr/sbin/dnssec-revoke
/usr/sbin/dnssec-settime
/usr/sbin/dnssec-signzone
/usr/sbin/lwresd
/usr/sbin/named
/usr/sbin/named-checkconf
/usr/sbin/named-checkzone
/usr/sbin/named-compilezone
/usr/sbin/named-journalprint
/usr/sbin/rndc-confgen

I don’t want freebsd-update to patch these files. I also don’t want to get an email every day telling me that I need to patch them. I know I don’t need to patch them.

The solution? Tell freebsd-update to ignore these files with the IgnorePaths directive in /etc/freebsd-update.conf. I copied the list of files from the email and added IgnorePaths before them.

...
IgnorePaths /usr/bin/dig
IgnorePaths /usr/bin/host
IgnorePaths /usr/bin/nslookup
IgnorePaths /usr/bin/nsupdate
IgnorePaths /usr/sbin/ddns-confgen
IgnorePaths /usr/sbin/dnssec-dsfromkey
IgnorePaths /usr/sbin/dnssec-keyfromlabel
IgnorePaths /usr/sbin/dnssec-keygen
IgnorePaths /usr/sbin/dnssec-revoke
IgnorePaths /usr/sbin/dnssec-settime
IgnorePaths /usr/sbin/dnssec-signzone
IgnorePaths /usr/sbin/lwresd
IgnorePaths /usr/sbin/named
IgnorePaths /usr/sbin/named-checkconf
IgnorePaths /usr/sbin/named-checkzone
IgnorePaths /usr/sbin/named-compilezone
IgnorePaths /usr/sbin/named-journalprint
IgnorePaths /usr/sbin/rndc-confgen
...
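Rather than pasting and editing each path by hand, a sed one-liner can generate the IgnorePaths lines from a saved copy of the email. A sketch, assuming the email was saved to a file (the name update-mail.txt is hypothetical):

```shell
# A saved copy of the freebsd-update email (the file name
# update-mail.txt is a hypothetical example).
cat > update-mail.txt <<'EOF'
/usr/bin/dig
/usr/sbin/named
EOF
# Prefix every /usr path with IgnorePaths; append the output
# to /etc/freebsd-update.conf.
sed -n 's|^/usr|IgnorePaths /usr|p' update-mail.txt
```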

The complication here is that I must watch out for BIND security advisories, rather than just trusting in the update process. But that’s normal.

Basic Ansible Playbooks

Ansible is a tool for managing servers en masse, much like Puppet or CFEngine. Ansible has a shallower learning curve than either of those systems, however, and it’s idempotent. How do I know it has a shallower learning curve? Because I learned enough of it to do actual useful work in only a couple of hours.

And before you reach for a dictionary, “idempotent” means that you can run the same script against your servers repeatedly and the end result will be the same. You can run an Ansible script (or playbook) against a group of servers, take note of those that fail, modify the script, and run it again against the same group; Ansible checks whether each server needs a change before making it, so only the servers that need the change will get it.

Why would this ever happen? Maybe a datacenter is cut off by a network issue, or an LDAP server chokes, or gremlins invade a server, or a script fails because an intruder has hacked the server and this is your early warning. Your management tools need to deal with all of these.

For example, I have an Ansible playbook that uploads a new PF configuration file and reloads the PF rules. Ansible compares the existing PF configuration to one being distributed, and if the file hasn’t changed, doesn’t reload the rules. This isn’t a huge deal for PF, but for some applications it’s vital.
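The same check-before-change pattern in miniature, as a shell sketch (the file name and setting are hypothetical): append a line only if it is missing, so running the snippet a second time changes nothing.

```shell
# Add the setting only if it isn't already there; safe to re-run.
grep -qx 'PermitRootLogin no' sshd_config 2>/dev/null \
    || echo 'PermitRootLogin no' >> sshd_config
```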

Another nice feature of Ansible is that it uses only SSH and Python. Most Unixes acquire Python as an application dependency somewhere along the way, and it’s small enough that I have no real objection to installing it on the servers that lack it. Puppet and CFEngine both require dedicated agent software, so with either of them some kind of agent winds up on the managed machine anyway.

The biggest problem I had with Ansible was with playbooks. There’s a whole bunch of playbook documentation, and Ansible ships with sample playbooks, but they’re written somewhat like man pages, for people who already have some clue about the topic. So here are a couple really rudimentary Ansible playbooks, with explanations.

---
- hosts: pf
  user: ansible
  sudo: yes
  tasks:
  - name: copy pf.mgmt.conf to servers
    action: copy src=/home/ansible/freebsd/etc/pf.mgmt.conf 
      dest=/etc/pf.mgmt.conf owner=root group=wheel mode=0644
    notify:
      - reload pf

  handlers:
    - name: reload pf
      action: shell /sbin/pfctl -f /etc/pf.conf

Ansible playbooks are written in YAML, originally short for Yet Another Markup Language. I concede that my first thought on hearing the words “yet another markup language” is “that statement needs some obscenities between the second and third word.” But XML would be way overkill for Ansible. (And the YAML folks have changed their name to a different acronym, YAML Ain’t Markup Language, trying to escape the stigma of being yet another bleeping bleepety-bleep markup language.)

All YAML files start with a triple dash. They are space-sensitive — don’t use tabs, only spaces.

At the top level (no indents), we have the triple dash and a dash:

---
- hosts: pf

The leading hyphen basically means “new thing here,” as far as I can tell.

At the second level of configuration, indented two spaces, we have five sections: hosts, user, sudo, tasks, and handlers.

The hosts statement gives the name of a group of hosts. Ansible has an easily understood hosts file. This playbook applies to a group of hosts called “pf.”
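For reference, a minimal sketch of what an inventory defining that group might contain (the host names are invented; the real file usually lives in /etc/ansible/hosts):

```shell
# Sketch of an Ansible inventory defining the "pf" group,
# written to a local file here for illustration.
cat > hosts <<'EOF'
[pf]
fw1.example.com
fw2.example.com
EOF
```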

The user definition tells ansible which user to use. Ansible should SSH into the target servers as the user “ansible.”

The sudo statement tells ansible to use sudo to perform this command. My ansible user on each host has sudo privileges, but needs a password. We’ll get an opportunity to enter the password when we run the playbook. (Ansible could log into the target server as root and avoid the need for sudo, or let the ansible user have sudo without a password, but the thought of doing either makes my spleen threaten to leap up my gullet and block my windpipe, so I don’t.)

The tasks section is where things get interesting. We actually do stuff here. I define a third level of indentation (four spaces) after the tasks statement, and start each task with a dash.

Our first task has a name, “copy pf.mgmt.conf to servers.”

The action that follows uses the Ansible copy module. I define a source file, a destination file, and set the owner and permissions.

The notify statement tells the task to activate the handler named “reload pf.” If the action changes a target system, the action triggers the handler. If the action doesn’t change anything, the handler is not triggered.

We then have the handlers section. It’s at the same indent level as tasks, sudo, user, and hosts, so it’s a major section. There’s one handler, “reload pf.” It performs one action: it fires up a shell and runs a command.

Taken as a whole, this playbook copies a file to all the servers in the pf group and, if the file changed, reloads the PF rules. The file pf.mgmt.conf contains the IP addresses of my management hosts, as I discussed elsewhere.

Now let’s look at a slightly more complex playbook that does much the same thing for my Linux hosts.

---
- hosts: linux-internal
  user: ansible
  sudo: yes
  tasks:
  - name: copy iptables.mgmt.conf to servers
    action: copy src=/home/ansible/linux/etc/iptables.mgmt.conf 
      dest=/etc/iptables.mgmt.conf owner=root group=root mode=0644
    notify:
      - reload ipset mgmt
  - name: copy iptables.rules to servers
    action: copy src=/home/ansible/linux/etc/solus.iptables.rules 
      dest=/etc/iptables.rules owner=root group=root mode=0644
    notify:
      - reload iptables

  handlers:
    - name: reload ipset mgmt
      action: shell /usr/sbin/ipset restore -! < /etc/iptables.mgmt.conf
    - name: reload iptables
      action: shell /sbin/iptables-restore -! < /etc/iptables.rules

This playbook updates the firewall rules on my Linux hosts. These CentOS hosts are a little simpler in that they all share a common function (virtualization). They can have a common iptables ruleset as well as a common list of management addresses. I talk about how I use ipsets, and why the rules are set up this way, elsewhere. But the important thing is:

  • This is a single procedure, so it's one playbook.
  • It updates two separate files.
  • Changing each file runs a separate command.

    So, if the iptables.rules file changes, Ansible runs iptables-restore. If iptables.mgmt.conf changes, Ansible runs ipset.

    To use these playbooks, I log in as the ansible user on the ansible server and run:

    $ ansible-playbook -K playbook-file-name.yml

    The -K tells ansible to ask for the sudo password. If your ansible user doesn't need a sudo password, skip it. (But beware your spleen.)

    Ansible will log onto every host in the group, check the files, update them if needed, and run the handler commands if it updates the files.

    Ansible has many more modules than just copying files and running commands. It can assemble files from variables, install packages, and more. But a few small playbooks will get you started, and even the basic step of managing servers' firewall rules en masse will save you enough time to figure out the other modules.
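    For instance, a hypothetical playbook using Ansible's yum module to make sure a package is present on the CentOS group (the package name here is just an example):

```yaml
---
- hosts: linux-internal
  user: ansible
  sudo: yes
  tasks:
  - name: make sure tcpdump is installed
    action: yum name=tcpdump state=installed
```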

    I have no doubt that Puppet and CFEngine have serious use cases and environments where they're the best choice. What my network is most short on is sysadmin brainpower, however, and Ansible is a good fit for my feeble brain.

iptables and ipsets

    I’m dragging my work environment from “artisan system administration” to mass-managed servers. Part of this is rationalizing, updating, and centralizing management of packet filter rules on individual hosts. Like many environments, I have a list of “management IP addresses” with unlimited access to every host. Managing this is trivial on a BSD machine, thanks to pf.conf’s ability to include an outside file — you upload the new file of management addresses and run pfctl to read it. A PF rules file looks something like this:

    ext_if="em0"
    include "/etc/pf.mgmt.conf"
    ...
    pass in on $ext_if proto icmp from any to any
    #mgmt networks can talk to this host on any service
    pass in on $ext_if from <mgmt> to any
    ...

    The file pf.mgmt.conf looks like this:

    table <mgmt> const { 192.0.2.0/24, 198.51.100.128/25 }

    When I add new management addresses I copy pf.mgmt.conf to each machine, run pfctl -f /etc/pf.conf, and the new addresses can connect.

    But surely there’s some similar function on a Linux box?

    To complicate matters further, our environment includes both Ubuntu and CentOS machines. (Why? Because we don’t run operating systems, we run applications, and applications get picky about what they run on.) Each distribution has its own way of saving and restoring iptables rules. I want to use the same method for both operating systems. What we’ve used is a single rules file, /etc/iptables.rules, read by iptables-restore at boot. We specifically don’t want to trust a copy of the packet filter rules saved by the local machine, as problems can persist across reboots. The current iptables.rules looks something like this:
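    The boot-time hook itself is one line; a sketch assuming an rc.local-style startup script (the exact mechanism differs between Ubuntu and CentOS):

```shell
# In an rc.local-style boot script: load the saved ruleset.
/sbin/iptables-restore < /etc/iptables.rules
```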

    *filter
    #mgmt addrs
    -A INPUT -s 192.0.2.0/24 -i eth0 -j ACCEPT
    -A INPUT -s 198.51.100.128/25 -i eth0 -j ACCEPT
    #keep state
    -A INPUT -p tcp -m state --state ESTABLISHED -j ACCEPT
    -A OUTPUT -p tcp -m state --state NEW,ESTABLISHED -j ACCEPT
    -A INPUT -p udp -m state --state ESTABLISHED -j ACCEPT
    -A OUTPUT -p udp -m state --state NEW,ESTABLISHED -j ACCEPT
    #local stuff here
    ...
    #permit ICMP
    -A INPUT -p icmp -j ACCEPT
    -A OUTPUT -p icmp -j ACCEPT
    -A INPUT -i eth0 -j DROP
    COMMIT

    I don’t want to change /etc/iptables.rules for each machine at this point. They all vary slightly. (One day the machines will be classified by roles, but we’re in an intermediate stage right now.) Instead, I want to have the list of management addresses in a separate file. I want to copy the new file to the server, run a command, and have the new list of management addresses be live.

    The ipset facility seems to be the way to do this. Let’s find out.

    On my crashbox, I’ll create an ipset. I’m using an ipset of type nethash, because it takes CIDR blocks rather than individual IP addresses. The ipset is called mgmt, just like the management addresses on my BSD machines.

    # ipset create mgmt nethash

    It returns silently. Did it create the ipset?

    # ipset list
    Name: mgmt
    Type: hash:net
    Header: family inet hashsize 1024 maxelem 65536
    Size in memory: 16760
    References: 0
    Members:

    OK, it’s in memory. Now add some addresses.

    # ipset add mgmt 192.0.2.0/24
    # ipset add mgmt 198.51.100.128/25

    Are those addresses really in the set? Let’s ask again.

    # ipset list mgmt
    Name: mgmt
    ...
    Members:
    192.0.2.0/24
    198.51.100.128/25

    Now, export this to a file.

    # ipset save mgmt > iptables.mgmt.conf

    I use the file iptables.mgmt.conf to mirror pf.mgmt.conf. That file should contain something like this:

    create mgmt hash:net family inet hashsize 1024 maxelem 65536
    add mgmt 192.0.2.0/24
    add mgmt 198.22.63.128/25
    add mgmt 198.51.100.128/25

    Can I restore the ipset from the file? Destroy the set.

    # ipset destroy mgmt
    # ipset list

    It’s gone. Now to restore it from the file.

    # ipset restore < iptables.mgmt.conf
    # ipset list
    ...

    All my rules are there.

    Now, let’s teach iptables how to use an ipset. Rather than defining addresses, we use the -m set option.

    # iptables -A INPUT -i eth0 -m set --match-set mgmt src -j ACCEPT

    In the iptables.rules file, it would look like this.

    *filter
    #allow mgmt IPs
    -A INPUT -i eth0 -m set --match-set mgmt src -j ACCEPT
    ...

    When you have several management networks, this is certainly much shorter and easier to read.

    When you update the iptables.mgmt.conf file, read it in with ipset restore. You must use the -! flag. This tells ipset to ignore that the ipset already exists, and restore the contents of the ipset from the file.

    # ipset restore -! < iptables.mgmt.conf

    I can now copy this file to my hosts, run a command, and the packet filter rules are updated, without touching my main rules file.

    I don’t recall anyone using a symbol as a command-line flag like this before, but I actually kind of like this one. “I said DO IT, damn you!”

“DNSSEC Mastery” now complete, ebook version available!

    You can now get the complete DNSSEC Mastery: Securing the Domain Name System with BIND at Amazon, Barnes & Noble, Smashwords, and my personal ebookstore. It should (hopefully) trickle through to iTunes & such before long.

    This book was a real education to write. Hopefully it will help improve the state of DNS security across the industry. Various DNS experts have expressed approval of the book, and here’s hoping that the wider world will as well.

    The book has now gone on to physical production. Hopefully I will have a proof by BSDCan. We might even auction it off at the end of the con, as the OpenBSD auction did so well.

    Review copies are available for folks who regularly review books.

    If you should find an error in the ebook, please let me know. Converting an OpenOffice document to umpteen different formats for an incredibly wide variety of devices has its risks. I no longer have a PalmPilot to test that format, for example, and I have a specific model of Kindle that’s probably not the same as yours.

    Thanks to everyone for their support.

Diagnosing “+Limiting icmp unreach response from…” with tcpdump

    Anyone who has run a FreeBSD server for any length of time has seen these messages in their daily security emails. (You do read those, right?)

    +Limiting icmp unreach response from 296 to 200 packets/sec
    +Limiting icmp unreach response from 337 to 200 packets/sec
    +Limiting icmp unreach response from 318 to 200 packets/sec
    +Limiting icmp unreach response from 535 to 200 packets/sec
    +Limiting icmp unreach response from 332 to 200 packets/sec
    +Limiting icmp unreach response from 328 to 200 packets/sec

    Way back in the Bronze Age, I learned that this meant “someone is port scanning.” The usual advice is to disable these messages by setting the sysctl net.inet.icmp.icmplim to 0, which silences them. I’m guilty of giving that advice myself.

    What it really means is that something is sending your server UDP packets on a port that isn’t open. This could be a port scanner. It could also be a host legitimately trying to reach your host for a service it thinks you provide, or a service your host should be providing but isn’t.

    Normally I’d go to my netflow collector and run a few commands to track down where these packets are coming from. In this case, though, the problem host is the netflow collector itself, and I’m somewhat leery of using a tool to diagnose itself. An initial check shows that everything on the collector is running, so let’s see if it’s still happening with tcpdump.

    I could run tcpdump -i em0 icmp and see all the ICMP traffic, but that’s inelegant. I don’t want to miss the traffic I’m looking for amidst a torrent of ICMP. And why have my brain filter traffic when tcpdump will do it for me?

    The first step is to identify exactly what we’re looking for. ICMP isn’t a monolithic protocol. Where TCP and UDP have ports, ICMP has types and codes. You can find a friendly list of types and codes online, or my readers can look in my Network Flow Analysis.

    ICMP’s “port unreachable” message is type 3, code 3. Unlike TCP ports, the type and code are separate fields. Type 3 is “destination unreachable,” while the code indicates exactly what is unreachable — the port, the network, whatever. Type is ICMP field 0, while code is ICMP field 1. Tcpdump lets you filter on these just like the more familiar port numbers. Enclose more complicated filter expressions in quotes.

    # tcpdump -ni em0 "icmp[0]=3 and icmp[1]=3"
    10:01:03.287063 IP 10.250.250.10 > 192.0.2.214: ICMP 10.250.250.10 udp port 11022 unreachable, length 36
    10:01:03.331388 IP 10.250.250.10 > 192.0.2.214: ICMP 10.250.250.10 udp port 11022 unreachable, length 36
    10:01:03.356052 IP 10.250.250.10 > 192.0.2.214: ICMP 10.250.250.10 udp port 11022 unreachable, length 36
    10:01:03.378256 IP 10.250.250.10 > 192.0.2.214: ICMP 10.250.250.10 udp port 11022 unreachable, length 36
    10:01:03.411046 IP 10.250.250.10 > 192.0.2.214: ICMP 10.250.250.10 udp port 11022 unreachable, length 36
    10:01:03.437458 IP 10.250.250.10 > 192.0.2.214: ICMP 10.250.250.10 udp port 11022 unreachable, length 36
    10:01:03.457858 IP 10.250.250.10 > 192.0.2.214: ICMP 10.250.250.10 udp port 11022 unreachable, length 36

    The host 192.0.2.214 is constantly trying to reach my collector on port 11022. 192.0.2.214 is my busiest border router.

    That’s a router. This is a netflow collector. Maybe it’s netflow traffic? Let’s see.

    # tcpdump -ni em0 -T cnfp ip host 192.0.2.214 and udp port 11022
    192.0.2.214.11022 > 10.250.250.10.11022: NetFlow v5, 1897575.270 uptime, 1363184870.488773000, #1285199613, 30 recs
    started 1897571.570, last 1897571.570
    ...

    Yep. Either my router or my collector is misconfigured. And my monitoring system is misconfigured, because it should have caught that no collector process was listening on that port. Or I should have noticed that I wasn’t actually getting any flow files from the collector running on another port.

    Now to go back in time, find that young punk who wrote Absolute BSD, and whup his butt.

Any Firefox add-on people out there?

    I’ve had really good luck asking random people to do work for me, so I’m going to try it again.

    RFC6698 defines the DANE protocol for attaching information to DNSSEC-secured DNS. Notably, you can validate SSL certificates via DNS. This is a game-changer. The key here is the TLSA DNS record.

    Web browsers don’t support this yet, but there is the Extended DNSSEC Validator Firefox add-on at os3sec.com, with source at github.

    If you have the newest version of the add-on installed, sites like https://www.dnssec.michaelwlucas.com/ show up as secure. There is no “invalid certificate” warning. That’s because I’ve published a TLSA record for this zone, telling the browser that the certificate with fingerprint such-and-such on this port on this host is trustworthy. (Install the plugin from the github source, not from the xpi on the site.)
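    For the curious, a sketch of how the certificate fingerprint in a TLSA record can be computed. This generates a throwaway self-signed certificate purely for illustration; a real record would hash the certificate actually served on that host and port.

```shell
# Make a throwaway self-signed certificate, then compute the
# SHA-256 digest of its DER form -- the payload of a "3 0 1"
# TLSA record (usage 3, selector 0, matching type 1).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=www.example.com" -keyout key.pem -out cert.pem 2>/dev/null
openssl x509 -in cert.pem -outform DER | sha256sum | awk '{print $1}'
# The published record would then look like:
# _443._tcp.www.example.com. IN TLSA 3 0 1 <that digest>
```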

    The interesting thing about this add-on is that it uses libunbound to perform DNSSEC validation at the client. Your local DNS servers don’t need to support DNSSEC. All you need to hack on this plugin is a desktop.

    But the add-on doesn’t support BSD — it’s Linux, MS, and Mac only. The add-on authors don’t have time for BSD support, but gave me a couple hints on how to implement it. The plugin can’t find libunbound on BSD.

    That seems like it would be easy to do. I’m capable of building the add-on from source, but I’ve never programmed any add-ons before. The source code looks like it’s easy to hack, but my efforts all segfaulted Firefox. Obviously, I need more expertise.

    So, if you know anything about Firefox add-ons, or have ever wanted to hack on them, this is your chance.

    DANE and TLSA are the killer applications for DNSSEC. The ability to validate cryptographic certificates via DNS is a game changer. (Cliche, but true.) You can have separate certificates for separate ports on a host. With DANE, a self-signed certificate no longer needs to be a disadvantage.

DNSSEC Tech Reviewers Wanted

    Last night, I finished the first draft of DNSSEC Mastery. If you’re one of my fans who wants to see the existing work, a pre-pub version is now available on LeanPub.

    Now I’m looking for people familiar with DNSSEC on BIND to read the book and tell me where I’ve screwed up.

    This book is for an established DNS administrator who wants to deploy DNSSEC. I assume you know what named.conf is, why you don’t put PTR records in a forward zone, and so on. The goal is not to get 100% of the people 100% there, but to get 90% of the people 100% there and ground the other 10% so that they can identify their own rough edges. (The idea is roughly similar to my SSH Mastery or Cisco Routers for the Desperate.)

    The contents are:

      1. Introducing DNSSEC
      2. Cryptography and DNSSEC
      3. How DNSSEC changes DNS
      4. DNSSEC Resolver
      5. dig and DNSSEC
      6. Securing Zone Transfers
      7. KSKs and ZSKs
      8. Signing Zones
      9. Debugging
      10. Key Rotation
      11. Delegations and Islands of Trust
      12. DNSSEC for Data Distribution (needs better title, it’s SSHFP and TLSA)

    Many of these chapters are short. Chapter 10 is not. The writing is rough, especially near the end.

    So, if you know DNSSEC, and you’re interested in spreading the DNSSEC gospel, and you have enough time to read something about half the length of a short paperback novel, contact me via email at mwlucas at my domain.

    I’d need any comments by 15 March. I plan to revise that week and get the book into copyedit, so it can be out for BSDCan. Barring any really appalling revelations from the reviewers, that is. I’d rather the book be late than wrong.

“DNSSec Mastery” in-progress version available

    By popular demand (mainly on Twitter) I’ve made the work-in-progress version of DNSSec Mastery available on LeanPub.

    This is an experiment. If it works well, I’ll do it again. If not… I won’t.

    Why would you be interested?

      • It’s cheap. I intend to sell the finished ebook for $9.99. The work-in-progress version is $7.99. I will continue to update the manuscript on LeanPub until it’s finished.
      • Once the manuscript is complete, I’ll raise the LeanPub price to $9.99 to match other vendors.
      • If you want to provide feedback on an incomplete book, this is your chance.

    Why would I do this?

      • I can usually get subject matter experts to review a book. I have a real problem with getting non-experts to review a book before publication, however. Non-expert feedback is important — those are the people most likely to catch when I explain something poorly, as opposed to the experts who already understand what I’m writing about. I can only handle so much feedback, so I wind up picking a select group of volunteers based on their apparent enthusiasm for the book. Measuring by the results, either I am a poor judge of enthusiasm or enthusiasm is the wrong measurement. This method might work better.
      • I get paid earlier. That’s always nice.
      • I want feedback from people trying to use it.

      Do I care what you do? No.

      In the long run, sales made via Amazon, B&N, Smashwords, or other ebookstores are better for my career. I’m expecting that only my most hardcore fans will buy the book early. If you’re a hardcore fan, but want to wait for the release of an actual book to buy it, I don’t blame you. I wouldn’t buy an incomplete book.

      But it’s here if you want it.

Configuration Automation with RANCID

    One of the most tedious tasks any network admin faces is replicating changes across multiple devices. I recently stood up new RADIUS servers, and needed to tell all of my routers and switches about it. Rather than logging into each router by hand and pasting in the new configuration, I decided to try RANCID’s ability to run arbitrary commands on your routers.

    Using this method requires that the commands you run don’t generate interactive output. A reload command won’t work, because it prompts you for confirmation. But adding configurations to a Cisco router doesn’t.

    I assume you have a working RANCID install.

    Start by creating a text file containing your commands. RANCID expects to log on and log off of the router. All you need to provide is what happens between those two points.

    conf t
    radius-server host 192.0.2.2 auth-port 1812 acct-port 1813 key BuyBooksFromLucas
    radius-server host 192.0.2.14 auth-port 1812 acct-port 1813 key BuyBooksFromLucas
    exit
    wr

    Now use the device-specific login command, specify the file containing your commands with -x, and list every router you want to run the commands on.

    # clogin -x newradius.conf router-1 router-2 router-3

    RANCID logs into each device and adds the new configuration. You can watch the process in action, and catch any problems.

    I found that the Mikrotik login script had problems when the script changed the prompt. I’ve reported this to the RANCID mailing list, and expect it will be patched shortly. But fortunately, that’s pretty easy to work around in a Mikrotik, by giving the entire command in one line, as shown below.

    /radius add accounting-backup=no accounting-port=1813 address=192.0.2.2 authentication-port=1812 called-id="" disabled=no domain="" realm="" secret=BuyBooksFromLucas service=login timeout=300ms
    /radius add accounting-backup=no accounting-port=1813 address=192.0.2.14 authentication-port=1812 called-id="" disabled=no domain="" realm="" secret=BuyBooksFromLucas service=login timeout=300ms

    Having RANCID run commands for you is much more accurate and less tedious than doing it yourself. And this way, if you make a mistake in your commands, at least it’ll be consistent across all your devices.

DNSSec and DLV on current BIND

    One of the problems with the Internet is that old stuff hangs around forever. Configuring DNSSec validation on BIND 9.8 and newer is a lot easier than many of the popular tutorials would lead you to suspect. It’s so simple that I wonder why it isn’t the default.

    options {
    ...
    dnssec-enable yes;
    dnssec-validation auto;
    dnssec-lookaside auto;
    };

    This automatically loads the root zone and dlv.isc.org trust anchors distributed with the BIND source code, verifies them, and uses them to validate all signed responses.

    One trick is that named needs to write journal files for these keys. If you used the directory option to set a directory writable by named, you’re all set. If, like me, you have a whole bunch of directories that you don’t want named to write to, and you don’t want named to write to /etc/namedb, you can set a directory for these named-managed keys.

    /var/log/messages will have errors like this:

    Jan 7 11:59:02 ns11 named[78246]: the working directory is not writable

    These errors are harmless. named cannot write /etc/namedb, but it will log an error when it tries to write a file. The errors you need to worry about look like this:

    Jan 5 13:07:05 ns11 named[68130]: dumping master file: tmp-XI0K5awL6p: open: permission denied

    These indicate that named is trying to record important data, and can’t. Giving named a directory where it can write these managed key files will solve this problem.

    options {
    ...
    managed-keys-directory "managed-keys";
    };

    Make that directory owned by the user running named. Restart (not reload) named, and you’re all set.

    One of the rules of public key encryption is that your private key must be private. Most of us have had private keys stolen at some time. (If you haven’t, either you’re not using public key crypto, or you didn’t notice.) What happens if someone steals the private key for a DNSSec trust anchor?

    There’s a protocol for automatically updating the trust anchor private keys, documented in RFC 5011. It’s not perfect — you’ll still want to keep half an eye out for announcements of a trust anchor compromise.

    On a related note, I’d be interested in hearing from anyone who knows how the root zone private key is protected, just so I can put a sentence or two in the DNSSec book.