The OpenBSD IPSec kerfuffle

By now you’ve probably heard of the allegations Theo forwarded to the OpenBSD-tech mailing list about the FBI introducing back doors in early versions of the OpenBSD IPSec code.  I’d like to offer my opinion, in the spirit of the Christmas season:

“Bah, humbug!”

It’s possible, but unlikely.  Like me winning the lottery is unlikely.  I’d need to buy a ticket, and that isn’t going to happen any time soon.

The OpenBSD group examines every line of code that goes into their tree.  Any obvious back door would be caught.  Any subtle back door would be fragile — so subtle that it probably wouldn’t survive the intervening ten years of code churn and IPSec improvements.  Maybe someone has an appliance based on, say, OpenBSD 2.8 or 3.2, which could have contained the back door.  If true, we need to know about it.  But those users need to upgrade anyway.

And the FBI?  Nope, don’t believe it.  Ten years ago, the FBI was having lots of trouble understanding the Internet.  The NSA, maybe.

Bugs?  Sure, there are probably bugs.  I expect we’ll find some, now that many eyes have turned to the code.  Exploitable bugs?  Maybe.  But that’s not the same as a back door.

OpenBSD has claimed to be the best for many years.  That claim motivates people to take them down.  The claims have hopefully inspired many people to examine the current and historical IPSec stack.  Theo and company have done nothing to discourage such audits: they’ve even offered pointers on where to look.  If you’re a programmer looking to make a splash, you could do worse than to join in on auditing the code.  Finding the alleged back door would make your reputation.  And we can always use more IPSec hackers.

The real impact might be, as Jason Dixon points out, the cost in OpenBSD developer time.  You know that some of their committers are examining the IPSec code today, trying to find potential back doors.

designing a tech book

So, you’ve figured out some incremental advance in your own education that you think would make a good book.  Now you grab your keyboard, open a text file, and start typing.

Not so fast.

Just as a large programming project has a specification, a book has a design.  You can start churning out text just as you might churn out code, but eventually you’ll have to stop and think about what you’re doing.  Time spent designing your book beforehand will pay off in the actual writing of the book.  Here’s how I do it; other authors have their own methods, of course, and you should do whatever works for you.

When I consider writing a book, one of the first things I do is write down what I think should be in the book, without any detail whatsoever.  If I’m writing a book on a particular operating system, it should include installation instructions, guidance on the community around the operating system, the sysctl setting that bit me yesterday, upgrade instructions, and so on.  Everything that I think of, both large and small, trivial and vital.

Keep this list with you as you experiment and learn.  When you think of something else that should go in the book, add it to the list.  No detail is too small.  If your idea is strong enough and sufficiently broad to carry an entire book, you’ll eventually have 30-40 pages of “stuff.”

Take a couple hours and try to sort this list into broad categories.  Move details around from category to category.  See what facts they belong with.  Those categories will eventually become chapters.

Now comes the annoying bit.  Think about what the reader needs to know before he can understand one of these sections. A properly designed book can be read from beginning to end, each paragraph presenting the reader with new knowledge that is built on top of what he’s read before.  This order of information is vital to reader comprehension.

For example, look at Absolute FreeBSD.  In my original notes, I had a category called “Managing Disks.”  I had another category called “Security.”  The book needed to cover both topics.  But what order should they go in?

  • To use some advanced security tools, such as jails, the reader must be able to manage system disks.  Put the disk management section first.
  • The first thing a sysadmin needs to do is create a nonprivileged account for his routine work.  User management is definitely part of security.  The security section must go first.
  • Encrypting a disk partition is part of security… and part of disk management.  You need a decent understanding of both before you can manage an encrypted partition.
  • You don’t have to understand security to manage software RAID, so it’s irrelevant either way.
  • The reader needs both before he can realistically upgrade his system.

The solution:  split the disk category into Chapter 8 and Chapter 18, and split security into Chapter 7 and Chapter 9.  Repeat this process for every category in your list.  Some categories will be too small to make up an entire chapter, and you’ll find that you have to shuffle them into another chapter.  User management could have been its own chapter, but it was small enough that I added it to the first system security chapter.

Assess each category based on what the reader needs to know before he can understand it, and place it appropriately in order.  Spend some time on this; fixing ordering problems is easiest before you start writing.

Then arrange topics within categories.  My first security chapter included both user management and file flags.  Which one should go first?  Separate your notes into further piles within each category.

Now that you have a list of chapters, and a sample of stuff in each chapter, fill out your chapters.  Your long list might have said “add users” and “disable users.”  Now that you have both notes side-by-side, you can see that you should probably add “remove users.”  Read man pages on the topics.  Fill in stuff that you should cover now that you’re being methodical.

If you have to move chunks from one category to another, that’s fine.  Pick up the whole hierarchy and move it.

Eventually, your mass of points will become a design for your book.  Set it aside for a few days, and come back to it with fresh eyes.  Are any of the chapters far too short?  Combine them.  Do any early chapters require knowledge that you’ll cover in later chapters?  Rearrange them.  Did you miss any topics?  Add them.  Are any of these topics redundant, uninteresting, or otherwise bogus?  Cut them.

These days I do all this rearranging using OpenOffice and bullets.  That’s not ideal — I’d like to be able to expand and shrink chapters, and have checkboxes for each item so I could record when it was done.  I used yank for several years, but it no longer builds and I haven’t found anything equally flexible and simple.  If you have a comfortable tool, use it, but the point is, you don’t need much more than a text editor.

With careful thought, this will create a book outline that’s both useful to you and (comparatively) easy to write.

The Wikileaks/BSD connection

I was amused to discover the connection between Wikileaks and BSD.

Apparently Julian Assange hung around the BSD community up until ten years ago, and has a few entries in the NetBSD fortune files.  (Search for Julian Assange in the file, or just click on the next link for the best ones.)    He lived in the house where Greg Lehey grew up, although many years after Greg had moved on.  Greg was interviewed for a story in the Australian news. They botched it.

If you think about it, you’d realize the connection must go deeper than that.  We all know about Osama bin Lehey.  Apparently the house where Greg was raised has that effect on people.  I do believe that Lovecraft wrote a story about that… and it will bug me until I can remember which story that was.

writing tech books: write what you don’t know

This is the first of an irregular series on writing tech books. I got the idea from something Richard Bejtlich wrote in a review years ago.  Unfortunately I cannot find the cite, and spending the day exhaustively reading my old reviews is psychologically unhealthy, but he said something along the lines of “tech authors could do worse than see how Lucas does things.”  I believe that the average quality of writing in tech books is abominable.  Perhaps I can pull those standards up.  By the ear, if necessary.

So, where do you start to write a tech book?  Start by deciding what you want to write about.  One of the old cliches about writing is that you should “write what you know.”  In tech writing this is not only wrong, but actively harmful.

Writing about something you already understand, in a way that communicates your knowledge to the uninitiated, is hard.  Your brain contains a lot of information, and it’s all jumbled together, interconnected.  If you think your desk is messy, your brain is worse.  Teasing out what you know, and how you know it, with the necessary context to explain it to someone else, is hard.

Instead, I recommend writing about something you want to know about.

When I started writing PGP & GPG, I certainly wasn’t an encryption ninja.  I run BSD servers, but when I started writing the FreeBSD and OpenBSD books I didn’t possess the breadth of mastery necessary to write a book about either.  I learned as I wrote the books.  The books actually became structured learning — self-directed, self-designed, at my own pace, but highly effective.

As you learn the topic, take notes.  Write down what you must do to accomplish a task.  Use script(1) and screenshots.  Get a paper notebook, keep it by your computer, and scribble notes in it.  This gives you a student’s perspective.  As you learn, you’ll see how this new piece of knowledge attaches to the other knowledge in your head.  Notice those mental connections as they happen.  Those are things you should mention in your writing.  Part of your job as an author is to help the reader make those connections inside his own head.

Note that you should choose a topic that is an incremental advance on what you already know. You need a base of knowledge to learn from.  Trying to write on something that seems close to your skill set, but isn’t truly incremental, will fail.  Suppose I wanted to write a book on Perl.  I write Perl, but my code is appalling.  To write the book on Perl would require that I first become a programmer, then learn Perl.  This is not an incremental education.  If I want to write a book on, say, Tuvan throat singing, I would need considerably more lead time and a much higher budget.  (Plus completely deaf neighbors.)

By writing about the next step in your education, you’ll expand your own knowledge about the topic.  I might not have been a netflow expert when I started writing Network Flow Analysis, but I sure know a heck of a lot more about it than I did when I started.

Another information channel: Twitter

I’ve surrendered to last year’s (or was it the previous year’s?) information fad:  Twitter.

You’ll find me at:  http://twitter.com/#!/mwlauthor.  I believe that tweakers call that @mwlauthor?  Anyway, you can expect random crap there as I figure out what I’m doing with it.

I should say right now:  I don’t follow everyone who follows me.  I can’t do it with blogs, I can’t do it on Facebook, there’s no way I could do it on Yet Another Social Media Platform ™.  And the idea of constant interruptions and near-real-time interaction is, frankly, overwhelming.  Authors don’t scale with their readers, sorry… I still run under the Big Giant Lock.

I will click through and look at people as I have time, though.  It’s nice to be able to see who reads my stuff, and what sorts of things those people are interested in.

TechChannel interview published

The video interview I did last month is now available on-line.  It’s about NetFlow, and is based on the Network Flow Analysis book.

I can’t bring myself to watch it.

(Two posts in one day.  This can’t be good.)

UPDATE: No, it’s not good. Apparently, WordPress doesn’t show the links on the front page, even though it shows the complete article. You must click to the individual article to see the link to the interview. I’m sure there’s a perfectly good reason WP behaves this way, but it still feels bogus.

RANCID, Mikrotik, and SSH

I’m a big fan of RANCID.  While RANCID is best known as a management tool for automatically backing up Cisco configs, it also supports much other hardware, and is fairly easily extensible.  I’m responsible for several Mikrotik routers, and need to back up their configurations.  People have written scripts for Mikrotik support in RANCID… but they don’t work with SSH, only telnet.  And they don’t work if you run SSH on an unusual port.

After trials, errors, advice from Chris Falz, and more errors and trials, I found that the following RANCID configuration works.

add password YourRouter YourPasswordHere
add user YourRouter YourUsername+ct
add method YourRouter ssh
add sshcmd YourRouter {/usr/local/scripts/microtiklogin.sh}
add noenable YourRouter {1}

Adding +ct to your username turns off color.  Setting an SSH port in RANCID’s usual way didn’t work with the third-party mtlogin script, and the sshcmd variable doesn’t cope with spaces well, so I used an external SSH command script.  This script is just:

#!/bin/sh
# Force ssh onto the nonstandard port; quote "$@" so arguments survive intact.
exec ssh -p PortNumber "$@"

My Mikrotik configs are now automatically backed up over SSH.
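One detail in that wrapper deserves care: `$@` should be quoted as `"$@"`, or any argument containing whitespace gets re-split before reaching ssh.  A throwaway illustration (printargs and the helper functions here are hypothetical, standing in for the exec ssh line):

```shell
#!/bin/sh
# Show why the wrapper should use "$@" (quoted): unquoted $@ re-splits
# any argument containing whitespace.  printargs stands in for exec ssh.
printargs() {
    for a in "$@"; do printf '[%s]\n' "$a"; done
}
quoted()   { printargs "$@"; }   # passes arguments through intact
unquoted() { printargs $@; }     # word-splits each argument again
quoted   -p 2222 "host with spaces"   # prints 3 lines
unquoted -p 2222 "host with spaces"   # prints 5 lines: the hostname splits apart
```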

If you’re looking for a good Perl project, fixing the actual underlying mtlogin and mtrancid SSH functions would be appreciated.

Ubuntu server 10.04 LTS diskless filesystem

A diskless server needs a copy of the operating system files, served from an NFS server.  The Ubuntu docs have a general-purpose tutorial on diskless systems, which suggests copying the files from your NFS server.  My NFS servers are not Ubuntu boxes.  Also, I don’t want to copy from a live system; too many things can happen.  I want a set of Ubuntu server files that I can use to deploy a functional server in a known good state, that complies with the requirements of my environment.  And I need to script it, so I can boot and update my “golden image” server and easily reproduce the same file set. And I want all the routine changes taken care of automatically.

This problem isn’t hard, but I’ve spent a fair amount of time building and rebuilding diskless systems lately, so you get to hear about it.

Install an actual Ubuntu system.  I prefer to install on a virtual machine.  This will become your “golden image.”  When the Ubuntu installer asks for a machine profile, choose OpenSSH server.

  • apt-get update && apt-get upgrade
  • Install required software, such as emacs and tcsh, and configure it.
  • Install portmap and nfs-common.
  • Install and configure LDAP auth and sudo against LDAP
  • Install and configure ufw.  I’ve seen many attacks against Ubuntu boxes lately, and highly recommend very restrictive firewall rules.  Do not let the world talk to your Ubuntu servers!
  • Make a VM snapshot of your base image, so you can revert to this core functionality.
  • Install anything else required to make this a nice clean template for the purpose of this server.

Now NFS-mount a directory from another server onto the golden image’s /mnt, and tar up the server.

# cd /
# tar -cvpf /mnt/ubuntu1004.tar --one-file-system .

Wait.

The resulting tarball has a few problems.  I don’t want the diskless hosts to all have the same SSH keys, so those files need to be removed. Ubuntu caches the MAC address of attached NICs to maintain consistent interface names across reboots. This cached MAC address will be wrong for the diskless machine. The existing interface configuration will not work on a diskless machine (see below).  Finally, the fstab is wrong for any diskless machine.  The machine will get its hostname from DHCP, rather than from a file.  I therefore remove the troublesome files from the tarball.

# tar --delete -f /mnt/ubuntu1004.tar ./etc/ssh/ssh_host_rsa_key ./etc/ssh/ssh_host_rsa_key.pub ./etc/ssh/ssh_host_dsa_key ./etc/ssh/ssh_host_dsa_key.pub ./etc/udev/rules.d/70-persistent-net.rules ./etc/fstab ./etc/network/interfaces ./etc/hostname
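tar’s --delete mode rewrites the archive in place.  The effect is easy to check on a scratch archive (all names here are throwaway examples, not the real image):

```shell
#!/bin/sh
# Demonstrate tar --delete on a scratch archive; paths are examples only.
set -e
work=$(mktemp -d)
cd "$work"
mkdir -p etc/ssh
echo "secret" > etc/ssh/ssh_host_rsa_key
echo "hosts" > etc/hosts
tar -cf image.tar etc
tar --delete -f image.tar etc/ssh/ssh_host_rsa_key
tar -tf image.tar          # the key no longer appears in the listing
```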


The difficult file is /etc/network/interfaces.  I don’t want to use the server’s network configuration.  My test server boots from either DHCP or with a static IP, and neither will work for a diskless server.  A diskless server needs an /etc/network/interfaces like this:

auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual

I want to replace the existing ./etc/network/interfaces with one of my own choosing.  Tar won’t let you replace a file in an existing archive, but it will let you add another file of the same name.  I change to a config directory and add this file to my tarball.  Similarly, I need a blank etc/fstab.  I create a fake etc directory in another location, touch etc/fstab, and create a suitable etc/network/interfaces.

# tar --append -f /mnt/ubuntu1004.tar etc/network/interfaces etc/fstab
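The replace-by-append behavior is easy to verify on a scratch archive; the later copy wins at extraction time (throwaway paths again, not the real image):

```shell
#!/bin/sh
# Demonstrate that a file appended under an existing name wins at extraction.
# Scratch archive only; paths are examples.
set -e
work=$(mktemp -d)
cd "$work"
mkdir -p etc/network
echo "old config" > etc/network/interfaces
tar -cf image.tar etc
echo "iface eth0 inet manual" > etc/network/interfaces
tar --append -f image.tar etc/network/interfaces
mkdir extract
tar -xf image.tar -C extract
cat extract/etc/network/interfaces   # prints the new copy, not "old config"
```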

To use this file, log into the NFS server, go to the mount point for the diskless system, and run:

# tar -xpf /path/ubuntu1004.tar

The machine then boots, is easily cloned, and is built to my standards; the only customization needed is to run dpkg-reconfigure openssh-server.

As I installed on a virtual server, I can snapshot the golden image and build custom filesystems for different purposes.

Lots of long commands?  Yep.  This basically screams “8-line shell script, please.”  It’s a pretty trivial script, but if you’ve made it this far, you’re either interested in what I’m doing or astonished at my inanity.  In either case, you should get the script too.

#!/bin/sh

mount nfs1:/tmpmount /mnt
cd /
tar -cvpf /mnt/ubuntu1004.tar --one-file-system .

tar --delete -vf /mnt/ubuntu1004.tar ./etc/ssh/ssh_host_rsa_key ./etc/ssh/ssh_host_rsa_key.pub ./etc/ssh/ssh_host_dsa_key ./etc/ssh/ssh_host_dsa_key.pub ./etc/udev/rules.d/70-persistent-net.rules ./etc/fstab ./etc/network/interfaces ./etc/hostname

cd /home/mwlucas/fakeroot
tar --append -f /mnt/ubuntu1004.tar etc/network/interfaces etc/fstab

Yes, this shell script is a good example of fault-oblivious computing. But it suits my minimal needs, and performs the same task the same way every time.

“Page Cannot Be Displayed” and Internet Explorer

I detest this IE error message, especially when a user calls to complain that a Web site is down. Internet Explorer deliberately hides actual HTTP error messages, on the grounds that the Web’s error messages are unfriendly, if useful.  Apparently this generic message is much less likely to cause the user to flee in terror from insanity-inducing text such as “404 – Page Not Found.”  They effectively shift the induced insanity from the end user to the sysadmin.

There’s a way to turn off this generic friendly message and replace it with the actual error.  It’s under Tools-> Internet Options -> Advanced -> Browsing -> Show friendly HTTP error messages.  Uncheck this and restart the browser to get user-hostile but troubleshooting-friendly error messages.

Every time I need this, I have to scramble to find it.  Perhaps now that I’ve documented this, I’ll remember where it is.  But I doubt it.

On an unrelated note:  tomorrow is the Thanksgiving holiday in the US.  I’d like to remind my readers that the holiday buffet is not a challenge, and that leaving food uneaten is not a threat to your masculinity (or femininity, or whatever).

Firewalling diskless Ubuntu

I have diskless Ubuntu 10.04 servers sitting naked on the Internet.  They’re for internal use only, but I don’t have a firewall in that facility, so any firewalling must be done on the host itself.  Ubuntu includes UFW, the “uncomplicated firewall,” a front end to iptables.  I don’t know how anything can claim to make iptables uncomplicated, but I suppose nobody would use the tool if they called it “less appalling firewall.”

These servers need to be able to contact the Internet, to get updates and such, but nobody except myself and my coworkers need to access these servers. The coworkers and I only come from a limited range of IP addresses.

On a disk-based server, I would define rules in UFW and then run ufw default deny incoming, much like this:

# ufw enable
# ufw allow from 10.0.1.0/24
# ufw allow from 172.16.5.0/24
# ufw default deny

If you do this on a diskless Ubuntu server, the system loses its disk — the default deny cuts off the NFS traffic that backs the root filesystem — even if you have a rule that specifically permits access to the diskless server. The obvious thing to try is to rip out the “default deny” and replace it with a rule to block unwanted traffic at the end.

# ufw deny from 0.0.0.0/0

Your resulting rules look like this:

# ufw status
Status: active

To                         Action      From
--                         ------      ----
Anywhere                   ALLOW       10.0.1.0/24
Anywhere                   ALLOW       172.16.5.0/24
Anywhere                   DENY        Anywhere

This looks like it should work.  However, when I attempt to connect to the SSH server from an IP not in the permitted list, I can connect.  It’s not blocking traffic from denied hosts.  Huh?

Go to the file that contains the user rules, /lib/ufw/user.rules.  This is actually a script to feed to iptables. There are several lines like this, one for each block of management addresses:

### tuple ### allow any any 0.0.0.0/0 any 10.0.1.0 in
-A ufw-user-input -s 10.0.1.0 -j ACCEPT

My last rule, however, looks different.

### tuple ### deny any any 0.0.0.0/0 any 0.0.0.0/0 in
-A ufw-user-input -j DROP

The “all other IP addresses” is probably implied in that last rule, but… it really couldn’t be that simple, could it?  I edit the script to explicitly specify the source IP addresses:

-A ufw-user-input -s 0.0.0.0/0 -j DROP

and reboot.

And yes, it is that simple.  The firewall comes up at boot.  ufw status displays exactly the same rules as before.  But now, I can only connect from my management IP addresses.
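Since I rebuild these images anyway, the edit can be scripted.  A sketch using sed on a scratch copy of the rules (the live file is /lib/ufw/user.rules; the addresses here are illustrative, and the -i flag is GNU sed’s):

```shell
#!/bin/sh
# Sketch: add an explicit source address to the final DROP rule.
# Operates on a scratch copy, not the live /lib/ufw/user.rules.
set -e
rules=$(mktemp)
cat > "$rules" <<'EOF'
-A ufw-user-input -s 10.0.1.0/24 -j ACCEPT
-A ufw-user-input -j DROP
EOF
# GNU sed in-place edit; BSD sed would need -i ''
sed -i 's|^-A ufw-user-input -j DROP$|-A ufw-user-input -s 0.0.0.0/0 -j DROP|' "$rules"
cat "$rules"
```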

The problem with tools that make things “uncomplicated” is that rather than removing the underlying complexity, they hide it. I probably need to break down and learn iptables, but I think I’d rather figure out how to get these hosts behind a PF box.