Announcing DetroitBSDCon: May 14-17, 2014

Dan Langille has sold BSDCan to Oracle. From the early announcement, it’s clear that they’ll ruin the conference. I take this VERY personally, as I’ve worked with BSDCan for over a decade. Dan has made it clear that he’s taking the check and walking away without a second thought. This is unconscionable.

If I want something done about it, I’ll have to do it myself.

OpenBSD committer Nick Holland lives about two miles from me. We’ve had some discussions about what needs to happen to give the Western Hemisphere a truly free and unencumbered BSD conference. With Dan’s acceptance of Oracle’s offer, we’ve been forced to put these plans into action. As Nick has no real Internet presence, I’ve been elected to announce our efforts on my blog.

Coming in May 2014: DetroitBSDCon!

Detroit is a major transportation hub, with a well-connected airport and one of the world’s busiest border crossings. People will have no trouble getting here.

Having a conference in Detroit gives us interesting possibilities, however. Traditional conference space is limited, and very hard to get on such short notice. Fortunately, the BSD community is very open to non-traditional conferences.

One of the disadvantages to holding a conference in May is that the weather is just starting to get nice. Most of us have been trapped inside all winter, and now that it’s getting warm enough to be outside we all crowd into a stuffy windowless room for presentations. DetroitBSDCon will be a little different. Allow me to present: the Packard Plant.
[Photo: the Packard Plant]

One square mile of abandoned industrial space, including offices, manufacturing floors, and more. It’s all the space we could possibly use. Each presentation or tutorial will get its own floor. Yes, some parts of the plant are deathtraps, but they’re fairly obvious.

Best of all, we get no end of fresh air. The surrounding area is nice and quiet.

There’s always a chance that the weather will not cooperate. The rental agency providing the chairs, tables, projection gear, and other assorted conference paraphernalia has agreed to throw in a bunch of propane pole heaters as part of the deal.

I work for an ISP, so Internet isn’t a problem. The whole conference will be wireless. Nick has kindly volunteered to climb the water tower and mount the kit for the gigabit wireless uplink.

Accommodations are actually very inexpensive. Detroit hosted the Super Bowl in 2006, and many people opened hotels just for that event. These days, you can get a room for free if you agree to a) not set it on fire, and b) cook meth only in the bathtub.

And dining? Yes, there aren’t many restaurants near the Packard Plant, but we have something better than boring old sit-down restaurants. As the economy has essentially collapsed, the more entrepreneurial folks have opened unofficial dining establishments. You’ll find them by every major road.

We’re arranging for dinner to come to you. Detroit has some of the world’s best barbeque and soul food, and it’ll all be there for you. Yes, smelling lunch and dinner cooking might be something of a distraction during the conference presentations, but let’s be real for a moment: you go to the presentations to have a chance to work on your laptop in peace. Delicious aromas won’t hamper that in the slightest.

And beer? Another nice thing about living in a collapsed city is that people will deliver beer by the truckload anywhere you want at any time. For a modest extra fee at registration, you’ll get a wristband that gets you free beer throughout the conference. (Speakers get a boozeband for showing up.)

The dates for DetroitBSDCon are the same as those for Oracle BSDCan. Because seriously, how many BSDCan attendees are actually going to go to Oracle BSDCan?

Programming is the hardest and most important part of a conference, and there’s not much time to get papers together. We’ve decided to steal the entire BSDCan programming slate. Because, seriously, those guys aren’t going to want to talk for Oracle.

Speakers won’t need to change their travel arrangements, however. We’ve reserved cars on Canada’s Via Rail train system, leaving Ottawa on Tuesday, Wednesday, and Thursday nights, making the run down to Detroit. It’s Via Rail First class because, again, free booze. They’ll bring you to Windsor overnight, where you’ll hop the bus to the conference venue. We’ll put you up at some of the closest hotels, such as Hot Sheets Central, Scabies R Us, and Bedbugs Bonanza. Yes, they’re lower-end hotels, but seriously, after the University of Ottawa dorms, they’re fine. Plus, free beer.

The after-party will take place Saturday night, on a train back to Ottawa so speakers can catch their flights out the next day.

Now, some speakers might choose to go to Oracle BSDCan. They could. They have free will, after all, and they’re free to make their own decisions even if they’re wrong. In the event we have open spots in the program, Nick and I will fill in with various BSD-related presentations we’ve given over our many years in the BSD communities. We’ve found slides for talks like “Removing IPF from OpenBSD” and “ATAng: Supporting ATA Drives into the 21st Century,” so we’re all set to shore up weak spots in the program.

Best of all, Nick and I promise to never sell DetroitBSDCon. To Oracle.

See you in the ruins in May!

BSDCan sold to Oracle?

I am shocked and appalled. I’ve helped with BSDCan for many many years now, investing my limited time and energy into helping it become the best BSD conference on this side of the planet.

And now Dan Langille has sold the whole thing. To Oracle.

I know that “make something awesome, then sell out to a big company” is standard tech industry practice. But I never expected Langille to figure out a way to sell BSDCan. It never even occurred to me that he would sell out our community. Either I have a failure of imagination, or he’s a clever bastard. Or both.

While the BSDCan attendees are getting the Oracle lobotomy, Dan himself will be in Tahiti.

I will not take this lying down. I’m tapping my resources and contacts this morning. With any luck, I’ll have an announcement of my own shortly.

DNSSEC-verified SSL Certificates, the Standard Way

DANE, or DNS-based Authentication of Named Entities, is a protocol for stuffing public keys and/or public key signatures into DNS. As standard DNS is easily forged, you can’t safely do this without DNSSEC. With DNSSEC, however, you now have an alternative way to verify public keys. Two obvious candidates for DANE data are SSH host keys and SSL certificate fingerprints. In this post I take you through using DNSSEC-secured DNS to verify web site SSL certificates (sometimes called DNSSEC-stapled SSL certificates).

In DNSSEC Mastery I predicted that someone would release a browser plug-in to support validation of DNSSEC-stapled SSL certificates. This isn’t a very difficult prediction, as a few different people had already started down that road. One day browsers will support DANE automatically, but until then, we need a plug-in. I’m pleased to report that the fine folks at dnssec-validator.cz have completed their TLSA verification plugin. I’m using it without problems in Firefox, Chrome, and IE.

DNS provides SSL certificate fingerprints with a TLSA record. (TLSA isn’t an acronym; it’s just a TLS record, type A. Presumably we’ll move on to TLSB at some point.)

A TLSA record looks like this:

_port._protocol.hostname TLSA ( 3 0 1 hash...)

If you’ve worked with services like VOIP, this should look pretty familiar. For example, the TLSA record for port 443 on the host dnssec.michaelwlucas.com looks like this:

_443._tcp.dnssec TLSA ( 3 0 1 4CB0F4E1136D86A6813EA4164F19D294005EBFC02F10CC400F1776C45A97F16C)

Where do we get the hash? Run openssl(1) on your certificate file. Here I generate the SHA256 hash of my certificate file, dnssec.mwl.com.crt.

# openssl x509 -noout -fingerprint -sha256 < dnssec.mwl.com.crt
SHA256 Fingerprint=4C:B0:F4:E1:13:6D:86:A6:81:3E:A4:16:4F:19:D2:94:00:5E:BF:C0:2F:10:CC:40:0F:17:76:C4:5A:97:F1:6C

Copy the fingerprint into the TLSA record. Remove the colons.
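
If you do this often, a short pipeline produces the record-ready string directly. A minimal sketch, using the same certificate file as above:

# openssl x509 -noout -fingerprint -sha256 < dnssec.mwl.com.crt | sed 's/^.*=//' | tr -d ':'
4CB0F4E1136D86A6813EA4164F19D294005EBFC02F10CC400F1776C45A97F16C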

Interestingly, you can also use TLSA records to validate CA-signed certificates. Generate the hash the same way, but change the leading string to 1 0 1. (Those three numbers are the certificate usage, selector, and matching type. A usage of 3 means “trust this exact certificate, CA or no CA,” while 1 means “this exact certificate, which must also pass normal CA validation.” The selector 0 means the hash covers the whole certificate, and the matching type 1 means SHA-256.) I’m using a CA-signed certificate for https://www.michaelwlucas.com, but I also validate it via DNSSEC with a record like this.

_443._tcp.www TLSA ( 1 0 1 DBB17D0DE507BB4DE09180C6FE12BBEE20B96F2EF764D8A3E28EED45EBCCD6BA )
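
Once the zone is re-signed and reloaded, it’s worth confirming that the record is actually published. With a dig(1) recent enough to know the TLSA type, you should see something like:

# dig _443._tcp.www.michaelwlucas.com TLSA +noall +answer
_443._tcp.www.michaelwlucas.com. 3600 IN TLSA 1 0 1 DBB17D0DE507BB4DE09180C6FE12BBEE20B96F2EF764D8A3E28EED45EBCCD6BA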

So: if you go to the trouble of setting this up, what does the client see?

Start by installing the DNSSEC/TLSA Validator plugin in your browser. (Peter Wemm has built the Firefox version of the plugin on FreeBSD, and he has a patch and a binary. Use the binary at your own risk, of course, but if you’re looking for a BSD porting project, this would be very useful.)

The plugin adds two new status icons. One turns green if the site’s DNS uses DNSSEC, and has a small gray-with-a-touch-of-red logo if the site does not. Not having DNSSEC is not cause for alarm. The second icon turns green if the SSL certificate matches a TLSA record, gray if there is no TLSA record, and red if the certificate does not match the TLSA record.

So: should you worry about that self-signed certificate? Check the TLSA record status. If the domain owner says “Yes, I created this cert,” it’s probably okay. If the self-signed cert fails TLSA validation, don’t go to the site.

You can use a variety of hashes with TLSA, and you can set a variety of conditions as well. Should all certificates in your company be signed by RapidSSL? You can specify that in a TLSA record. Do you have a private CA? Give its fingerprint in a TLSA record. If you want to play with these things, check out my DNSSEC book.
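
The private CA case uses certificate usage 2, which marks the record as a trust anchor: any certificate chaining up to that CA is acceptable. A hypothetical record, with a placeholder where the CA certificate’s SHA-256 hash would go:

_443._tcp.www TLSA ( 2 0 1 <SHA-256 hash of your CA certificate> )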

TLSA gives you an alternate avenue of trust, outside of the traditional and expensive CA model. Spreading TLSA more widely means that you can protect more services with SSL without additional financial expenses.

NYCBSDCon 2014 Video, and 2014 appearances

The video of my NYCBSDCon talk is now available on YouTube.

This talk is a little rougher than most I give. I felt worn-out before I even spoke on Saturday night. I woke up Sunday morning with tonsils the size of tennis balls (which made airport security interesting, let me tell you. “No, those aren’t bombs, let me fly home dang it!”).

So, on the day of NYCBSDCon I was obviously sliding down the ramp into illness.

I don’t script my talks beforehand. Yes, I have bullet points on my slides, but they’re an outline. This leaves me free to shape what I say to fit the audience’s interests and reactions. This also means that if I’m on the verge of falling ill, phrases like “This sucks diseased moose wang” slip into the presentation. It’s not that I object to the term, but it’s stolen from a Harry Dresden novel. I prefer to hand-craft my insults, precisely tailoring each to fit the object of my derision. If you take the trouble to come see me, the least you can expect is originality.

And speaking of speaking:

Early in May, I’ll be at Penguicon. There I’ll be speaking and on panels covering BSD, sudo, SSH, DNSSEC, and writing.

Later in May I’m teaching a four-hour sudo tutorial at BSDCan 2014.

If you want to see me in 2014, these are your only opportunities short of coming to Detroit and joining my dojo. (That’s an option, of course, but there are better reasons for practicing martial arts than seeing me. Plus, at the dojo you’ll have to try to throw me. That gets tiring quickly.) I’ll have paper books available at both cons.

I have no other public appearances planned for 2014. I intend to spend the rest of the year concentrating on home, writing, and martial arts.

Come on. Hang out. I promise to not use the phrase “diseased moose wang” during any scheduled talk.

Running Ancient Rsync

Another “write it down so I don’t forget what I did” post.

Some of the systems I’m responsible for are file storage machines, running rsync 3.0 or 3.1 as a daemon. Every hour, an ancient Solaris machine sends files to them using rsync 2.3.1. The billing team uses these files to create bills.

Thursday, I rebooted the machine. And the rsync stopped working with:

rsyncd[3582]: rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsyncd[3582]: rsync error: error in rsync protocol data stream (code 12) at io.c(226) [Receiver=3.1.0]

The rsyncd server hadn’t changed. No security patches, no package updates, no nothing.

We cannot change the software on the Solaris machine. It’s attached to a multimillion-dollar telco switch, and editing the software on it would invalidate the warranty. The whole point of buying a multimillion-dollar telco switch is so you get the warranty. If something goes wrong, a team of vendor experts descends on your facility with enough spare parts to rebuild the switch from the ground up. (Telephony is nothing like IT. Really.) I cannot use SSH to transfer the files. I do not administer this machine. Actually, I don’t want to administer this machine. I’m so unfamiliar with warranties on operating systems that I would probably void it by copying my SSH public key to it or something.

The Solaris box is running rsync 2.3.1, which runs rsync protocol version 20. My systems use newer rsync, running protocol version 30 or 31.

Rsyncd isn’t easily debuggable. Packet analysis showed messages about protocol errors. The rsync FAQ has a whole bunch of troubleshooting suggestions. None of them worked. I ran rsync under truss and strace and painstakingly read system calls. I eventually sacrificed a small helpless creature in accordance with ancient forbidden rites under last weekend’s full moon.

After a few days of running through a backup system (an old but not quite ancient OpenSolaris box), I absolutely had to get this working. So: protocol errors? Let’s try an older rsync.

Rsync 2.9? Same problem. I saw myself progressively working my way through building older versions, solving weird problems one by one, and eventually finding something old enough to work. This is not how I wanted to spend my week. Given how well running FreeBSD 4 in a FreeBSD 10 jail works, I tried something similar.

The host ftp-archive.freebsd.org hosts releases of every FreeBSD version, including packages. FreeBSD 10 includes compatibility with FreeBSD back to version 4. I installed the compatibility libraries from /usr/ports/misc/compat4x.

The oldest FreeBSD 4 rsync package I could find was 2.4.6, from FreeBSD 4.1.1. Original FreeBSD packages were just zipped tar files. I extracted the files and checked that the binary could find all its libraries.
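
The extraction step is nothing exotic; old packages are plain zipped tarballs. A sketch, with the package file name assumed:

# mkdir /tmp/oldrsync
# tar -xzf rsync-2.4.6.tgz -C /tmp/oldrsync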

# ldd rsync
rsync:
libc.so.4 => /usr/local/lib32/compat/libc.so.4 (0x2808a000)

If this was more complicated software, with more libraries, I’d have to track down the missing ones. Rsync is very straightforward, however.

I shut down the modern rsync daemon and fired up the ancient one.
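
Launching the old binary looks about how you’d expect. A sketch, assuming the extracted binary and an existing config in /usr/local/etc/rsyncd.conf:

# ./rsync --daemon --config=/usr/local/etc/rsyncd.conf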

It worked.

I still want to know how a reboot broke this. I’m assuming that something changed and that I lack the sysadmin chops to identify it. It’s not the rsync binary, or libc; both have date stamps several months old.

I don’t recommend this, as older rsync has all kinds of security problems. These particular hosts are behind several layers of firewalls. If an intruder gets this far, I’m basically doomed anyway.

So: if you’re very very stuck, and the clock has run out, using really old software is an option. But it still makes my skin crawl.

Trying poo-DRE-eh — uh, poudriere

This is my poudriere tutorial. There are many like it. But this one is mine. I built mine with resources like the BSDNow tutorial and the FreeBSD Forums tutorial. While all poudriere tutorials are inadequate, mine is inadequate in new and exciting ways. I’m writing it for my benefit, but what the heck, might as well post it here. (If you read this and say “I learned nothing new,” well, I warned you.)

Your package building system must run the newest version of FreeBSD you want to support. I have 8, 9, and 10 in production, so my package builder needs to be FreeBSD 10 or newer. I’m using FreeBSD 11, because I’m running my package builder on my desktop. I upgraded this machine to the latest -current and updated all my packages and ports.

Poudriere works on either UFS or ZFS partitions. I have copious disks, so I’ll dedicate two of them to poudriere. (This means I’ll have to blow away my install later when I start experimenting with disks, hence this blog posting.) I use disks ada2 and ada3.

First, eradicate anything already on those disks.

# gpart destroy -F ada3
ada3 destroyed
# gpart destroy -F ada2
ada2 destroyed
#

Now I can create new GPT partitions. Each disk needs one partition for ZFS, covering the whole disk. You’ll find lots of discussion about partitioning disks with 512-byte sectors versus those with 4KB sectors, and the need for carefully aligning your disk partitions with the underlying sector size. The easy way around this is to duck the whole issue: assume 4KB sectors, use a null GEOM layer to align everything to the disk’s expectations, and build the ZFS pool on the null layer. (If you KNOW the sector size of your disk, you can simplify the below, but remember, disks lie about their sector size just like they lie about geometry. It’s safest to gnop all the things.)

# gpart create -s gpt ada2
# gpart create -s gpt ada3
# gpart add -a 4k -t freebsd-zfs -l disk2 ada2
# gpart add -a 4k -t freebsd-zfs -l disk3 ada3
# gnop create -S 4096 /dev/gpt/disk2
# gnop create -S 4096 /dev/gpt/disk3
# zpool create poudriere mirror /dev/gpt/disk2.nop /dev/gpt/disk3.nop

I now have a filesystem to build packages on. Let’s get me a key to sign them with.

# mkdir -p /usr/local/etc/ssl/keys
# mkdir -p /usr/local/etc/ssl/certs
# cd /usr/local/etc/ssl
# openssl genrsa -out keys/mwlucas.key 4096
# openssl rsa -in keys/mwlucas.key -pubout > certs/mwlucas.cert

I installed the latest poudriere from /usr/ports/ports-mgmt/poudriere-devel. While the configuration file is lengthy, you don’t need to set many options.

ZPOOL=poudriere
FREEBSD_HOST=ftp://ftp.freebsd.org
RESOLV_CONF=/etc/resolv.conf
BASEFS=/poudriere
POUDRIERE_DATA=${BASEFS}/data
USE_PORTLINT=no
USE_TMPFS=yes
DISTFILES_CACHE=/usr/ports/distfiles
CHECK_CHANGED_OPTIONS=verbose
CHECK_CHANGED_DEPS=yes
PKG_REPO_SIGNING_KEY=/usr/local/etc/ssl/keys/mwlucas.key
URL_BASE=http://storm.michaelwlucas.com/

I set the CHECK_CHANGED options because when I update my ports tree, I want to know about changed options before I build and deploy my packages. I set the URL_BASE so I can view the build logs on my web server.

The build process uses its own make.conf file, /usr/local/etc/poudriere.d/make.conf, where I set some very basic things. The only mandatory setting is WITH_PKGNG. You could also have a separate make.conf for each individual jail, but I want all of my packages consistent, so I only use the central file.

WITH_PKGNG="yes"
WITHOUT_X11="yes"
WITHOUT_HAL="yes"
WITHOUT_NLS="yes"

Get a ports tree just for poudriere. I could use the ports tree on the package builder, but it’s possible that ports on the builder might differ from what I want for the packages. It’s best to keep everything tidy.

# poudriere ports -c
====>> Creating default fs... done
====>> Extracting portstree "default"...
Looking up portsnap.FreeBSD.org mirrors... 7 mirrors found.
Fetching public key from your-org.portsnap.freebsd.org... done.
Fetching snapshot tag from your-org.portsnap.freebsd.org... done.
Fetching snapshot metadata... done.
...

Now create a jail to build packages in.

At first blush, you might give the jail any random name. Using the same naming standard as the FreeBSD package builders eases deployment, however. The official naming standard combines operating system, operating system version, CPU architecture, and word size. For example, 32-bit FreeBSD 10 on Intel-type processors is freebsd:10:x86:32, while the 64-bit version is freebsd:10:x86:64. (See pkg-repository(5) for details.)

FreeBSD’s official package builds occur on the oldest supported release of each branch. Packages for 9-stable are built on 9.1, so I do:

# poudriere jail -c -j freebsd:9:x86:32 -v 9.1-RELEASE -a i386

Then create jails for the other two releases I support.

# poudriere jail -c -j freebsd:9:x86:64 -v 9.1-RELEASE -a amd64
# poudriere jail -c -j freebsd:10:x86:64 -v 10.0-RELEASE -a amd64

I only run the amd64 version of FreeBSD 10, because I don’t want to build i386 packages forever.

This takes a little while, so start it and walk away. I copied these lines into a shell script and went to lunch.

Note that these are not actual jails. They will not show up in jls(8). Technically, they’re chroots. (I’m not sure why poudriere calls them jails – maybe they were originally such but the need for a full jail went away, or they’re intended to become full jails later on. I’m sure there’s a perfectly sensible reason, however.) You can list your package-building jails only via poudriere.

# poudriere jail -l
JAILNAME VERSION ARCH METHOD PATH
freebsd:9:x86:64 9.1-RELEASE amd64 ftp /poudriere/jails/freebsd:9:x86:64
freebsd:9:x86:32 9.1-RELEASE i386 ftp /poudriere/jails/freebsd:9:x86:32
freebsd:10:x86:64 10.0-RELEASE amd64 ftp /poudriere/jails/freebsd:10:x86:64

Before building any packages, I want to update all the jails to the latest version. As I’ll need to do this before every package build, I script it. I also add the command to update poudriere’s ports tree.

#!/bin/sh

#update ports tree
poudriere ports -p default -u

#Update known builder jails to latest version
poudriere jail -u -j freebsd:9:x86:32
poudriere jail -u -j freebsd:9:x86:64
poudriere jail -u -j freebsd:10:x86:64

I must run this script every time I update my packages.

Now to determine which packages I want to build. In my case, I want to build only packages that I can’t get from official FreeBSD sources. This means things like freeradius and Apache with LDAP support and PHP with Apache support. Simple enough. I created /usr/local/etc/poudriere.d/pkglist.txt containing:

#web services - need LDAP & Apache PHP module
www/apache22
lang/php5
#network - need LDAP & SMB
net/freeradius2
net/openldap24-server

Now the fun part: setting the build options for these ports.

# poudriere options -cf pkglist.txt

This takes you into a recursive make config for all of your selected packages. If you know exactly which ports you must configure to get a properly built package, you could specify ports by name. Unfortunately, I always forget to add LDAP to some dependency, so I walk through all the configurations adding my various options.
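
If you know the single port you need to reconfigure, you can name it directly instead of using the list file; something like:

# poudriere options -c www/apache22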

You can now build your packages. I want to update all of my packages simultaneously, so I wrote a trivial shell script to build packages.

#!/bin/sh

#Build packages for each supported release

poudriere bulk -f /usr/local/etc/poudriere.d/pkglist.txt -j freebsd:9:x86:32
poudriere bulk -f /usr/local/etc/poudriere.d/pkglist.txt -j freebsd:9:x86:64
poudriere bulk -f /usr/local/etc/poudriere.d/pkglist.txt -j freebsd:10:x86:64

Walk away. Or, if you prefer, set up a web server so you can see the build progress and deliver packages to clients. I used apache22 and created /usr/local/etc/apache22/Includes/poudriere.conf containing:

NameVirtualHost *:80

<VirtualHost *:80>
ServerAdmin mwlucas@minetworkservices.com
DocumentRoot /poudriere/data
ServerName storm.blackhelicopters.org

<Directory /poudriere/data>
Options Indexes FollowSymLinks
AllowOverride AuthConfig
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
Yes, I could have hacked up httpd.conf to set DocumentRoot, but I prefer to leave package-created files alone if possible.

Let’s turn to the client while this builds. The client needs a repository configuration file and a copy of the build certificate. Here’s the configuration for my local repository, named mwlucas. (With modern pkg, per-repository configuration lives in its own file, e.g. /usr/local/etc/pkg/repos/mwlucas.conf.)

mwlucas: {
url: "http://storm.blackhelicopters.org/packages/${ABI}-default/",
mirror_type: "http",
signature_type: "pubkey",
pubkey: "/usr/local/etc/ssl/certs/mwlucas.cert",
fingerprints: "/usr/share/keys/pkg",
enabled: yes
}

By using the standard jail names, I was able to use the ${ABI} variable in my repository path. This saves me from needing to update my repository configuration every time I upgrade.
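
If you’re not sure what ${ABI} expands to on a given client, pkg itself will tell you. The exact output format varies by pkg version, but expect something like:

# pkg -vv | grep ABI
ABI = "freebsd:10:x86:64";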

I copy my certificate to the directory specified in the pubkey option.
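
That’s a single copy from the build host; the hostname and paths here match my setup, so adjust for yours:

# mkdir -p /usr/local/etc/ssl/certs
# scp storm:/usr/local/etc/ssl/certs/mwlucas.cert /usr/local/etc/ssl/certs/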

You should be able to browse to the URL given as your repository. If the URL doesn’t work, you misconfigured your web server.

Check your repository configuration with pkg -vv. You should see all configured repositories.

# pkg -vv
...
Repositories:
FreeBSD: {
url : "pkg+http://pkg.FreeBSD.org/freebsd:10:x86:64/latest",
enabled : yes,
mirror_type : "SRV",
signature_type : "FINGERPRINTS",
fingerprints : "/usr/share/keys/pkg"
}
mwlucas: {
url : "http://storm.blackhelicopters.org/packages/freebsd:10:x86:64-default/Latest/",
enabled : yes,
mirror_type : "HTTP",
signature_type : "PUBKEY",
fingerprints : "/usr/share/keys/pkg",
pubkey : "/usr/local/etc/ssl/certs/mwlucas.cert"
}

Two repositories? Good. But can you actually access the repository? A pkg update should pull down the digests for your new repo.

# pkg update
Updating repository catalogue
digests.txz 100% 1928 1.9KB/s 1.9KB/s 00:00
packagesite.txz 100% 10KB 10.0KB/s 10.0KB/s 00:00
Incremental update completed, 23 packages processed:
0 packages updated, 0 removed and 23 added.

A quick check will verify that the private repo has 23 packages.

Now for the weakest part of pkgng: actually using your repository for select packages. If I built everything, I’d just disable the FreeBSD repository. But I want to only build packages that differ from the default.

pkg searches for packages in each repository in order. At the moment, repositories appear in alphabetical order. I could rename my repository so that it appears before FreeBSD. pkg would search my repo for packages and, if they didn’t exist there, fall back to the FreeBSD repo.

This would be ideal. But alphabetical repository ordering is not guaranteed. The only way to ensure the packages install from the correct repository is to tell each individual FreeBSD server where it should get specific packages. This kind of sucks, but it’s still an improvement on pkg_add. (I hear that repository handling should improve in future versions of pkg.)

Use the -r flag to specify a repository with pkg. For example, let’s search my private repo for an Apache package.

# pkg search -r mwlucas apache
apache22-2.2.26

The package is there. Excellent. Install it.

# pkg install -r mwlucas apache22
Updating repository catalogue
The following 8 packages will be installed:

Installing expat: 2.1.0 [mwlucas]
Installing perl5: 5.16.3_7 [mwlucas]
Installing pcre: 8.34 [mwlucas]
Installing openldap-client: 2.4.38 [mwlucas]
Installing gdbm: 1.11 [mwlucas]
Installing db42: 4.2.52_5 [mwlucas]
Installing apr: 1.4.8.1.5.3 [mwlucas]
Installing apache22: 2.2.26 [mwlucas]

The installation will require 94 MB more space

19 MB to be downloaded

Proceed with installing packages [y/N]: y

Take careful note of this list of dependency packages installed from your repository.

Here you have to decide how you want to upgrade your server. All packages that you want to upgrade from your repo need to be marked as such. How many of these dependency packages must be upgraded via your repo? That’s a good question. If you took note of which packages you had to change the configuration of, way back in the beginning, you could label only those changed packages as requiring your repo. I didn’t do that. If this was a brand-new machine I would install packages from my repo first and mark everything installed as requiring my repo. I didn’t do that. Instead, I’m going to mark all packages required by apache22 as belonging to my repo, much like this.

# pkg annotate -Ay apache22 repository mwlucas
apache22-2.2.26: added annotation tagged: repository
# pkg annotate -Ay pcre repository mwlucas

Repeat this for each package installed by this package install.
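
A shell loop over the dependency list shown above saves some typing; the package names are the ones from my install run, so substitute your own:

# for p in expat perl5 pcre openldap-client gdbm db42 apr apache22; do pkg annotate -Ay $p repository mwlucas; done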

The repository tag appears when you run pkg info on a specific package.

# pkg info pcre | grep repository
repository : mwlucas

pkg upgrade will now use only your repository for each package tagged this way.

Is this long and complicated? Sure. But it beats the snot out of maintaining three separate package repositories under the old package system.

Installing FreeBSD 10 to ZFS with a script

Well, partially scripted, that is.

For installing large numbers of identical machines, proceed directly to the PC-BSD installer. It’s easy to configure, very reliable, and generally just rocks. If you’re accustomed to automatic installers like Kickstart, you’ll find the PC-BSD installer trivially easy.

I frequently have to install non-identical machines for special purposes, such as testing or unique file stores or EDI. Most of these are virtual machines. ZFS filesystems compress really well, which simplifies backing up the VMs.

And there, the FreeBSD 10 installer’s ZFS features don’t quite cut it. The installer lets you create a single large ZFS filesystem without leaving the GUI. I want a more complicated ZFS setup, based on the FreeBSD Root on ZFS wiki page. This process involves a whole lot of typing. I normally install servers when I’m too brain dead to do any real work, so I need to minimize the opportunity for errors.

Fortunately, you can script all the disk and ZFS setup. Here’s my script. If it looks familiar, well, it should: it’s ripped raw and bleeding from the wiki instructions and wrapped up with /bin/sh -x. (I use -x because I want to see how the script runs.)

#!/bin/sh -x

#Auto-divides a ZFS install.
#ZFS permissions stolen from
#https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE

#edit:
#disk device name
#parameters for your zpool type
#your pool name
#swap space

#we're installing: allow writes to active disks
sysctl kern.geom.debugflags=0x10

gpart destroy -F vtbd0
gpart create -s gpt vtbd0
gpart add -s 222 -a 4k -t freebsd-boot -l boot0 vtbd0

gpart add -s 1g -a 4k -t freebsd-swap -l swap0 vtbd0
gpart add -a 4k -t freebsd-zfs -l disk0 vtbd0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 vtbd0
gnop create -S 4096 /dev/gpt/disk0

kldload zfs
zpool create -f -o altroot=/mnt -O canmount=off -m none zroot /dev/gpt/disk0.nop

zfs set checksum=fletcher4 zroot
zfs set atime=off zroot

zfs create -o mountpoint=none zroot/ROOT
zfs create -o mountpoint=/ zroot/ROOT/default
zfs create -o mountpoint=/tmp -o compression=lzjb -o setuid=off zroot/tmp
chmod 1777 /mnt/tmp

zfs create -o mountpoint=/usr zroot/usr
zfs create zroot/usr/local

zfs create -o mountpoint=/home -o setuid=off zroot/home
zfs create -o compression=lzjb -o setuid=off zroot/usr/ports
zfs create -o compression=off -o exec=off -o setuid=off zroot/usr/ports/distfiles
zfs create -o compression=off -o exec=off -o setuid=off zroot/usr/ports/packages

zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/usr/src
zfs create zroot/usr/obj

zfs create -o mountpoint=/var zroot/var
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/crash
zfs create -o exec=off -o setuid=off zroot/var/db
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/db/pkg
zfs create -o exec=off -o setuid=off zroot/var/empty
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/log
zfs create -o compression=gzip -o exec=off -o setuid=off zroot/var/mail
zfs create -o exec=off -o setuid=off zroot/var/run
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/tmp
chmod 1777 /mnt/var/tmp
zpool set bootfs=zroot/ROOT/default zroot

cat << EOF > /tmp/bsdinstall_etc/fstab
/dev/gpt/swap0 none swap sw 0 0
EOF

exit

How do I use this?

When the installer asks me how I want to partition the disk, I exit to the shell, configure the network, and get the script onto the target system.

# dhclient vtnet0
# cd /tmp
# fetch http://www-old.michaelwlucas.com/zfsinstall.sh
# chmod 755 zfsinstall.sh
# ./zfsinstall.sh

Sit back and watch it run. When the script finishes, exit from the shell and let the installer unpack the files. When you’re offered a shell to perform post-install configuration, take it and run the post-install ZFS setup commands.

# mount -t devfs devfs /dev
# echo 'zfs_enable="YES"' >> /etc/rc.conf
# echo 'zfs_load="YES"' >> /boot/loader.conf
# zfs set readonly=on zroot/var/empty

Reboot into your fine-grained ZFS filesystem installation.

To use this script yourself, you’ll need to check the disk device name and the type of zpool you want to create. But this will hopefully get you started.
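
If you want a mirrored pool rather than a single disk, only the zpool line changes. A sketch, assuming a hypothetical second disk partitioned and gnop’ed as gpt/disk1 the same way as the first:

zpool create -f -o altroot=/mnt -O canmount=off -m none zroot mirror /dev/gpt/disk0.nop /dev/gpt/disk1.nop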

I would really like to see the default FreeBSD installer create finer grained ZFS filesystems. I’m told that day is coming.

ifup-local on bridge members on CentOS

I run a bunch of CentOS 6 physical servers as QEMU virtualization devices. These hosts have two NICs, one for management and one for virtual machine bridges.

When you use Linux for virtualization, it’s important to increase the amount of memory for network transmit and receive buffers. You also need to disable GSO and TSO, to improve performance and to avoid gigabytes of kernel error messages every day. You can do this with ethtool(8). First, let’s check the existing ring sizes.

# ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX: 16384
RX Mini: 0
RX Jumbo: 0
TX: 16384
Current hardware settings:
RX: 512
RX Mini: 0
RX Jumbo: 0
TX: 512

Similarly, use ethtool -k eth0 to check GSO and TSO settings.

The card is using much less memory than it can. When you have a bunch of virtual machines pouring data through the card, you want the card to work as efficiently as possible. Fixing this on a running system is easy enough:

# ethtool -G eth0 tx 16384 rx 16384
# ethtool -K eth0 gso off tso off

Repeat the process for eth1.

How do you make this happen automatically at boot? Adding the commands to /etc/rc.local isn’t reliable. By the time the system gets that much stuff running, the ethtool command might fail with a “Cannot allocate memory” error. If you try again it’ll probably work, but it’s not deterministic. And I’m against running a single command four times in rc.local in the hopes that one of them will work.

Enter /sbin/ifup-local. CentOS runs this script after bringing up an interface, with the interface name as an argument. The problem is, it doesn’t run this script for bridge member interfaces. We can adjust eth0 and br0 at boot just fine, but the script never runs for eth1 (the physical interface underlying br0).

You can’t run ethtool -G br0 tx 16384 rx 16384, either. Interface br0 doesn’t have any transmit or receive rings; it’s a logical interface. You can disable TSO and GSO on br0, but that won’t disable them on eth1. You can’t wait to reconfigure eth1 in rc.local until the system is running, because increasing the memory doesn’t always work once the system is running full-out multiuser. And Red Hat says this is by design. Apparently network bridges on CentOS/Red Hat are supposed to perform poorly. That’s good to know.

So, what to do?

I adjust the eth1 ring size in ifup-local when bringing up br0, but before any processes send any traffic over the bridge. My /sbin/ifup-local looks like this:

#!/bin/bash

case "$1" in
eth0)
echo "Configuring eth0..."
/sbin/ethtool -G eth0 tx 16384 rx 16384
/sbin/ethtool -K eth0 gso off tso off
;;

br0)
echo "Configuring br0..."
/sbin/ethtool -G eth1 tx 16384 rx 16384
/sbin/ethtool -K eth1 gso off tso off
/sbin/ethtool -K br0 gso off tso off
;;

esac
exit 0

This appears to work consistently. Of course, the values for the NIC need to be set on a per-machine basis. I have Ansible do that work for me.

Hopefully, this will save someone else the pain I’ve been through trying to make this work…

New reviews

There have been a few new reviews out lately. First, two from Grant Taylor, on Sudo Mastery and SSH Mastery. Thank you, Grant!

Yesterday, a review of Sudo Mastery appeared on Slashdot. I haven’t been reviewed on Slashdot since Absolute OpenBSD came out. No, not the second edition–the original, in 2003. So this is cool. Thank you, “Saint Aardvark.” (Yes, I can figure out his real name, but if he goes by that, who am I to argue?)

As a result of these reviews, I now simultaneously have the #1 and #4 best-seller slots in Amazon’s Unix category.

I really want to thank everyone who takes the time to review my books — or, indeed, any books. Reviews drive sales. Sales mean that authors can afford to write books instead of washing dishes at the Burger Hut (which is all that most of us are qualified for in the real world). If you enjoy a book, and want to thank the author, take a moment to do so publicly.

And now back to writing more books…

2 titles in Amazon's top 10