Penguicon 2014 Schedule

“Hey, where is Lucas? Why hasn’t he posted lately?”

I’ve done nothing worth posting about. Most of this month I spent removing a pre-millennial switch from the core of the network, which was painstaking and annoying but not noteworthy. I then spent nine days at a writing workshop, which was fascinating, educational, and utterly exhausting. I could argue that the workshop was worth blogging about, but I was too busy writing to waste time writing. If you’re interested in writing, though, and you have a chance to do any of Dean or Kris’ workshops, go.

So:

Next weekend, I’ll be at Penguicon, appearing on various panels. You can see me at the following one-hour events.

Friday

  • 5PM: BSD Operating Systems, a Tour – What it says on the label
Saturday

  • 11AM: Sudo – You’re Doing It Wrong – Why your popular sudo configuration is incorrect, and how to do it safely
  • 1PM: Copyright versus Free Information – What happens when the concept of ‘information can’t be contained’ clashes with content creators who want monetary recompense for their hard work? Speakers include: Michael W. Lucas, Shetan Noir, Eva Galperin, Cory Doctorow
  • 6PM: SSH Key Authentication Tutorial – If you’re not doing SSH key authentication, show up here.
  • 8PM: Self-Publishing 101 – Do you? Should you? Various tools and techniques and recommendations.
Sunday

  • 2PM: DNSSEC in 50 minutes – How DNSSEC works, and why you should care

    Now if you’ll excuse me, I have a whole great big heap of slides to do…

Book Review: “Applied Network Security Monitoring”

    Chris Sanders kindly sent me a review copy of Applied Network Security Monitoring, written by Sanders along with Jason Smith, David J Bianco, and Liam Randall. It’s a very solid work, with much to recommend it to IT people who either have been told to implement security monitoring or who think that they should.

    Some of Applied Network Security Monitoring will be very familiar to anyone who has read any other security book–I’ve read many times that risk equals impact times probability. Every book on this topic needs this information, however, and Sanders and company cover it in sufficient detail to ground a probie while letting the rest of us easily skim it as a refresher.
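
That risk formula is simple enough to check with shell arithmetic; the numbers below are invented purely for illustration.

```shell
#!/bin/sh
# risk = impact x probability, with invented numbers.
impact=8          # severity on an arbitrary 1-10 scale
probability=25    # percent chance of the event per year
echo "relative risk score: $(( impact * probability ))"
```

A score like this is only meaningful relative to other risks you compute the same way.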

    Then they take us through selecting data collection points and how they make decisions on where to collect data and what kind of data to collect. Ideally, of course, you collect full packet data everywhere, but in my semi-rural gigabit ISP world I don’t have enough electricity to spin that much disk. Where can you get by with session data, and where do you need full packet capture? ANSM takes you through the choices and the advantages and disadvantages of each, along with some guidance on the hardware needs.

    Data is nice, but it’s what you do with the data that makes security analysis interesting. ANSM uses Security Onion as an underlying toolkit. Security Onion is huge, and contains myriad tools for any given purpose. There are reasons for this–no one NSM tool is a perfect fit for all environments. The authors choose their preferred tools, such as Snort, Bro, and SiLK, and take you through configuring and using them on the SO platform. Their choices give you honeypots and log management and all the functionality you expect.

    Throughout the book you’ll find business and tactical advice. How do you organize a security team? How do you foster teamwork, retain staff, and deal with arrogant dweebs such as yours truly? (As an aside, ANSM contains the kindest and most business-driven description of the “give the arrogant guy enough rope to hang himself” tactic that I have ever read.) I’ve been working with the business side of IT for decades now, and ANSM taught me new tricks.

    The part of the book that I found most interesting was the section on analysis. What is analysis, anyway? ANSM takes you through both differential analysis and relational analysis, and illustrates them with actual scenarios, actual data. Apparently I’m a big fan of differential diagnosis. I use it everywhere. For every problem. Fortunately, Sanders and crew include guidelines for when to try each type of analysis. I’ll have to try this “relational analysis” thing some time and see what happens.

    Another interesting thing about ANSM is how it draws in lots of knowledge and examples from the medical field. Concepts like morbidity and mortality are very applicable to information technology in general, not just network security monitoring, and adding this makes the book both more useful and more interesting.

    Applied Network Security Monitoring is a solid overview of the state of security analysis in 2014, and was well worth my time to read. It’s worth your time as well.

    postscript

    Not long ago, I reviewed Richard Bejtlich’s The Practice of Network Security Monitoring. What’s more, I have corresponded with both Sanders and Bejtlich, and while they aren’t “help me hide a body” friends I’d happily share a meal with either.

    The obvious question people will ask is, how does Applied NSM compare to tPoNSM?

    Both books use Security Onion. Each book emphasizes different tools, different methodologies, and different techniques. Practical NSM shows Bejtlich’s military background. While Sanders has worked with the military, Applied NSM reads like it’s from an IT background.

    I can’t say either is a better book. Both are very very good.

    Personally, I have never implemented any plan from a book exactly as written. I read books, note their advice, and build a plan that suits my environment, my budget, and–most importantly–my staff. Reading them, I picked between tools and strategies until I found something that would work for my site. Security monitoring is a complex field. Maintaining, let alone building, a security monitoring infrastructure requires constant sharpening of your skills.

    I recommend anyone serious about the field read both books.

    The Con is a Lie

    I hadn’t planned to post this, but enough people asked me that I feel obliged to explicitly state:

    DetroitBSDCon is a joke. So is Oracle buying BSDCan. I did not play off of Dan’s posting: we planned it together, as well as the resulting fight on Twitter. (I must concede that Dan won the Twitter argument by enlisting Randi Harper for Oracle BSDCan. Nobody can stand against @freebsdgirl‘s awesome social networking mojo. Mind you, Dan has absolutely no clue about how we do things here in Detroit.)

    I don’t expect anyone to believe anything posted anywhere on 1 April. Dan and I did not expect to fool anyone, but we did find the idea funny. And so did a lot of other people, so that’s okay. A few folks hate 1 April in general, but they’re not going to change the world. I won’t do gag posts on random days–unless, of course, something is laugh-so-hard-you’ll-herniate-yourself funny and must go on a certain day as part of the joke.

    I’ve done three 1 April gags: this one, the Great Committer in 2011, and FretBSD (also with Dan) in 2003. I only do them if my inspirational muse kicks me in the head.

    A surprising number of people contacted me about DetroitBSDCon — not because they believed it, but because they want me to do it. They don’t care if I hold it in an abandoned factory, they just want DetroitBSDCon to happen. I have run conferences before, but these days I lack the time, energy, and flexibility to do so. Plus, it fails the WIBBOW test. Like, utterly fails the WIBBOW test. Fails with screeching and tears and thrashing about on the ground, running from the test room bawling like a whipped piglet.

    Holding a conference is easy. A lot of work, but it’s very straightforward work.

    If you want a BSD event in your city, here’s what you do.

  • Start small. Try a one-day event, like NYCBSDCon. If you’re successful, up it to two days next year.
  • Find space and a date. The space needs chairs, a screen for slides, projection gear, and clear lines of sight for attendees. mug.org rents a really nice space in the Farmington Hills library. NYCBSDCon found a restaurant with a screen. BSDCan sucks half a dozen rooms off of a university. EuroBSDCon takes over part of a hotel. Space can be expensive, but it doesn’t have to be.
  • Get the date well ahead of time, so people can plan ahead. Don’t overlap a big BSD event.
  • Get speakers. Local speakers are good. Try to coax a couple “big names” into making the trip, sure, but having locals helps make it your conference.
  • Food. People will want to eat. Either have lunch brought in (tricky), or identify the local restaurants that don’t suck. Talk to the restaurant managers before the event; they might do a special rate for a large group at a predictable non-peak time, or at least staff up to handle a flood.
  • Figure out how much all of this costs. Divide by the number of attendees. Double it. That’s your admission rate. Every plan that says “we’ll break even” loses money — you will have unexpected expenses, and everything costs more than the quote. If you make a profit, either use it to bootstrap next year’s con or donate it to various BSD projects the way NYCBSDCon does.
  • About 3PM, everyone starts to drag. Have caffeine, cookies, and for us health-conscious sorts, fruit. (My only critique of NYCBSDCon? No afternoon snack.)
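
The pricing rule in that budgeting bullet fits in one line of shell arithmetic; the dollar figures here are invented for illustration.

```shell
#!/bin/sh
# Admission rate: total costs divided by attendees, then doubled. Invented numbers.
costs=3000        # venue, projection gear, snacks, in dollars
attendees=60
echo "admission per head: \$$(( costs / attendees * 2 ))"
```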

    My conference appearances for this year end in May. I don’t want to travel. But if you have a BSD event within a couple hours drive of Detroit, and it didn’t conflict with my prior commitments, I’d show up. (Or, if you ask politely, I’ll stay home. Whatever you prefer.)

Announcing DetroitBSDCon: May 14-17, 2014

    Dan Langille has sold BSDCan to Oracle. From the early announcement, it’s clear that they’ll ruin the conference. I take this VERY personally, as I’ve worked with BSDCan for over a decade. Dan has made it clear that he’s taking the check and walking away without a second thought. This is unconscionable.

    If I want something done about it, I’ll have to do it myself.

    OpenBSD committer Nick Holland lives about two miles from me. We’ve had some discussions about what needs to happen to give the Western Hemisphere a truly free and unencumbered BSD conference. With Dan’s acceptance of Oracle’s offer, we’ve been forced to put these plans into action. As Nick has no real Internet presence, I’ve been elected to announce our efforts on my blog.

    Coming in May 2014: DetroitBSDCon!

    Detroit is a major transportation hub, with a well-connected airport and one of the world’s busiest border crossings. People will have no trouble getting here.

    Having a conference in Detroit gives us interesting possibilities, however. Traditional conference space is limited, and very hard to get at such late notice. Fortunately, the BSD community is very open to non-traditional conferences.

    One of the disadvantages to holding a conference in May is that the weather is just starting to get nice. Most of us have been trapped inside all winter, and now that it’s getting warm enough to be outside we all crowd into a stuffy windowless room for presentations. DetroitBSDCon will be a little different. Allow me to present: the Packard Plant.

    One square mile of abandoned industrial space, including offices, manufacturing floors, and more. It’s all the space we could possibly use. Each presentation or tutorial will get its own floor. Yes, some parts of the plant are deathtraps, but they’re fairly obvious.

    Best of all, we get no end of fresh air. The surrounding area is nice and quiet.

    There’s always a chance that the weather will not cooperate. The rental agency providing the chairs, tables, projection gear, and other assorted conference paraphernalia has agreed to throw in a bunch of propane pole heaters as part of the deal.

    I work for an ISP, so Internet isn’t a problem. The whole conference will be wireless. Nick has kindly volunteered to climb the water tower and mount the kit for the gigabit wireless uplink.

    Accommodations are actually very inexpensive. Detroit hosted the Super Bowl in 2006, and many people opened hotels just for that event. These days, you can get a room for free if you agree to a) not set it on fire, and b) cook meth only in the bathtub.

    And dining? Yes, there aren’t many restaurants near the Packard Plant, but we have something better than boring old sit-down restaurants. As the economy has essentially collapsed, the more entrepreneurial folks have opened unofficial dining establishments. You’ll see them by every major road.

    We’re arranging for dinner to come to you. Detroit has some of the world’s best barbeque and soul food, and it’ll all be there for you. Yes, smelling lunch and dinner cooking might be something of a distraction during the conference presentations, but let’s be real a moment: you go to the presentations to have a chance to work on your laptop in peace. Delicious aromas won’t hamper that in the slightest.

    And beer? Another nice thing about living in a collapsed city is that people will deliver beer by the truckload anywhere you want at any time. For a modest extra fee at registration, you’ll get a wristband that gets you free beer throughout the conference. (Speakers get a boozeband for showing up.)

    The dates for DetroitBSDCon are the same as those for Oracle BSDCan. Because seriously, how many BSDCan attendees are actually going to go to Oracle BSDCan?

    Programming is the hardest and most important part of a conference, and there’s not much time to get papers together. We’ve decided to steal the entire BSDCan programming slate. Because, seriously, those guys aren’t going to want to talk for Oracle.

    Speakers won’t need to change their travel arrangements, however. We’ve reserved cars on Canada’s Via Rail train system, leaving Ottawa on Tuesday, Wednesday, and Thursday nights, making the run down to Detroit. It’s Via Rail First class because, again, free booze. They’ll bring you to Windsor overnight, where you’ll hop the bus to the conference venue. We’ll put you up at some of the closest hotels, such as Hot Sheets Central, Scabies R Us, and Bedbugs Bonanza. Yes, they’re lower-end hotels, but seriously, after the University of Ottawa dorms, they’re fine. Plus, free beer.

    The after-party will take place Saturday night, on a train back to Ottawa so speakers can catch their flights out the next day.

    Now, some speakers might choose to go to Oracle BSDCan. They could. They have free will, after all, and they’re free to make their own decisions even if they’re wrong. In the event we have open spots in the program, Nick and I will fill in with various BSD-related presentations we’ve given over our many years in the BSD communities. We’ve found slides for talks like “Removing IPF from OpenBSD” and “ATAng: Supporting ATA Drives into the 21st Century,” so we’re all set to shore up weak spots in the program.

    Best of all, Nick and I promise to never sell DetroitBSDCon. To Oracle.

    See you in the ruins in May!

    BSDCan sold to Oracle?

    I am shocked and appalled. I’ve helped with BSDCan for many many years now, investing my limited time and energy into helping it become the best BSD conference on this side of the planet.

    And now Dan Langille has sold the whole thing. To Oracle.

    I know that “make something awesome, then sell out to a big company” is standard tech industry practice. But I never expected Langille to figure out a way to sell BSDCan. It never even occurred to me that he would sell out our community. Either I have a failure of imagination, or he’s a clever bastard. Or both.

    While the BSDCan attendees are getting the Oracle lobotomy, Dan himself will be in Tahiti.

    I will not take this lying down. I’m tapping my resources and contacts this morning. With any luck, I’ll have an announcement of my own shortly.

    DNSSEC-verified SSL Certificates, the Standard Way

    DANE, or DNS-based Authentication of Named Entities, is a protocol for stuffing public keys and/or public key signatures into DNS. As standard DNS is easily forged, you can’t safely do this without DNSSEC. With DNSSEC, however, you now have an alternative way to verify public keys. Two obvious candidates for DANE data are SSH host keys and SSL certificate fingerprints. In this post I take you through using DNSSEC-secured DNS to verify web site SSL certificates (sometimes called DNSSEC-stapled SSL certificates).

    In DNSSEC Mastery I predicted that someone would release a browser plug-in to support validation of DNSSEC-stapled SSL certificates. This wasn’t a very difficult prediction, as a few different people had already started down that road. One day browsers will support DANE automatically, but until then, we need a plug-in. I’m pleased to report that the fine folks at dnssec-validator.cz have completed their TLSA verification plugin. I’m using it without problems in Firefox, Chrome, and IE.

    DNS provides SSL certificate fingerprints with a TLSA record. (TLSA isn’t an acronym, it’s just a TLS record, type A. Presumably we’ll move on to TLSB at some point.)

    A TLSA record looks like this:

    _port._protocol.hostname TLSA ( 3 0 1 hash...)

    If you’ve worked with services like VOIP, this should look pretty familiar. For example, the TLSA record for port 443 on the host dnssec.michaelwlucas.com looks like this:

    _443._tcp.dnssec TLSA ( 3 0 1 4CB0F4E1136D86A6813EA4164F19D294005EBFC02F10CC400F1776C45A97F16C)

    Where do we get the hash? Run openssl(1) on your certificate file. Here I generate the SHA256 hash of my certificate file, dnssec.mwl.com.crt.

    # openssl x509 -noout -fingerprint -sha256 < dnssec.mwl.com.crt
    SHA256 Fingerprint=4C:B0:F4:E1:13:6D:86:A6:81:3E:A4:16:4F:19:D2:94:00:5E:BF:C0:2F:10:CC:40:0F:17:76:C4:5A:97:F1:6C

    Copy the fingerprint into the TLSA record. Remove the colons.
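
The label-and-colon cleanup is easy to script. This sketch of mine generates a throwaway self-signed certificate (the /tmp paths and CN are arbitrary) and converts its fingerprint into TLSA record data, so you can see the whole round trip without touching a real certificate.

```shell
#!/bin/sh
# Make a throwaway self-signed cert for demonstration purposes.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
    -out /tmp/demo.crt -days 1 -subj "/CN=demo.example.com" 2>/dev/null

# Strip the "SHA256 Fingerprint=" label and the colons, leaving the
# 64-character hex string that goes in the TLSA record.
openssl x509 -noout -fingerprint -sha256 < /tmp/demo.crt \
    | cut -d= -f2 | tr -d ':'
```

Point the same pipeline at your real certificate file to fill in your own record.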

    Interestingly, you can also use TLSA records to validate CA-signed certificates. Generate the hash the same way, but change the leading string to 1 0 1. I’m using a CA-signed certificate for https://www.michaelwlucas.com, but I also validate it via DNSSEC with a record like this.

    _443._tcp.www TLSA ( 1 0 1 DBB17D0DE507BB4DE09180C6FE12BBEE20B96F2EF764D8A3E28EED45EBCCD6BA )

    So: if you go to the trouble of setting this up, what does the client see?

    Start by installing the DNSSEC/TLSA Validator plugin in your browser. (Peter Wemm has built the Firefox version of the plugin on FreeBSD, and he has a patch and a binary. Use the binary at your own risk, of course, but if you’re looking for a BSD porting project, this would be very useful.)

    The plugin adds two new status icons. One turns green if the site’s DNS uses DNSSEC, and has a small gray-with-a-touch-of-red logo if the site does not. Not having DNSSEC is not cause for alarm. The second icon turns green if the SSL certificate matches a TLSA record, gray if there is no TLSA record, and red if the certificate does not match the TLSA record.

    So: should you worry about that self-signed certificate? Check the TLSA record status. If the domain owner says “Yes, I created this cert,” it’s probably okay. If the self-signed cert fails TLSA validation, don’t go to the site.

    You can use a variety of hashes with TLSA, and you can set a variety of conditions as well. Should all certificates in your company be signed with RapidSSL certs? You can specify that in a TLSA record. Do you have a private CA? Give its fingerprint in a TLSA record. If you want to play with these things, check out my DNSSEC book.

    TLSA gives you an alternate avenue of trust, outside of the traditional and expensive CA model. Spreading TLSA more widely means that you can protect more services with SSL without additional financial expenses.

    NYCBSDCon 2014 Video, and 2014 appearances

    The video of my NYCBSDCon talk is now available on YouTube.

    This talk is a little rougher than most I give. I felt worn-out before I even spoke on Saturday night. I woke up Sunday morning with tonsils the size of tennis balls (which made airport security interesting, let me tell you. “No, those aren’t bombs, let me fly home dang it!”).

    So, on the day of NYCBSDCon I was obviously sliding down the ramp into illness.

    I don’t script my talks beforehand. Yes, I have bullet points on my slides, but they’re an outline. This leaves me free to shape what I say to fit the audience’s interests and reactions. This also means that if I’m on the verge of falling ill, phrases like “This sucks diseased moose wang” slip into the presentation. It’s not that I object to the term, but it’s stolen from a Harry Dresden novel. I prefer to hand-craft my insults, precisely tailoring each to fit the object of my derision. If you take the trouble to come see me, the least you can expect is originality.

    And speaking of speaking:

    Early in May, I’ll be at Penguicon. There I’ll be speaking and on panels covering BSD, sudo, SSH, DNSSEC, and writing.

    Later in May I’m teaching a four-hour sudo tutorial at BSDCan 2014.

    If you want to see me in 2014, these are your only opportunities short of coming to Detroit and joining my dojo. (That’s an option, of course, but there are better reasons for practicing martial arts than seeing me. Plus, at the dojo you’ll have to try to throw me. That gets tiring quickly.) I’ll have paper books available at both cons.

    I have no other public appearances planned for 2014. I intend to spend the rest of the year concentrating on home, writing, and martial arts.

    Come on. Hang out. I promise to not use the phrase “diseased moose wang” during any scheduled talk.

    Running Ancient Rsync

    Another “write it down so I don’t forget what I did” post.

    Some of the systems I’m responsible for are file storage machines, running rsync 3.0 or 3.1 as a daemon. Every hour, an ancient Solaris machine sends files to it using rsync 2.3.1. The billing team uses these files to create bills.

    Thursday, I rebooted the machine. And the rsync stopped working with:

    rsyncd[3582]: rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
    rsyncd[3582]: rsync error: error in rsync protocol data stream (code 12) at io.c(226) [Receiver=3.1.0]

    The rsyncd server hadn’t changed. No security patches, no package updates, no nothing.

    We cannot change the software on the Solaris machine. It’s attached to a multimillion-dollar telco switch, and editing the software on it would invalidate the warranty. The whole point of buying a multimillion-dollar telco switch is so you get the warranty. If something goes wrong, a team of vendor experts descends on your facility with enough spare parts to rebuild the switch from the ground up. (Telephony is nothing like IT. Really.) I cannot use SSH to transfer the files. I do not administer this machine–actually, I don’t want to administer this machine. I’m so unfamiliar with warranties on operating systems that I would probably void it by copying my SSH public key to it or something.

    The Solaris box is running rsync 2.3.1, which runs rsync protocol version 20. My systems use newer rsync, running protocol version 30 or 31.

    Rsyncd isn’t easily debuggable. Packet analysis showed messages about protocol errors. The rsync FAQ has a whole bunch of troubleshooting suggestions. None of them worked. I ran rsync under truss and strace and painstakingly read system calls. I eventually sacrificed a small helpless creature in accordance with ancient forbidden rites under last weekend’s full moon.

    After a few days of running through a backup system (an old but not quite ancient OpenSolaris box), I absolutely had to get this working. So: protocol errors? Let’s try an older rsync.

    Rsync 2.6.9? Same problem. I saw myself progressively working my way through building older versions, solving weird problems one by one, and eventually finding something old enough to work. This is not how I wanted to spend my week. Given how well running FreeBSD 4 in a FreeBSD 10 jail works, I tried something similar.

    The host ftp-archive.freebsd.org hosts releases of every FreeBSD version, including packages. FreeBSD 10 includes compatibility with FreeBSD back to version 4. I installed the compatibility libraries from /usr/ports/misc/compat4.

    The oldest FreeBSD 4 rsync package I could find was 2.4.6, from FreeBSD 4.1.1. Original FreeBSD packages were just zipped tar files. I extracted the files and checked that the binary could find all its libraries.

    # ldd rsync
    rsync:
    libc.so.4 => /usr/local/lib32/compat/libc.so.4 (0x2808a000)

    If this was more complicated software, with more libraries, I’d have to track down the missing ones. Rsync is very straightforward, however.

    I shut down the modern rsync daemon and fired up the old one.

    It worked.

    I still want to know how a reboot broke this. I’m assuming that something changed and that I lack the sysadmin chops to identify it. It’s not the rsync binary, or libc; both have date stamps several months old.

    I don’t recommend this, as older rsync has all kinds of security problems. These particular hosts are behind several layers of firewalls. If an intruder gets this far, I’m basically doomed anyway.

    So: if you’re very very stuck, and the clock has run out, using really old software is an option. But it still makes my skin crawl.

    Trying poo-DRE-eh — uh, poudriere

    This is my poudriere tutorial. There are many like it. But this one is mine. I built mine with resources like the BSDNow tutorial and the FreeBSD Forums tutorial. While all poudriere tutorials are inadequate, mine is inadequate in new and exciting ways. I’m writing it for my benefit, but what the heck, might as well post it here. (If you read this and say “I learned nothing new,” well, I warned you.)

    Your package building system must run the newest version of FreeBSD you want to support. I have 8, 9, and 10 in production, so my package builder needs to be FreeBSD 10 or newer. I’m using FreeBSD 11, because I’m running my package builder on my desktop. I upgraded this machine to the latest -current and updated all my packages and ports.

    Poudriere works on either UFS or ZFS partitions. I have copious disk, so I’ll dedicate two disks to poudriere. (This means I’ll have to blow away my install later when I start experimenting with disks, hence this blog posting.) I use disks ada2 and ada3.

    First, eradicate anything already on those disks.

    # gpart destroy -F ada3
    ada3 destroyed
    # gpart destroy -F ada2
    ada2 destroyed
    #

    Now I can create new GPT partitions. Each disk needs one partition for ZFS, covering the whole disk. You’ll find lots of discussion about partitioning disks with 512-byte sectors versus those with 4KB sectors, and the need to carefully align your disk partitions with the underlying sector size. The easy way around this is to duck the whole issue: assume 4KB sectors, use a null GEOM layer to align everything to the disk’s expectations, and put ZFS on top of the null layer. (If you KNOW the sector size of your disk, you can simplify the below, but remember, disks lie about their sector size just like they lie about geometry. It’s safest to gnop all the things.)

    # gpart create -s gpt ada2
    # gpart create -s gpt ada3
    # gpart add -a 4k -t freebsd-zfs -l disk2 ada2
    # gpart add -a 4k -t freebsd-zfs -l disk3 ada3
    # gnop create -S 4096 /dev/gpt/disk2
    # gnop create -S 4096 /dev/gpt/disk3
    # zpool create poudriere mirror /dev/gpt/disk2.nop /dev/gpt/disk3.nop

    I now have a filesystem to build packages on. Let’s get me a key to sign them with.

    # mkdir -p /usr/local/etc/ssl/keys
    # mkdir -p /usr/local/etc/ssl/certs
    # cd /usr/local/etc/ssl
    # openssl genrsa -out keys/mwlucas.key 4096
    # openssl rsa -in keys/mwlucas.key -pubout > certs/mwlucas.cert

    I installed the latest poudriere from /usr/ports/ports-mgmt/poudriere-devel. While the configuration file is lengthy, you don’t need to set many options.

    ZPOOL=poudriere
    FREEBSD_HOST=ftp://ftp.freebsd.org
    RESOLV_CONF=/etc/resolv.conf
    BASEFS=/poudriere
    POUDRIERE_DATA=${BASEFS}/data
    USE_PORTLINT=no
    USE_TMPFS=yes
    DISTFILES_CACHE=/usr/ports/distfiles
    CHECK_CHANGED_OPTIONS=verbose
    CHECK_CHANGED_DEPS=yes
    PKG_REPO_SIGNING_KEY=/usr/local/etc/ssl/keys/mwlucas.key
    URL_BASE=http://storm.michaelwlucas.com/

    I set the CHECK_CHANGED options because when I update my ports tree, I want to know about changed options before I build and deploy my packages. I set the URL_BASE so I can view the build logs on my web server.

    The build process uses its own make.conf file, /usr/local/etc/poudriere.d/make.conf, where I set some very basic things. The only mandatory setting is WITH_PKGNG. You could also have a separate make.conf for each individual jail, but I want all of my packages consistent, so I only use the central file.

    WITH_PKGNG="yes"
    WITHOUT_X11="yes"
    WITHOUT_HAL="yes"
    WITHOUT_NLS="yes"

    Get a ports tree just for poudriere. I could use the ports tree on the package builder, but it’s possible that ports on the builder might differ from what I want for the packages. It’s best to keep everything tidy.

    # poudriere ports -c
    ====>> Creating default fs... done
    ====>> Extracting portstree "default"...
    Looking up portsnap.FreeBSD.org mirrors... 7 mirrors found.
    Fetching public key from your-org.portsnap.freebsd.org... done.
    Fetching snapshot tag from your-org.portsnap.freebsd.org... done.
    Fetching snapshot metadata... done.
    ...

    Now create a jail to build packages in.

    At first blush, you might give the jail any random name. Using the same naming standard as the FreeBSD package builders eases deployment, however. The official standard combines operating system, version, architecture family, and word size. For example, 32-bit FreeBSD 10 on Intel-type processors is freebsd:10:x86:32, while the 64-bit version is freebsd:10:x86:64. (See pkg-repository for details.)
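
To keep the format straight, here is a trivial helper of my own devising (not part of poudriere or pkg) that assembles a name from the pieces:

```shell
#!/bin/sh
# Hypothetical helper: compose a pkg-style ABI name from OS version,
# architecture family, and word size. My own sketch, not a poudriere tool.
jailname() {
    printf 'freebsd:%s:%s:%s\n' "$1" "$2" "$3"
}

jailname 10 x86 64    # prints freebsd:10:x86:64
jailname 9 x86 32     # prints freebsd:9:x86:32
```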

    FreeBSD’s official package builds occur on the oldest supported version of a release. Packages for 9-stable are built on 9.1, so I do:

    # poudriere jail -c -j freebsd:9:x86:32 -v 9.1-RELEASE -a i386

    Then create jails for the other two releases I support.

    # poudriere jail -c -j freebsd:9:x86:64 -v 9.1-RELEASE -a amd64
    # poudriere jail -c -j freebsd:10:x86:64 -v 10.0-RELEASE -a amd64

    I only run the amd64 version of FreeBSD 10, because I don’t want to build i386 packages forever.

    This takes a little while, so start it and walk away. I copied these lines into a shell script and went to lunch.

    Note that these are not actual jails. They will not show up in jls(8). Technically, they’re chroots. (I’m not sure why poudriere calls them jails – maybe they were originally such but the need for a full jail went away, or they’re intended to become full jails later on. I’m sure there’s a perfectly sensible reason, however.) You can list your package-building jails only via poudriere.

    # poudriere jail -l
    JAILNAME VERSION ARCH METHOD PATH
    freebsd:9:x86:64 9.1-RELEASE amd64 ftp /poudriere/jails/freebsd:9:x86:64
    freebsd:9:x86:32 9.1-RELEASE i386 ftp /poudriere/jails/freebsd:9:x86:32
    freebsd:10:x86:64 10.0-RELEASE amd64 ftp /poudriere/jails/freebsd:10:x86:64

    Before building any packages, I want to update all the jails to the latest version. As I’ll need to do this before every package build, I script it. I also add the command to update poudriere’s ports tree.

    #!/bin/sh

    #update ports tree
    poudriere ports -p default -u

    #Update known builder jails to latest version
    poudriere jail -u -j freebsd:9:x86:32
    poudriere jail -u -j freebsd:9:x86:64
    poudriere jail -u -j freebsd:10:x86:64

    I must run this script every time I update my packages.

    Now to determine which packages I want to build. In my case, I want to build only packages that I can’t get from official FreeBSD sources. This means things like freeradius and Apache with LDAP support and PHP with Apache support. Simple enough. I created /usr/local/etc/poudriere.d/pkglist.txt containing:

    #web services - need LDAP & Apache PHP module
    www/apache22
    lang/php5
    #network - need LDAP & SMB
    net/freeradius2
    net/openldap24-server

    Now the fun part: setting the build options for these ports.

    # poudriere options -cf pkglist.txt

    This takes you into a recursive make config for all of your selected packages. If you know exactly which ports you must configure to get a properly built package, you could specify ports by name. Unfortunately, I always forget to add LDAP to some dependency, so I walk through all the configurations adding my various options.
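    If you do know which ports need options set, you can name them on the command line instead of walking the whole dependency tree with -f. A sketch, using two of the port origins from my pkglist.txt:

    ```shell
    # Configure options for just these ports rather than the entire list.
    # Requires poudriere(8); port origins taken from pkglist.txt above.
    poudriere options www/apache22 net/freeradius2
    ```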

    You can now build your packages. I want to update all of my packages simultaneously, so I wrote a trivial shell script to build packages.

    #!/bin/sh

    #Build packages for each builder jail

    poudriere bulk -f /usr/local/etc/poudriere.d/pkglist.txt -j freebsd:9:x86:32
    poudriere bulk -f /usr/local/etc/poudriere.d/pkglist.txt -j freebsd:9:x86:64
    poudriere bulk -f /usr/local/etc/poudriere.d/pkglist.txt -j freebsd:10:x86:64

    Walk away. Or, if you prefer, set up a web server so you can see the build progress and deliver packages to clients. I used apache22 and created /usr/local/etc/apache22/Includes/poudriere.conf containing:

    NameVirtualHost *:80

    <VirtualHost *:80>
        ServerAdmin mwlucas@minetworkservices.com
        DocumentRoot /poudriere/data
        ServerName storm.blackhelicopters.org

        <Directory /poudriere/data>
            Options Indexes FollowSymLinks
            AllowOverride AuthConfig
            Order allow,deny
            Allow from all
        </Directory>
    </VirtualHost>
    Yes, I could have hacked up httpd.conf to set DocumentRoot, but I prefer to leave package-created files alone if possible.

    Let’s turn to the client while this builds. The client needs a repository configuration file (pkg reads them from /usr/local/etc/pkg/repos/) and a copy of the build certificate. Here’s the configuration for my local repository, named mwlucas.

    mwlucas: {
    url: "http://storm.blackhelicopters.org/packages/${ABI}-default/",
    mirror_type: "http",
    signature_type: "pubkey",
    pubkey: "/usr/local/etc/ssl/certs/mwlucas.cert",
    fingerprints: "/usr/share/keys/pkg",
    enabled: yes
    }

    By using the standard jail names, I was able to use the ${ABI} variable in my repository path. This saves me from needing to update my repository configuration every time I upgrade.
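    To see what ${ABI} expands to on a given client, you can ask pkg itself; recent pkg versions have a config subcommand that prints a single configuration value:

    ```shell
    # Print this host's ABI string, which pkg substitutes for ${ABI}
    # in repository URLs (e.g. freebsd:10:x86:64 on 64-bit FreeBSD 10).
    pkg config abi
    ```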

    I copy my certificate to the directory specified in the pubkey option.

    You should be able to browse to the URL given as your repository. If the URL doesn’t work, you misconfigured your web server.
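    You can run the same check from the command line with fetch(1), which ships with FreeBSD. The hostname here is my build server; substitute your own:

    ```shell
    # Confirm the repository URL answers; -o /dev/null discards the page itself.
    fetch -o /dev/null http://storm.blackhelicopters.org/packages/
    ```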

    Check your repository configuration with pkg -vv. You should see all configured repositories.

    # pkg -vv
    ...
    Repositories:
    FreeBSD: {
    url : "pkg+http://pkg.FreeBSD.org/freebsd:10:x86:64/latest",
    enabled : yes,
    mirror_type : "SRV",
    signature_type : "FINGERPRINTS",
    fingerprints : "/usr/share/keys/pkg"
    }
    mwlucas: {
    url : "http://storm.blackhelicopters.org/packages/freebsd:10:x86:64-default/Latest/",
    enabled : yes,
    mirror_type : "HTTP",
    signature_type : "PUBKEY",
    fingerprints : "/usr/share/keys/pkg",
    pubkey : "/usr/local/etc/ssl/certs/mwlucas.cert"
    }

    Two repositories? Good. But can you actually access the repository? A pkg update should pull down the digests for your new repo.

    # pkg update
    Updating repository catalogue
    digests.txz 100% 1928 1.9KB/s 1.9KB/s 00:00
    packagesite.txz 100% 10KB 10.0KB/s 10.0KB/s 00:00
    Incremental update completed, 23 packages processed:
    0 packages updated, 0 removed and 23 added.

    A quick check will verify that the private repo has 23 packages.
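    One way to make that quick check: pkg rquery can list every package a named remote repository offers, and wc(1) counts them. The repository name is the mwlucas repo configured above:

    ```shell
    # Count the packages in the private repo; should report 23 here.
    pkg rquery -r mwlucas '%n-%v' | wc -l
    ```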

    Now for the weakest part of pkgng: actually using your repository for select packages. If I built everything, I’d just disable the FreeBSD repository. But I want to only build packages that differ from the default.

    pkg searches each repository in order. At the moment, repositories appear in alphabetical order. I could rename my repository so that it sorts before FreeBSD; pkg would then search my repo first and fall back to the FreeBSD repo for anything it didn’t find.

    This would be ideal. But alphabetical repository ordering is not guaranteed. The only way to ensure the packages install from the correct repository is to tell each individual FreeBSD server where it should get specific packages. This kind of sucks, but it’s still an improvement on pkg_add. (I hear that repository handling should improve in future versions of pkg.)

    Use the -r flag to specify a repository with pkg. For example, let’s search my private repo for an Apache package.

    # pkg search -r mwlucas apache
    apache22-2.2.26

    The package is there. Excellent. Install it.

    # pkg install -r mwlucas apache22
    Updating repository catalogue
    The following 8 packages will be installed:

    Installing expat: 2.1.0 [mwlucas]
    Installing perl5: 5.16.3_7 [mwlucas]
    Installing pcre: 8.34 [mwlucas]
    Installing openldap-client: 2.4.38 [mwlucas]
    Installing gdbm: 1.11 [mwlucas]
    Installing db42: 4.2.52_5 [mwlucas]
    Installing apr: 1.4.8.1.5.3 [mwlucas]
    Installing apache22: 2.2.26 [mwlucas]

    The installation will require 94 MB more space

    19 MB to be downloaded

    Proceed with installing packages [y/N]: y

    Take careful note of this list of dependency packages installed from your repository.

    Here you have to decide how you want to upgrade your server. All packages that you want to upgrade from your repo need to be marked as such. How many of these dependency packages must be upgraded via your repo? That’s a good question. If you took note of which packages you reconfigured, way back in the beginning, you could label only those changed packages as requiring your repo. I didn’t do that. If this were a brand-new machine, I would install packages from my repo first and mark everything installed as requiring my repo. I didn’t do that either. Instead, I’m going to mark all packages required by apache22 as belonging to my repo, like this.

    # pkg annotate -Ay apache22 repository mwlucas
    apache22-2.2.26: added annotation tagged: repository
    # pkg annotate -Ay pcre repository mwlucas

    Repeat this for each package installed by this package install.
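    Rather than typing eight annotate commands, a short loop does the job. The package names are taken from the install output above:

    ```shell
    # Tag every package from this install as belonging to the mwlucas repo.
    for p in expat perl5 pcre openldap-client gdbm db42 apr apache22; do
        pkg annotate -Ay "$p" repository mwlucas
    done
    ```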

    The repository tag appears when you run pkg info on a specific package.

    # pkg info pcre | grep repository
    repository : mwlucas

    pkg upgrade will now only use your repository for the specified package.

    Is this long and complicated? Sure. But it beats the snot out of maintaining three separate package repositories under the old package system.