Revoked and Replaced OpenPGP Key

I uploaded a GPG key to subkeys.pgp.net back in 2005. It’s well past time for me to replace it. I covered creating your revocation certificate back in PGP & GPG, but didn’t actually write about using that revocation certificate. Nine years later… yeah, I better figure this out.

So I log in to the machine with my keypair and create my revocation certificate.

# gpg --output oldgpg.revoke.asc --gen-revoke E68C49BC

sec 1024D/E68C49BC 2005-02-21 Michael Warren Lucas Jr (Author, consultant, sysadmin)

Yep, that’s my old key.

Create a revocation certificate for this key? (y/N) y
Please select the reason for the revocation:
0 = No reason specified
1 = Key has been compromised
2 = Key is superseded
3 = Key is no longer used
Q = Cancel
(Probably you want to select 1 here)
Your decision? 2

Why is this key being revoked? Because it’s nine years old. I’ve generated a new key.

Enter an optional description; end it with an empty line:
>
Reason for revocation: Key is superseded
(No description given)
Is this okay? (y/N) y

Nobody cares about the details, so I don’t enter any.

You need a passphrase to unlock the secret key for
user: "Michael Warren Lucas Jr (Author, consultant, sysadmin) "
1024-bit DSA key, ID E68C49BC, created 2005-02-21

I enter my passphrase.

ASCII armored output forced.
Revocation certificate created.

I now have a revocation certificate, oldgpg.revoke.asc. To activate it, I import it into my keyring.

# gpg --import oldgpg.revoke.asc
gpg: key E68C49BC: "Michael Warren Lucas Jr (Author, consultant, sysadmin) " revocation certificate imported
gpg: Total number processed: 1
gpg: new key revocations: 1
gpg: 3 marginal(s) needed, 1 complete(s) needed, classic trust model
gpg: depth: 0 valid: 2 signed: 14 trust: 0-, 0q, 0n, 0m, 0f, 2u
gpg: depth: 1 valid: 14 signed: 1 trust: 14-, 0q, 0n, 0m, 0f, 0u
gpg: next trustdb check due at 2020-10-13

No passphrase needed–it just happens.

Now: sleep tight, sweet prince.

# gpg --send-keys E68C49BC
gpg: sending key E68C49BC to hkp server subkeys.pgp.net

My old key is dead.

For the record, my new key is 1F2E54A8, for mwlucas at michaelwlucas dot com.

Now if I could only kill 4EBA9723…

“Storage Essentials” first draft complete!

The first draft of FreeBSD Mastery: Storage Essentials is now complete and available on the Tilted Windmill Press site.

My target for a Mastery book is for it to be about 30K words. That seems a fair length for a $10 technology ebook. FMSE is 45K words, or about 50% more than that. At that price point, it’ll be a bargain. The print version will probably run more than the $20 I prefer, but we’ll see what happens.

As it’s a complete draft, the price has been raised to $8.99. Once the book goes through technical review and I correct it, the price will go to its final $9.99. At that point, I’ll get it into Amazon, B&N, and so on, in both print and epub.

So, what’s next?

Next, I look at my pile of outlines and try to untangle them. I’m planning FreeBSD books on jails, on ZFS, specialty filesystems (which might or might not include network filesystems), and more. These topics are all terribly interrelated. As I’m now writing full time, I need to figure out the approach that makes the best use of my time and yet gives me the maximum amount of exposure to everything.

I still intend to do a small OpenBSD book in the near future, but I’m still debating what that should be. I have high hopes for both OpenHTTPD and LibreSSL, but I want both projects to settle first. And I have a whole list of non-BSD books on my list as well.

There’s also the possibility that the market will reject FMSE. If that happens, it will limit how many more FreeBSD Mastery books I do. I think that won’t happen–I expect the book to do well–but I never know. As I’m depending on books to pay my mortgage, I might have to make the hard decision to cancel the series. We’ll have to wait and see.

I’m changing careers

My employer was just bought by another company. I find myself unemployed. This was not unexpected, so I’ve had time to think about what to do next.

I could have another IT job by three PM by picking up the phone and calling my friend Pam. If Pam were out of town, I’d call half a dozen other people and have a job by noon tomorrow. I’d certainly get a raise over what I’m making now–actually, given that I specifically chose a lower-paying job with less stress, I could double my salary. That’s what monster.com tells me.

But I don’t think I want to do that.

Instead, after talking with my family and taking a hard look at our finances, I’ve decided to write full time.

This is a big pay cut for me. Yes, even from my low-stress low-pay job. It means not going out to eat, hoping the car doesn’t drop a transmission, and mowing my own lawn instead of having Chuck the lawn guy do it for me. But it’s work I’ll enjoy far more than being paged at stupid o’clock because some beancounter decided we didn’t need to replace that faulty power supply. I’ll enjoy it a heck of a lot more than attending yet another pointless staff meeting.

Based on my previous book sales, it appears that I can get my income up to what my recently-departed job paid in about 12-18 months of hard writing. It’ll be a spartan year, but that’s okay.

Any number of things could derail this plan. I might wind up working at a big company in six months, regretting ever calling anyone a beancounter.

Writing for a living means I must figure out how quickly I actually write and come up with a real production schedule. My books have all been written in one-hour stretches, in a variety of inconvenient locations. I have no idea what my sustained output looks like, especially once I no longer have any threat of a phone call waking me up in the middle of the night. (It doesn’t matter how “low-stress” a job is; a faulty email server that takes ten days to get properly fixed means two weeks without writing.)

Writing for a living means I need to write towards money like a hungry rat gnawing through the brick wall of the butcher shop. My family is supportive, but we do like to go out to eat now and then at a fancy place, like Qdoba. I’m going to try a bunch of different projects and see which take off. I have high hopes for the forthcoming FreeBSD Mastery books, and I have a list of thirty other titles to work on.

Writing for a living means I need to be a lot more consistent about, say, mentioning that I have a tip jar at the bottom of technical blog posts. I must overcome my shame at saying “Hey, if I helped you, give me money.”

This means that if you’re one of the organizations that owe me conference reimbursements, I’m gonna knock on your door with my hand out. I have lots of time to do that now.

This means when my clothes wear out I shop at Salvation Army rather than Costco. That’s okay. Old Sal is more fun, even though you probably don’t want to eat any of the free samples.

Speaking of Costco? Yeah… try the farmer’s market in downtown Detroit instead. What’s in season is cheap.

On the other hand, I only have one full-time job now instead of two. I’ll have free time. I’m looking forward to it. I’ve studied the craft of writing for decades now, and given up a lot of things for it. Why, I hear they rebooted Star Trek a few years ago. I grew up watching that show, and I’d really enjoy catching up with it. I can’t see how they’ll have a bald French guy as captain of the Enterprise, but what the heck, I’ll give it a try.

But I am really going to miss the lawn guy.

Shuffling Partitions on FreeBSD

I’ve recently moved my personal web sites to https://www.vultr.com/, using virtual machines instead of real hardware. (I’ve caught up to the 2000s, hurrah!) I didn’t track server utilization, so I provisioned the machines based on a vague gut feeling.

The web server started spewing signal 11s, occasionally taking down the site by killing mysql. Investigation showed that this server didn’t have enough memory. How can 1GB RAM not be enough for WordPress and MySQL? Why, back in my day–

<SLAP>

Right. Sorry about that.

Anyway, I needed to increase the amount of memory. This meant moving up to a larger hosting package, which also expanded my hard drive space. After running gpart recover to move the backup GPT table to the end of the new disk, my new disk was partitioned like so:

# gpart show

=>        34  1342177213  vtbd0  GPT  (640G)
          34         128      1  freebsd-boot  (64K)
         162           6         - free -  (3.0K)
         168   666988544      2  freebsd-ufs  (318G)
   666988712     4096000      3  freebsd-swap  (2.0G)
   671084712   671092535         - free -  (320G)

I have 320 GB of free space at the end of the disk.
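gpart counts in 512-byte sectors, so you can sanity-check that free-space figure yourself. A quick sketch, using the sector count from the output above:

```shell
# 671092535 free sectors * 512 bytes/sector, expressed in GiB
awk 'BEGIN{ printf "%.1f\n", 671092535 * 512 / 1073741824 }'   # → 320.0
```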

The easy thing to do would be to create a new partition in that space. I advocate partitioning servers. The only reason this system has one large partition is that the hosting provider set it up that way.

I’m writing a book on FreeBSD disk partitioning, however, so this struck me as an opportunity to try something that I need for the book. (As I write this, you can still get FreeBSD Mastery: Storage Essentials at a pre-pub discount.) How would I expand the root partition, with the swap space smack dab in the middle of the disk?

Virtualized systems have no awareness of the underlying disk. Advice like “put the swap partition at the beginning of the disk” becomes irrelevant, as you have no idea where that physically is. On a system like this, how would I use the built-in FreeBSD tools to create a swap partition at the end of the disk, and expand the existing partition to fill the remaining space?

This isn’t as easy as you might think. FreeBSD’s gpart command has no feature to add a partition at a specific offset. But it can be done.

Any time you touch disk format or partitions, you might lose filesystems. Back up your vital files. For a WordPress web server, this is my web directory and the SQL database. (My backup includes a half-complete version of this article. If my repartitioning goes badly, I’ll retitle this piece “How to Not Repartition.” But anyway…) Copy these files off the target server.

Now, what exactly do I want to do?

  • I want 4-5GB of swap space, at the end of the disk. (The server now has 2GB RAM.)
  • I want to remove the current swap space.
  • I want to expand the root partition to fill the remaining space.

    gpart(8) won’t let me say “create a 4GB partition at the end of the disk.” It will let me create a filler partition that I have no intention of actually using, however. As I’m sure the free space is not precisely 320GB, I’m going to play it safe and give myself 5GB of room for this swap partition. I give this partition a label to remind me of its purpose.

    # gpart add -t freebsd -s 315GB -l garbage vtbd0
    vtbd0s4 added

    The partitioning now looks like this.

    =>        34  1342177213  vtbd0  GPT  (640G)
              34         128      1  freebsd-boot  (64K)
             162           6         - free -  (3.0K)
             168   666988544      2  freebsd-ufs  (318G)
       666988712     4096000      3  freebsd-swap  (2.0G)
       671084712   660602880      4  freebsd  (315G)
      1331687592    10489655         - free -  (5.0G)

    Now I can add a swap partition at the end of the disk.

    # gpart add -t freebsd-swap -l swap0 vtbd0
    vtbd0p5 added

    The resulting partitioning looks like this.

    # gpart show

    =>        34  1342177213  vtbd0  GPT  (640G)
              34         128      1  freebsd-boot  (64K)
             162           6         - free -  (3.0K)
             168   666988544      2  freebsd-ufs  (318G)
       666988712     4096000      3  freebsd-swap  (2.0G)
       671084712   660602880      4  freebsd  (315G)
      1331687592    10489655      5  freebsd-swap  (5.0G)

    Tell /etc/fstab about the new swap partition, and remove the old one.

    /dev/gpt/swap0 none swap sw 0 0

    (In looking at the old entry, I realized that Vultr uses glabel(8) labels, where I use gpart(8) labels. Either type is fine, but I need to remember that /dev/label/swap0 is the old swap partition, and /dev/gpt/swap0 is the new one.)

    Activate the new swap space. I could reboot, but why bother?

    # swapon /dev/gpt/swap0

    My swap now looks like this.

    # swapinfo -h

    Device          1K-blocks     Used    Avail Capacity
    /dev/label/swap0   2047996       0B     2.0G     0%
    /dev/gpt/swap0    5244824       0B     5.0G     0%
    Total             7292820       0B     7.0G     0%
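As a quick cross-check of that table: the two 1K-block counts sum to the total, and the new partition’s block count works out to the 5.0G swapinfo reports (numbers copied from the output above):

```shell
echo $(( 2047996 + 5244824 ))                       # total 1K-blocks → 7292820
awk 'BEGIN{ printf "%.1f\n", 5244824 / 1048576 }'   # 1K-blocks as GiB → 5.0
```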

    Turn off the old swap.

    # swapoff /dev/label/swap0

    The old swap is unused. I can put it out of its misery. Double-check gpart show to learn which partition is your swap space (3) and your temporary placeholder (4). Double double-check these numbers. We’re going to remove these partitions. If you delete your data partition due to your own stupidity you will be… unhappy.

    # gpart delete -i 3 vtbd0
    vtbd0p3 deleted
    # gpart delete -i 4 vtbd0
    vtbd0s4 deleted

    Triple double-check: do you still have a root filesystem? (Yes, FreeBSD has safeguards to prevent you from deleting mounted partitions. Check anyway.)

    # gpart show

    =>        34  1342177213  vtbd0  GPT  (640G)
              34         128      1  freebsd-boot  (64K)
             162           6         - free -  (3.0K)
             168   666988544      2  freebsd-ufs  (318G)
       666988712   664698880         - free -  (317G)
      1331687592    10489655      5  freebsd-swap  (5.0G)

    Our swap space is at the end of the disk. And we have 317GB of free space right next to our root filesystem. You have not ruined your day. Yet.

    Double-check your backups. Do you really have everything you need to recreate this server? If so, expand the root filesystem with gpart resize. Don’t specify a size, and the new partition will fill all available contiguous space.

    # gpart resize -i 2 vtbd0
    vtbd0p2 resized
    # gpart show

    =>        34  1342177213  vtbd0  GPT  (640G)
              34         128      1  freebsd-boot  (64K)
             162           6         - free -  (3.0K)
             168  1331687424      2  freebsd-ufs  (635G)
      1331687592    10489655      5  freebsd-swap  (5.0G)
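The sector math backs up that 635G figure (sector count for partition 2 from the gpart output above, 512 bytes per sector):

```shell
awk 'BEGIN{ printf "%.1f\n", 1331687424 * 512 / 1073741824 }'   # → 635.0 GiB
```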

    Now I have a 318GB filesystem on a 635GB partition. Let’s expand that filesystem to fill the partition. You can’t resize via a filesystem label such as /dev/label/root0; you must use a partition identifier like vtbd0p2 or /dev/gpt/rootfs0. In FreeBSD 10, you can use growfs on mounted filesystems.

    # growfs /dev/vtbd0p2
    It's strongly recommended to make a backup before growing the file system.
    OK to grow filesystem on /dev/vtbd0p2 from 318GB to 635GB? [Yes/No] yes
    growfs: /dev/vtbd0p2: Operation not permitted

    Not permitted? I activated GEOM debugging mode by setting kern.geom.debugflags to 0x10, but was still denied. I’ve grown mounted filesystems before, so what the heck?

    This virtual server has disabled soft updates, journaling, and all the fancy FreeBSD disk performance features. I suspect this error is tied to that. Let’s go to single user mode and grow the filesystem unmounted.

    I reboot, and get:

    Mounting from ufs:/dev/label/rootfs0 failed with error 19.

    Even when I know what’s wrong, this message makes the little voice in the back of my skull simultaneously call me an idiot and scream “You destroyed your filesystem! Ha ha!” Plus, I can no longer make notes in my web browser–the article is on the non-running server.

    Fortunately, I know which partition is the root partition. I enter

    mountroot> ufs:/dev/vtbd0p2

    and get the familiar single-user-mode prompt. Now I can do:

    # growfs vtbd0p2

    I answer yes, and new superblocks scroll across the screen. The filesystem grows to fill all available contiguous space.

    My suspicion is that resizing the partition destroyed the label. Many GEOM classes store information in the last sector of the partition. Let’s use a GPT label instead.

    # gpart modify -i 2 -l rootfs vtbd0

    Mount the root filesystem read-write.

    # mount -o rw /dev/vtbd0p2 /

    Create a new /etc/fstab entry for the root filesystem, using the GPT label instead of the glabel(8) one.

    /dev/gpt/rootfs / ufs rw,noatime 1 1

    And then a reboot to see if everything comes back. It does.

    My partitions now look like this:

    # df -h

    Filesystem         Size    Used   Avail Capacity  Mounted on
    /dev/gpt/rootfs    615G    6.1G    560G     1%    /
    devfs              1.0K    1.0K      0B   100%    /dev
    devfs              1.0K    1.0K      0B   100%    /var/named/dev

    All installed disk space is now in use. Mission accomplished!

    Having written this, though, I have no chance of forgetting that I need to go back and do a custom install to partition the server properly.

    Phabricator on FreeBSD installation notes

    BSDs generally break their PHP packages into smaller units than most Linux distributions do. This means that you need extra packages when following installation guides. I’m installing Phabricator on FreeBSD because I want ZFS under it.

    This is the complete list of PHP modules and related stuff I needed to install to get Phabricator to run on FreeBSD 10.0p7/amd64. Don’t use PHP 5.5, as some modules are only available with PHP 5.4.

    php5
    mod_php5
    php5-curl
    php5-gd
    php5-iconv
    php5-mbstring
    php5-mysql
    php5-pcntl
    pecl-APC
    php5-filter
    pear-Services_JSON
    php5-json
    php5-hash
    php5-openssl
    php5-ctype
    php5-posix
    php5-fileinfo

    Restart your web server after installing everything.

    Phabricator wants a lot of control over its database. I don’t like giving web applications root privileges on a database. This article by David Antaramian was quite helpful there.

    Once you have your user set up, initialize the Phabricator database by running

    # ./storage upgrade --user root --password MyRootPassword

    This gives the script the access needed to actually create and adjust Phabricator databases.

    After that, the Phabricator installer seems to do a good job of walking you through fixing the various setup niggles.

    New autobiography chapter: The Thumbs

    Lots of people are sad today, so it seemed a good time to put this up. And as I got yelled at by people the last time I didn’t mention this on the blog:

    There’s a new autobiography chapter up.

    Why am I writing an autobiography? I’m not. But these are stories I tell over and over again, so I put them up in a central place.

    FreeBSD ZFS snapshots with zfstools

    In my recent survey of ZFS snapshot automation tools, I short-listed zfstools and zfsnap. I’ll try both, but first we’ll cover FreeBSD ZFS snapshots with zfstools. Zfstools includes a script for creating snapshots, another for removing old snapshots, and one for snapshotting MySQL databases. The configuration uses only ZFS attributes and command line arguments via cron.

    Start by deciding which ZFS filesystems you want to snapshot. The purpose of snapshotting is to let you get at older versions of a filesystem, so you can either roll back the entire filesystem or grab an older version of a file. I’m a fan of partitioning, and typically use many partitions on a ZFS system. I use separate ZFS filesystems for everything from /usr/ports/packages to /var/empty.

    So, which of these partitions won’t need snapshots? I don’t snapshot the following, either because I don’t care about older versions or because the contents are easily replicable. (I would snapshot some of these on, say, my package-building machine.)

    /tmp
    /usr/obj
    /usr/src
    /usr/ports
    /usr/ports/distfiles
    /usr/ports/packages
    /var/crash
    /var/empty
    /var/run
    /var/tmp

    Zfstools uses the ZFS property com.sun:auto-snapshot to determine if it should snapshot a filesystem. On a new system, this property should be totally unset, like so:


    # zfs get com.sun:auto-snapshot
    ...
    zroot/var/tmp    com.sun:auto-snapshot  -      -
    ...

    Set this property to false for datasets you want zfstools to never snapshot.

    # zfs set com.sun:auto-snapshot=false zroot/var/empty

    You should now see the attribute set:


    # zfs get com.sun:auto-snapshot zroot/var/empty
    NAME             PROPERTY               VALUE  SOURCE
    zroot/var/empty  com.sun:auto-snapshot  false  local

    Set this attribute for every filesystem you don’t want to snapshot, then activate snapshotting on the entire zpool.

    # zfs set com.sun:auto-snapshot=true zroot

    The other filesystems will inherit this property from their parent zpool.

    Now to activate snapshots. Zfstools’ zfs-auto-snapshot script expects to run out of cron. It requires two arguments: the name of the snapshot, and how many of that snapshot to keep. So, to create a snapshot named “15min” and retain 4 of them, you would run

    # zfs-auto-snapshot 15min 4

    Zfstools lets you name your snapshot anything you want. You can call your hourly snapshots LucasIsADweeb if you like. Other snapshot tools are not so flexible.

    The sample cron file included suggests retaining 4 15 minute snapshots, 24 hourly snapshots, 7 daily snapshots, 4 weekly snapshots, and 12 monthly snapshots. That’s a decent place to start, so give root the following cron entries:

    PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin
    15,30,45 * * * * /usr/local/sbin/zfs-auto-snapshot 15min     4
    0        * * * * /usr/local/sbin/zfs-auto-snapshot hourly   24
    7        0 * * * /usr/local/sbin/zfs-auto-snapshot daily     7
    14       0 * * 7 /usr/local/sbin/zfs-auto-snapshot weekly    4
    28       0 1 * * /usr/local/sbin/zfs-auto-snapshot monthly  12
    

    (The zfstools instructions call the 15-minute snapshots “frequent.” I’m choosing to use a less ambiguous name.)
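One nice property of this schedule: the retention counts bound how many snapshots pile up on each filesystem.

```shell
# 4 15-minute + 24 hourly + 7 daily + 4 weekly + 12 monthly snapshots
echo $(( 4 + 24 + 7 + 4 + 12 ))   # → at most 51 snapshots per filesystem
```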

    One important thing to note is that zfstools is written in Ruby, and each script starts with an env(1) shebang to find the Ruby interpreter. As env(1) searches $PATH, you must set $PATH in your crontab.

    Now watch /var/log/cron for error messages. If zfs-auto-snapshot runs correctly, you’ll start to see snapshots like these:

    # zfs list -t snapshot

    NAME                                                         USED  AVAIL  REFER  MOUNTPOINT
    zroot/ROOT/default@zfs-auto-snap_monthly-2014-08-01-00h28       0      -   450M  -
    zroot/ROOT/default@zfs-auto-snap_weekly-2014-08-03-00h14        0      -   450M  -
    zroot/ROOT/default@zfs-auto-snap_daily-2014-08-06-00h07         0      -   450M  -
    zroot/ROOT/default@zfs-auto-snap_hourly-2014-08-06-11h00        0      -   450M  -
    zroot/ROOT/default@zfs-auto-snap_15min-2014-08-06-11h30         0      -   450M  -
    zroot/home@zfs-auto-snap_hourly-2014-07-31-16h00              84K      -   460K  -
    ...
    

    Each snapshot is unambiguously named after the snapshot tool, the snapshot name, and the time the snapshot was made.
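The names are easy to pick apart. Here’s a sketch of the same pattern in plain sh (zfstools itself builds these names in Ruby; this is just an illustration):

```shell
label="15min"   # whatever name you passed to zfs-auto-snapshot
snapname="zfs-auto-snap_${label}-$(date +%Y-%m-%d-%Hh%M)"
echo "$snapname"   # e.g. zfs-auto-snap_15min-2014-08-06-11h30
```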

    Snapshots consume an amount of disk space related to the amount of churn on the filesystem. The root filesystem on this machine doesn’t change, so the snapshots are tiny. Filesystems with a lot of churn will generate much larger snapshots. I would normally recommend not snapshotting these filesystems, but you can at least exclude the 15-minute snapshots by setting a property.

    # zfs set com.sun:auto-snapshot:15min=false zroot/var/churn

    Note that this won’t show up when you do a zfs get com.sun:auto-snapshot. You must specifically check for this exact property. To see all snapshot settings, I wound up using zfs get all and grep(1).

    # zfs get -t filesystem all | grep auto-snapshot

    zroot/ROOT/default  com.sun:auto-snapshot:frequent  false    local
    zroot/ROOT/default  com.sun:auto-snapshot           true     inherited from zroot
    zroot/ROOT/default  sun.com:auto-snapshot           true     inherited from zroot
    

    If you disable a particular interval’s snapshots on a filesystem with existing snapshots, the older snapshots will gradually rotate away. That is, if you kept 4 15-minute snapshots, when you disable those 15-minute snapshots the old snapshots will disappear over the next hour.
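The “over the next hour” follows directly from the retention math:

```shell
# 4 retained snapshots * 15 minutes between them
echo $(( 4 * 15 ))   # → 60 minutes until the last old snapshot rotates away
```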

    Overall, zfstools works exactly as advertised. It creates new snapshots as scheduled, destroys old snapshots, and has very fine-grained control over what you’ll snapshot and when. The use of ZFS attributes to control which filesystems get snapshotted under which conditions doesn’t thrill me, but I’m biased towards configuration files and reading this configuration is really no worse than a config file.

    I’ll try zfsnap next.

    A survey of FreeBSD ZFS snapshot automation tools

    Why automatically snapshot filesystems? Because snapshots let you magically fall back to older versions of files and even the operating system. Taking a manual snapshot before a system upgrade is laudable, but you need to easily recover files when everything goes bad. So I surveyed my Twitter followers to see what FreeBSD ZFS snapshot automation tools they use.

    The tools:

  • A few people use custom shell scripts of varying reliability and flexibility. I’m not going to write my own shell script. The people who write canned snapshot rotation tools have solved this problem, and I have no desire to re-solve it myself.
  • One popular choice was sysutils/zfs-snapshot-mgmt. This lets you create snapshots as often as once per minute, and retain them as long as you desire. Once a minute is a bit much for me. You can group snapshot creation and deletion pretty much arbitrarily, letting you keep, say, 867 per-minute snapshots, 22 every-seven-minute snapshots, and 13 monthlies, if that’s what you need. This is the Swiss army knife of ZFS snapshot tools. One possible complication with zfs-snapshot-mgmt is that it is written in Ruby and configured in YAML. If you haven’t seen YAML yet, you will–it’s an increasingly popular configuration syntax. My existing automation is all in shell and Perl, however, and I added Python for Ansible. Adding yet another interpreter to all of my ZFS systems doesn’t thrill me–Ruby isn’t a show-stopper, but it’s a mark against. The FreeBSD port is also outdated–the web site referenced by the port says that the newest code, with bug fixes, is on github. If you’re looking for a FreeBSD porting project, this would be an easy one.
  • The zfs-periodic web page is down. NEC Energy Solutions owns the domain, so I’m guessing that the big corporate overlord claimed the blog and the site isn’t coming back. The code still lives at various mirrors, however. zfs-periodic is tightly integrated with FreeBSD’s periodic system, and can automatically create and delete hourly, daily, monthly, and weekly snapshots. It appears to be the least flexible of the snapshot systems, as it runs with periodic. If you want to take your snapshots at a time that periodic doesn’t run, too bad. I don’t get a very good feeling from zfs-periodic–if the code had an owner, it would have a web site somewhere.
  • sysutils/zfsnap can do hourly, daily, weekly, and monthly snapshots. It’s designed to run from periodic(8) or cron(8), and is written in /bin/sh.
  • sysutils/zfstools includes a clone of OpenSolaris’ automatic snapshotting tools. I no longer run OpenSolaris-based systems, except on legacy servers that I’m slowly removing, but I never know what the future holds around the dayjob. (I’m waiting for the mission-critical Xenix deployment, I’m sure it’s not far off.) This looks highly flexible, being configured by a combination of cron scripts and ZFS attributes, and can snapshot every 15 minutes, hour, day, week, and month. It’s written in Ruby (yet another scripting language on my system? Oh, joy. Joy and rapture.) On the plus side, the author of zfstools is also a FreeBSD committer, so I can expect him to keep the port up to date.

    In doing this survey I also came across sysutils/freebsd-snapshot, a tool for automatically scheduling and automounting UFS snapshots. While I’m not interested in UFS snapshots right now, this is certainly worth remembering.

    My choice?

    So, which ones will I try? I want a tool that’s still supported and has some flexibility. I want a FreeBSD-provided package of the current version of the software. I’m biased against adding another scripting language to my systems, but that’s not veto-worthy.

    If I want compatibility with OpenSolaris, I’ll use zfstools. I get another scripting language, yay!

    If I don’t care about OpenSolaris-derived systems, zfsnap is the apparent winner.

    Of course, I won’t know which is better until I try both… which will be the topic of a couple more blogs.

    UPDATE, 07-31-2014: I screwed up my research on zfsnap. I have rewritten that part of the article, and my conclusions. My apologies — that’s what happens when you try to do research after four hours sleep. Thanks to Erwin Lansing for pointing it out.

    (“Gee, I’m exhausted. Better not touch any systems today. What shall I do? I know, research and a blog post!” Sheesh.)

    Google Play notes

    A couple months ago, I put my Tilted Windmill Press books up on Google Play. I firmly believe that having your books widely available is a good thing. Google Play lets me be DRM-free, and while their discounting system is a pain to work around, I’d like people to be able to get my books easily. I’ve sold six books through Google Play, which isn’t great, but hey, it’s six readers I wouldn’t have otherwise.

    Amazon is overwhelmingly my biggest reseller. I get over 90% of my self-publishing income from them. They provide truly impressive analytical tools. While sites like Smashwords provide you with spreadsheets that you can dump into whatever analytics tools you want, Amazon gives you the spreadsheets and a bunch of graphs and charts and other cool stuff.

    This made it really obvious that a day after my books went live on Google Play, my Amazon sales plummeted by about a third and have remained there.

    This is weird. And I really would like my sales back up where they were.

    I can think of lots of explanations, most of them involving computer algorithms. No conspiracy is required here. I’m certain Amazon didn’t de-prioritize my books just because they’re available on Google Play. Book sales fluctuate naturally, and there’s usually a dip during the summer. But the graphs (both Amazon’s and my own) make it really clear that this is an unusual slump.

    As an experiment, I’ve disabled my books in Google Play. People who bought the books will still have access to them, but nobody can purchase them now.

    If my Amazon sales recover, the Google Play store will remain off. The few Play sales don’t make up for the lost Amazon sales.

    I will report back on the results. But, if you’re wondering where my Google Play store went, the answer is: away.