SolusVM KVM offline migration with shared storage

I’m building a new virtualization cloud with SolusVM, KVM, and a bit of Xen (to make use of older hardware). Each machine has its own hard disk, but it only holds the local operating system. All virtual machines reside on cheap iSCSI storage, so I can easily migrate VMs from one compute node to another. The goal being, of course, to separate service failures from hardware failures. (I still have to deal with possible storage failures, of course, but hot-swap hard drive arrays reduce my risk somewhat.)

SolusVM provides a nice front end to the whole Linux virtualization tangle. It does exactly what it claims, and at a reasonable price. I’m happy to pay someone a couple bucks a year per physical server to give me a non-sucky cloud front end that Just Works. One feature that it lacks is live migration for KVM and Xen hosts. Live VM failover is nice, but not essential for my purposes. As part of our Redundant Array of Inexpensive Crap strategy, I cluster VMs as well as physical servers: multiple mail servers, multiple DNS servers, and so on.

While there’s documentation on how to cold-migrate Xen VMs, there’s nothing on how to migrate a KVM VM from one node to another, let alone how to do it with shared storage. The forum says the Xen method should work with KVM, though. Let’s try it and see what happens!

The Xen page talks about replicating the LVM container on the new node. With shared storage, you can skip this step; I defined my SolusVM groups based on the iSCSI device they’re attached to. I imagine the same migration process would work with unshared storage, if you duplicated the disk data first.

Go into the SolusVM GUI and note the VM number and the node number. For my test, I want to move VM 2 onto node 4. Log onto the master server, become root, and run:

# cd /scripts
# ./vm-migrate 2 4
Virtual server information updated!
#

I then tried to start the VM via the GUI, and it wouldn’t boot. I logged onto the compute node to find out why. Any time I have a virtualization problem involving multiple pieces of hardware, I check /var/log/libvirt/libvirtd.log. Starting the virtual machine generated this log message:

14:36:13.417: 1443: error : qemuMonitorOpenUnix:290 : failed to connect to monitor socket: No such process
14:36:13.417: 1443: error : qemuProcessWaitForMonitor:1289 : internal error process exited while connecting to monitor: inet_listen_opts: bind(ipv4,0.0.0.0,5901): Address already in use
inet_listen_opts: FAILED

The KVM instance could not use port 5901 because something else was already using it. KVM offers console access over VNC, attached to a port above 5900: machine number one’s console is on port 5901, machine number two’s on port 5902, and so on.

The vm-migrate script didn’t change the console port. I went into the VM’s entry, changed the port by hand, and brought up the machine without trouble. Annoying, but not insurmountable.
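
If you hit the same conflict, you can check which VNC ports are already taken on the target node before picking a new one. A quick sketch, assuming a Linux compute node with libvirt; vm101 is only a placeholder for one of your VM names:

# netstat -lnt | grep ':59'
# virsh vncdisplay vm101

virsh vncdisplay prints the display number of a running VM; add 5900 to get the TCP port.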

Hopefully this helps the next sysadmin searching for this topic.

Remote Web Browsing via OpenSSH and PuTTY

I’m installing SolusVM as a virtualization management system. It lets you manage your private cloud via a Web browser, set up resellers, and so on. When you first log in, the administrative interface locks itself down so that you can only log in from one IP address, in a sort of implicit whitelist. You must explicitly add other addresses. That’s fine, even reasonable. I had three address ranges to add: my office, the headquarters, and Fearless Leader’s office. So I went into the management interface and explicitly added the headquarters’ addresses.

And I was locked out of the management interface. Apparently the explicit whitelist permitting HQ overwrote the implicit whitelist permitting my workstation.

I could have opened a ticket with SolusVM and admitted that I’d ignorantly locked myself out. But I don’t like interacting with vendors. I could have driven into the office, but that would involve changing out of my bathrobe. That left logging into the management interface via a Web browser from a headquarters address, and I’m not going to talk one of my coworkers through that if I can avoid it.

Instead, I used SSH dynamic forwarding to connect to the SolusVM head node from an IP address at headquarters.

You can do this with an OpenSSH server and either a PuTTY or OpenSSH client. I chose PuTTY because that’s what was on the computer on the couch with me. I have several OpenSSH servers at headquarters.

Open a new PuTTY session. Enter the host, username, and server port as normal. Before opening the session, go to the left-hand side of the screen and select SSH -> Tunnels. Enter a “Source port” of 9999, select “Dynamic” near the bottom, and click “Add.” Now open your SSH connection.

You now have a SOCKS proxy running on your computer. All traffic sent to port 9999 is sent over your SSH session. Your SSH server connects you to the Internet.
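
If you’re on an OpenSSH client instead of PuTTY, the equivalent of all that clicking is a single command. The hostname here is a stand-in for your own SSH server:

$ ssh -D 9999 you@sshserver.example.com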

Go to your Web browser’s connection settings. In Firefox, it’s Tools -> Options -> Advanced. Select the Network tab, then Settings. Select Manual proxy settings, then enter a SOCKS host of 127.0.0.1, port 9999. Select the SOCKS5 button. Exit the menus, hitting OK all the way back.

Now your Web browser connects to the Internet via the SOCKS proxy running on your computer. You’re browsing the Web from the IP address of your SSH server.
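
One way to sanity-check that traffic really leaves from the far end is to run a request through the SOCKS proxy from the command line. This assumes you have curl on the desktop; the URL is just one example of a what’s-my-IP service:

$ curl --socks5-hostname 127.0.0.1:9999 http://ifconfig.me

It should print the SSH server’s public address, not your own.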

This is much faster than remote browsing options such as Remote Desktop or forwarding X11 over SSH. And it let me log into my SolusVM console without having to communicate with another human being, so everybody wins.

Of course, you could learn about this sort of trick and more in my new SSH book.

Installing WHMCS on FreeBSD 9.0-RELEASE

Or, if you prefer: “WHMCS versus PHP.” Blogged for the next sysadmin searching Google.

$DAYJOB recently acquired WHMCS to help automate virtual server provisioning, billing, and so on. According to everything I’ve read, WHMCS runs just fine on FreeBSD, so I installed the prerequisites on a 9.0-i386 machine. As with any server for PHP-based Web sites, I verified that the server processed PHP with a simple phpinfo() page. I then grabbed the WHMCS tarball (no link, you must be a customer, sorry), extracted it into the directory, ran the setup program, fed in the database information and license key…

…and it wouldn’t run. Calling up the app resulted in a blank page. WHMCS provides troubleshooting instructions for this exact circumstance. I enabled the requested debugging, but couldn’t get WHMCS to produce an error. Adding a bogus argument to my phpinfo() test page made an error appear, so I was confident the failure to display an error message wasn’t a server configuration problem.

This comes down to FreeBSD’s PHP packaging.

When you install PHP on the popular varieties of Linux, you generally get a whole slew of PHP extensions with it. BSD-based systems only install exactly what you ask for: if you want PHP but don’t request any extensions, you won’t get any extensions.

I agree with this approach. Every piece of installed software needs patching and updating. Every piece of installed software is a potential attack vector. If I don’t need a piece of software, I don’t want it on my server.

WHMCS doesn’t list all of the required extensions; it assumes you have a kitchen-sink PHP install. After some reading and research, I found that WHMCS runs fine with the following PHP modules and extensions installed. I’ve included the version numbers for reference, but you should be able to just pkg_add -r all of these by name, as shown in the command after the list.

php5-5.3.8
php5-bz2-5.3.8
php5-ctype-5.3.8
php5-curl-5.3.8
php5-dom-5.3.8
php5-extensions-1.6
php5-filter-5.3.8
php5-gd-5.3.8
php5-hash-5.3.8
php5-iconv-5.3.8
php5-json-5.3.8
php5-ldap-5.3.8
php5-mysql-5.3.8
php5-openssl-5.3.8
php5-pdo-5.3.8
php5-pdo_sqlite-5.3.8
php5-phar-5.3.8
php5-posix-5.3.8
php5-session-5.3.8
php5-simplexml-5.3.8
php5-tokenizer-5.3.8
php5-xml-5.3.8
php5-xmlreader-5.3.8
php5-xmlwriter-5.3.8
php5-zip-5.3.8
php5-zlib-5.3.8
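
For the lazy, the whole list fits in one command. This is nothing more than the package names above, minus the version numbers:

# pkg_add -r php5 php5-extensions php5-bz2 php5-ctype php5-curl php5-dom \
    php5-filter php5-gd php5-hash php5-iconv php5-json php5-ldap \
    php5-mysql php5-openssl php5-pdo php5-pdo_sqlite php5-phar php5-posix \
    php5-session php5-simplexml php5-tokenizer php5-xml php5-xmlreader \
    php5-xmlwriter php5-zip php5-zlib

You’ll probably need to restart Apache afterward so mod_php picks up the new extensions.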

I’d really like to trim this down to only what is strictly necessary to run WHMCS, but that information doesn’t seem to be available. I could methodically remove and reinstall extensions to see when WHMCS breaks, but I have better things to do than debug missing docs for a commercial PHP app.

On the plus side: now that WHMCS is installed, it’s really slick. I’m looking forward to using it. Actually, I’m looking forward to having other people use it for me, so I can do more interesting things than provision servers, accounts, and billing.

Basic DNSSEC with BIND 9.9

Everybody knows that DNS is insecure, and DNS Security Extensions (DNSSEC) is supposed to fix that. I know that several of my readers consider DNSSEC suboptimal, but it’s the standard, so we get to live with it. I recently got DNSSEC working on BIND 9.9. As I write this, 9.9 is a Release Candidate, but the functionality should be basically unchanged. My goal for DNSSEC on BIND was to keep manually editing my zone files while letting the DNS server maintain the keys. BIND 9.9 makes this possible.

This is a limited example of how to get basic DNSSEC working. To use it, your registrar must support DNSSEC. There are ways around this, such as DLV, but they’re out of scope for this document. Also note that I’m not covering key rotation; that’ll be a future post.

You also must have a domain whose parent is signed. The root zone, .com, .net, and .org are all signed, but not all top-level domains are signed. Verify your particular TLD before proceeding. Again, you can use DLV for these orphaned domains, but that’s out of scope for this document.

I’d also suggest that you read the BIND 9.9 ARM first. But if you were going to bother to do that, you wouldn’t have done the Google search to find this article.

You will almost certainly have service interruptions as you learn DNSSEC. I strongly recommend that you set up a test server for your DNSSEC testing and move a test domain to it. You cannot test DNSSEC on a private domain; it must be a real, Internet-facing domain. Configure DNSSEC validation on this test server.

You also need a server that provides DNSSEC resolution, but will not be authoritative for your test domain. I’m assuming that you configure DNSSEC resolution on your production server. If you only have one DNS server, you can use an offsite public resolver such as unbound.odvr.dns-oarc.net. (Note that Google DNS, like most public DNS servers, does not validate DNSSEC.)

Verify that DNSSEC resolution works on both servers with dig(1).

$ dig www.isc.org +dnssec

; <<>> DiG 9.8.1-P1 <<>> www.isc.org +dnssec
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 28734
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 5, ADDITIONAL: 13

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 4096
;; QUESTION SECTION:
;www.isc.org. IN A

;; ANSWER SECTION:
www.isc.org. 600 IN A 149.20.64.42
www.isc.org. 600 IN RRSIG A 5 3 600 20120305233238 20120204233238 21693 isc.org. IKekIJVV99bkTYw4L2KG/xZpQ+BYlCK0IDSsWXKZRD8ceR/VNcfNFxV2 5VK51Fqmy...
...

Two interesting things here. First, the ad flag indicates that this is “authenticated data,” also known as DNSSEC-validated. Second, the RRSIG (Resource Record Signature) is the actual DNSSEC signature. isc.org is DNSSEC-validated.

DNSSEC generates a lot of key files. You don’t edit these key files by hand, and you rarely look at their contents, so use a separate directory. If you have a lot of zones, you’ll want a separate directory for each zone.

You’ll need a directory for keys.

$ mkdir /etc/namedb/keys

To start, you need a “master” key to sign other keys with (the Key Signing Key, or KSK), and then a key for each zone (the Zone Signing Key, or ZSK). Your nameserver must be able to read these keys.

# dnssec-keygen -f KSK -a RSASHA1 -b 2048 -n ZONE example.net
Generating key pair....................................................+++ ..................+++
Kexample.net.+005+38287
# dnssec-keygen -a RSASHA1 -b 2048 -n ZONE example.net
Generating key pair.............+++ ..+++
Kexample.net.+005+55896
# chown bind:bind *

We’ve generated two key files: Kexample.net.+005+38287 (the KSK) and Kexample.net.+005+55896 (the ZSK).

Now that you have keys, let’s look at configuring named and the zone itself.

I recommend you enable DNSSEC logging in named.conf. If you have trouble, the DNSSEC log will identify the problem. (Actually understanding the log is left as an exercise for the reader, the ARM, and their favorite search engine.) Make a separate directory for the log.

# mkdir /etc/namedb/log
# chown bind:bind /etc/namedb/log

Then add a logging stanza to named.conf. With this configuration, the log file will never grow larger than 20MB.

logging {
    channel dnssec_log {
        file "log/dnssec" size 20m;
        print-time yes;
        print-category yes;
        print-severity yes;
        severity debug 3;
    };
    category dnssec {
        dnssec_log;
    };
};

Now set up a zone. Here I add DNSSEC data to my test domain.

zone example.net {
    type master;
    file "master/example.net";
    key-directory "keys/";
    inline-signing yes;
    auto-dnssec maintain;
};

Reload your nameserver. You’ll now see the following files in the zone file directory:

example.net
example.net.jbk
example.net.signed
example.net.jnl

Inline signing works by taking the zone file you manually maintain, transforming it into a dynamic zone, and signing the dynamic zone. DNSSEC changes are made to the journal file. As a result of this, the serial number shown to the world can differ from the serial number in your file. That’s a minor change that I’m perfectly happy to live with.
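
If you’re curious, you can compare the two serial numbers. A quick check, assuming the file locations from the zone configuration above:

$ grep -i serial /etc/namedb/master/example.net
$ dig @localhost example.net SOA +short

The grep assumes your zone file has the conventional "; serial" comment; in the dig output, the third field is the serial number the world actually sees.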

You should now see RRSIG records in your test zone. You will not see the AD flag, however. You never see an AD flag for a zone on its authoritative nameserver.

So, how do you test DNSSEC on your domain? You might try your second nameserver. It won’t show the AD flag either, but it should also show the RRSIG records.
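
A query against your secondary makes a decent smoke test. Here ns2.example.net stands in for whatever your second authoritative server really is:

$ dig @ns2.example.net example.net SOA +dnssec

You should see RRSIG records in the answer, but no ad flag.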

DNSSEC works via a chain of digital signatures. The root zone is signed, and your server knows about that signature. Most delegations beneath the root are also signed. Your parent zone doesn’t know to trust your KSK until you tell it, and this is where your registrar comes in. Create Delegation Signer (DS) records from your KSK.

# dnssec-dsfromkey Kexample.net.+005+38287
example.net. IN DS 38287 5 1 E8C01C990ACC8CEDF48379EDF9EDAB5389A9CB4E
example.net. IN DS 38287 5 2 57EC9364CEAE50B17C0C251950B4E5B8870F6A479A94C3A92359A623 39703D53

Copy these two lines and paste them into your registrar’s DS record interface. Your registrar might take one or both types of DS record. I found that GoDaddy took both, but I had to remove the space from the SHA-256 (second) record.

When your registrar updates the TLD’s zone, validating DNS servers that aren’t authoritative for your zone will return the AD flag for your records. You’ll have functioning DNSSEC.
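
Once the registrar has pushed your DS records into the TLD, you can verify the whole chain from a validating resolver that isn’t authoritative for your zone. As a rough check, using the public validating resolver mentioned earlier:

$ dig example.net DS +short
$ dig @unbound.odvr.dns-oarc.net example.net SOA +dnssec

The first query shows whether the DS records have reached the parent zone; the second should come back with the ad flag set.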

(Thanks to Jeffry A. Spain for his invaluable hints in debugging my first DNSSEC setup.)

SSH Mastery Review from Peter Hansteen

Peter has already read and reviewed SSH Mastery. While a few of my readers have been kind enough to post reviews on Amazon and Smashwords (which I very deeply appreciate), Peter’s is the first long review.

And here I should confess something: The very existence of SSH Mastery is Peter’s fault.

Peter will be doing the tech review of Absolute OpenBSD 2nd Edition. He looked over the outline and said “You need more SSH in here. You need SSH here, and here. More SSH love!” So I listened to him. The added SSH content swelled the OpenBSD book from a planned 350K words to closer to 400K. I can’t comfortably read a 400K-word book, so something had to give. And what gave was SSH.

And to again answer what people keep emailing me and asking: yes, a print version is coming. Yes, I am writing AO2e. When I have dates, I will announce them.

SSH Mastery available at Smashwords

To my surprise, SSH Mastery is available at Smashwords.

I don’t know if this version will make it through to Kobo and iBooks, but you can buy it now. If I have to update it to get the book through the Smashwords Meatgrinder and into third-party stores, you’d get access to those later versions as well.

SSH Mastery ebook uploaded to Amazon and B&N

I just finished uploading the ebook versions of SSH Mastery to Amazon and Barnes & Noble. The manuscript is en route to the print layout person.

Amazon should have the book available in 24 hours or so, Barnes & Noble in 24-72 hours. Once they’re available, I’ll be able to inspect the ebooks to check for really egregious errors. The files were clean when I uploaded them, but both companies perform their own manipulation on what I feed them. There’s no way to be sure the books come out okay until I can see the final product.

What about, say, iBooks? Kobo? The short answer is: they’re coming. The long answer is: those sites are fed via Smashwords. Smashwords only accepts Microsoft Word files, and they have very strict controls on how books can be formatted. Their ebook processor, Meatgrinder, isn’t exactly friendly to highly-formatted books. I must spend some quality quantity time getting the book into Smashwords.

I’ll post again when the books are available on each site. In the meantime, I’m going to go put my feet up.

Enable DNSSEC Resolution on BIND 9.8.1

With BIND 9.8, enabling DNSSEC resolution and validation is so simple and low-impact that there’s no reason not to do it. Ignore the complicated tutorials filling the Internet. DNSSEC is very easy on recursive servers.

DNS is the weak link in Internet security. Someone who can forge DNS entries in your server can leverage that to get further into your systems. DNSSEC (mostly) solves this problem. Deploying DNSSEC on your own domains is still fairly complicated, but telling a BIND DNS server to check for the presence of DNSSEC is now simple.

In BIND 9.8.1 and newer (included with FreeBSD 9 and available for dang near everything else), add the following entries to your named.conf file.

options {
    ...
    dnssec-enable yes;
    dnssec-validation auto;
    ...
};

This configuration uses the predefined trust anchor for the root zone, which is what most of us should use.

Restart named. You’re done. If a domain is protected with DNSSEC, your DNS server will reject forged entries for it.

To test everything at once, configure your desktop to use your newly DNSSEC-aware resolver and browse to http://test.dnssec-or-not.org/. This gives you a simple yes or no answer. Verified DNSSEC is indicated in dig(1) output by the presence of the ad (authenticated data) flag.
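
To check from the command line instead, point dig at your resolver and look for that flag. The 127.0.0.1 here assumes you’re running dig on the nameserver itself:

$ dig @127.0.0.1 www.isc.org +dnssec | grep flags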

For the new year, add two lines to your named.conf today and get all the DNSSEC protection you can. Later, I’ll discuss adding DNSSEC to authoritative domains.

sudo auth via ssh-agent

One of the nicest things about writing a book is that your tech reviewers tell you completely new but cool stuff about your topic. While I was writing the OpenSSH book, one of the more advanced reviewers mentioned that you could use your SSH agent as an authentication source for sudo via pam_ssh_agent_auth.

I have dozens of servers. They all have a central password provider (LDAP). They’re all secured, but I can’t guarantee that a script kiddie cannot crack them. This means I can’t truly trust my trusted servers. I really want to reduce how often I send my password onto a server. But I also need to require additional authentication for superuser activities, so using NOPASSWD in sudoers isn’t a real solution. By passing the sudo authentication back to my SSH agent, I reduce the number of times I must give my password to my hopefully-but-not-100%-certain-secure servers. I can also disable password access to sudo, so that even if someone steals my password, they can’t use it. (Yes, someone could possibly hijack my SSH agent socket, but that requires a level of skill beyond most script kiddies and raises the skill required for APT.)

My sample platform is FreeBSD-9/i386, but this should work on any OS that supports PAM. OpenBSD doesn’t, but other BSDs and most Linuxes do.

pam_ssh_agent_auth is in security/pam_ssh_agent_auth in ports and pkgsrc. There are no build-time configuration knobs and no dependencies, so I used the package.

While that installs, look at your sudoers file. sudo defaults to purging your environment variables, but if you’re going to use your SSH agent with sudo, you must retain $SSH_AUTH_SOCK. I find it’s useful to retain a few other SSH environment variables, for sftp if nothing else.

Newer versions of sudo cache the fact that you’ve recently entered your password, letting you run multiple sudo commands in quick succession without re-entering it. That’s fine in most environments if you’re actually typing your password, but now that sudo queries a piece of software for your authentication credentials, it’s unnecessary. (It will also drive you totally bonkers when you’re trying to verify and debug your configuration.) Disable it with the timestamp_timeout option.

To permit the SSH environment and set the timestamp timeout, add the following line to sudoers:

Defaults env_keep += "SSH_AUTH_SOCK",timestamp_timeout=0

You can add other environment variables, of course, so this won’t conflict with my earlier post on sftp versus sudo.

Now tell sudo to use the new module, via PAM. Find sudo’s PAM configuration: on FreeBSD, it’s /usr/local/etc/pam.d/sudo. Here’s my sudo PAM configuration:

auth sufficient /usr/local/lib/pam_ssh_agent_auth.so file=~/.ssh/authorized_keys
auth required pam_deny.so
account include system
session required pam_permit.so

By default, sudo uses the system authentication. I removed that. I also removed the password management entry. Instead, I first try to authenticate via pam_ssh_agent_auth.so. If that succeeds, sudo works. If not, the auth attempt fails.

Now try it. Fire up your SSH agent and load your key. SSH to the server with agent forwarding (-A), then ask sudo what you may run.
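
If your agent isn’t already running, the whole sequence looks something like this; the key path and hostname are examples only:

$ eval `ssh-agent`
$ ssh-add ~/.ssh/id_rsa
$ ssh -A www.example.org

Then, on the server: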

$ sudo -l
Matching Defaults entries for mwlucas on this host:
env_keep+="SSH_CLIENT SSH_CONNECTION SSH_TTY SSH_AUTH_SOCK",
timestamp_timeout=0

Runas and Command-specific defaults for mwlucas:

User mwlucas may run the following commands on this host:
(ALL) ALL
(ALL) ALL

Now get rid of your SSH agent and try again.

$ unsetenv SSH_AUTH_SOCK
$ sudo -l
Sorry, try again.
Sorry, try again.
Sorry, try again.
sudo: 3 incorrect password attempts

The interesting thing here is that while you’re asked for a password, you never get a chance to enter one. Sudo immediately rejects you three times. Your average script kiddie will have a screaming seizure of frustration.

The downside to this setup is that you cannot use passwords for sudo on the console. You must become root if you’re sitting in front of the machine. I’m sure there’s a way around this, but I’m insufficiently clever to come up with it.

Using the SSH agent for sudo authentication changes your security profile. All of the arguments against using SSH agents are still valid. But if you’ve made the choice to use an SSH agent, why not use it to the fullest? And as this is built on PAM, any program built with PAM can use the SSH agent for authentication.

Moving Static Sites from Apache to nginx

My more complex Web sites run atop WordPress on Apache and MySQL. Every so often, Apache devours all available memory and the server becomes very very slow. I must log in, kill Apache, and restart it. The more moving parts something has, the harder it is to debug. Apache, with all its modules, has a lot of moving parts.

After six months of intermittent debugging, I decided that with the new hardware I would switch Web server software, and settled on nginx. I’d like to switch to Postgres as well, but WordPress’s official release doesn’t yet support Postgres. WordPress seems to be the best of the available evils — er, Web site design tools. The new server is FreeBSD 9/i386 running on VMware ESXi. According to the documentation I’ve dug up, it should all Just Work.

Before making this kind of switch, check the nginx module comparison page. Look for the Apache modules you use, and see if they have an nginx equivalent. I know that nginx doesn’t use .htaccess for password protection; I must put my password protection rules directly in the nginx configuration. Also, nginx doesn’t support anything like the mod_security application firewall. I’ll have to find another way to deal with referrer spam, but at least the site will be up more consistently.

To start, I’m moving my static Web sites to the new server. (I’ll cover the WordPress parts in later posts.) I expect to get all of the functionality out of nginx that I have on Apache.

For many years, blackhelicopters.org was my main Web site. It’s now demoted to test status. Here’s the Apache 2.2 configuration for it.

<VirtualHost *:80>
    ServerAdmin webmaster@blackhelicopters.org
    DocumentRoot /usr/local/www/data/bh
    ServerName blackhelicopters.org
    ServerAlias www.blackhelicopters.org
    ErrorDocument 404 /index.html
    ErrorLog "|/usr/local/sbin/rotatelogs /var/log/bh/bh_error_log.%Y-%m-%d-%H_%M_%S 86400 -300"
    CustomLog "|/usr/local/sbin/rotatelogs /var/log/bh/bh_spam_log.%Y-%m-%d-%H_%M_%S 86400 -300" combined env=spam
    CustomLog "|/usr/local/sbin/rotatelogs /var/log/bh/bh_access_log.%Y-%m-%d-%H_%M_%S 86400 -300" combined env=!spam
    Alias /awstatsclasses "/usr/local/www/awstats/classes/"
    Alias /awstatscss "/usr/local/www/awstats/css/"
    Alias /awstatsicons "/usr/local/www/awstats/icons/"
    ScriptAlias /awstats/ "/usr/local/www/awstats/cgi-bin/"
    <Directory "/usr/local/www/awstats/">
        Options None
        AllowOverride AuthConfig
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

/usr/local/etc/nginx/nginx.conf is a sparse, C-style hierarchical configuration file. It’s laid out basically like this:

general nginx settings: pid file, user, etc.
http {
    various web-server-wide settings; log formats, include files, etc.
    server {
        virtual server 1 config here
    }
    server {
        virtual server 2 config here
    }
}

The first thing I need to change is the nginx error log. I rotate my web logs daily, and retain them indefinitely, in a file named by date. In Apache, I achieve this with rotatelogs(8), a program shipped with Apache. nginx doesn’t have this functionality; I must rotate my logs with an external script.

In the http section of the configuration file, I tell nginx where to put the main server logs.

http {
    ...
    error_log /var/log/nginx/nginx-error.log;
    access_log /var/log/nginx/nginx-access.log;

Define a virtual server and include the log statements:

http {
...
    server {
        server_name blackhelicopters.org www.blackhelicopters.org;
        access_log /var/log/bh/bh-access.log;
        error_log /var/log/bh/bh-error.log;
        root      /var/www/bh/;
    }
}

That brings up the basic site and its logs. I don’t need to worry about the referral spam log, as I cannot separate it out. nginx doesn’t need ServerAlias entries; just list multiple server names.

To test the basic site, make an /etc/hosts entry on your desktop pointing the site to the new IP address, like so:

139.171.202.40 www.blackhelicopters.org

Your desktop Web browser should use /etc/hosts over the DNS entry for that host, letting you call up the test site in your Web browser. Verify the site comes up and that nginx is actually serving your content. Verify that the site’s access log contains your hits.
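
A command-line check works too, assuming you have curl on the desktop. The curl runs on your desktop, where the hosts entry lives; the tail runs on the new server:

$ curl -sI http://www.blackhelicopters.org/ | head -1
$ tail -1 /var/log/bh/bh-access.log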

To rotate these logs regularly, create a script /usr/local/scripts/nginx-logrotate.sh.

#!/bin/sh

DATE=`date +%Y%m%d`

#main server
mv /var/log/nginx/nginx-error.log /var/log/nginx/nginx-error_$DATE.log
mv /var/log/nginx/nginx-access.log /var/log/nginx/nginx-access_$DATE.log

#bh.org
mv /var/log/bh/bh-error.log /var/log/bh/bh-error_$DATE.log
mv /var/log/bh/bh-access.log /var/log/bh/bh-access_$DATE.log

#tell nginx to close and reopen its log files
killall -USR1 nginx

Run it at 11:59 each night via cron(8).

59 23 * * * /usr/local/scripts/nginx-logrotate.sh

This won’t behave exactly like Apache’s rotatelogs. The current log file won’t have the date in its name, and there will probably be some traffic between 11:59 PM and the start of the new day at midnight. But it’s close enough for my purposes.

I must add entries for every site whose logs I want to rotate.

Now for the aliases. I don’t have awstats running on this new machine yet, but I want the Web server set up to support these aliases for later. Besides, you probably have aliases of your own you’d like to put in place. Define an alias within nginx.conf like so:

location ^~/awstatsclasses {
    alias /usr/local/www/awstats/classes/;
}
location ^~/awstatscss {
    alias /usr/local/www/awstats/css/;
}
location ^~/awstatsicons {
    alias /usr/local/www/awstats/icons/;
}

Finally, I need my home directory’s public_html available as http://www.blackhelicopters.org/~mwlucas/. It no longer gets updated, but people still link to it. The following snippet uses nginx’s regex functionality to simulate Apache’s mod_userdir.

location ~ ^/~(.+?)(/.*)?$ {
    alias /home/$1/public_html$2;
    index  index.html index.htm;
    autoindex on;
}

For most sites, I would define a useful error page. The purpose of this site is to say “don’t look here any more, look at the new Web site,” so pointing 404s to the index page is reasonable. I define the error page like so:

error_page 404 /index.html;

The complete configuration for this site adds up to:

server {
    server_name blackhelicopters.org www.blackhelicopters.org;
    access_log /var/log/bh/bh-access.log;
    error_log /var/log/bh/bh-error.log;
    root      /var/www/bh/;
    error_page 404 /index.html;
    location ^~/awstatsclasses {
        alias /usr/local/www/awstats/classes/;
    }
    location ^~/awstatscss {
        alias /usr/local/www/awstats/css/;
    }
    location ^~/awstatsicons {
        alias /usr/local/www/awstats/icons/;
    }
    location ~ ^/~(.+?)(/.*)?$ {
        alias /home/$1/public_html$2;
        index  index.html index.htm;
        autoindex on;
    }
}

While I’m happy with nginx performance so far, I’m only running a couple of static sites on it. The real test will start once I use dynamic content.