pxelinux.cfg/* versus RCS

I’m a fan of version control in systems administration. If you don’t have a central VCS for your server configuration files, you can always use RCS. I habitually add #$Id$ at the top of configuration files, so I can easily see who touched this file last and when.

On an unrelated note, I’m upgrading my virtualization cluster to Ubuntu 10.10. The worker nodes run diskless. Each diskless node reads a configuration file over TFTP. Mine looked like the following:

#$Id$

LABEL linux
KERNEL vmlinuz-2.6.35-27-server
APPEND root=/dev/nfs initrd=initrd.img-2.6.35-27-server-pxe nfsroot=192.0.2.2:/data1/imagine,noacl ip=dhcp rw
TIMEOUT 0

This has worked fine for a year or so now, with me changing the kernel and initrd versions as I upgraded. With the Ubuntu 10.10 update, however, some pieces of hardware wouldn’t reboot. Most booted fine, but a few didn’t come back up again.

This is notably annoying because the hardware is in a remote datacenter. Driving out to view the console messages burns an hour and, more annoyingly, requires that I stir my lazy carcass out of my house. I have a serial console on one of the machines, but not on the affected one. Fortunately, I do have remote power, and I can make changes on the diskless filesystem.

Packet sniffing revealed that the machine successfully made a TFTP request, then just… stopped. This exact same configuration and filesystem worked on other machines, however. Except that the affected machines all had #$Id$ on the first line of their pxelinux.cfg file, and machines that booted successfully didn’t.

That shouldn’t matter. Really, it shouldn’t. pxelinux.cfg files accept comments. But I removed the tag, making the first line the LABEL statement, and power cycled the machine. And it came up perfectly.
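If you have a rack's worth of pxelinux.cfg files carrying these tags, a throwaway filter strips them all at once. A Python sketch (the function is my own illustration, not any standard tool) that drops any #$Id comment line:

```python
def strip_id_tags(config_text):
    """Remove RCS-style #$Id$ comment lines from a config file's text."""
    kept = [
        line for line in config_text.splitlines()
        if not line.lstrip().startswith("#$Id")
    ]
    return "\n".join(kept) + "\n"
```

Run it over each file in pxelinux.cfg/ and write the result back, and the version-control tags are gone.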

Apparently this particular rev of PXELINUX is incompatible with version control ID tags. Oh joy, oh rapture!

blather versus undeadly.org

So how does the traffic I get here compare to an established Web site, like the OpenBSD aggregator undeadly.org? Undeadly linked to my OpenBSD story.

[Traffic graph: incoming!]

Can you guess when?

No, they weren’t the only ones. But 6 of my top 10 referring URLs were in undeadly.org. The lesson is, do not feed the puffer fish. They will swarm and eat you like the tender tasty morsel you are. They even crashed my helpless little server. (Admittedly, I’d done terrible things to the server configuration, including twaddling knobs labeled DO NOT TOUCH, but that’s not the point.)

This is not the first spike I’ve gotten; my BSD/wikileaks article dang near went viral. So, another lesson I might learn is: if you write something that’s honestly interesting, people will find you. You really don’t have to break your back promoting it. Lots of writers babble about self-promotion, but most of it is an example of “solving the wrong problem.” Rather than pimping what you’ve written, make your work more interesting.

But that’s too positive for me. I think I’d rather just fear the Puffy.

diskless ubuntu serial console

I’m using Ubuntu servers with qemu-kvm as a virtualization solution. The software included in 10.04 LTS includes a variety of annoyances, such as broken PXE, odd bridge behavior, and “general weirdness.” Although 10.10 is not supported in the long term, I decided to try it.

The good news is, the 10.10 virtualization stack works much better. The bad news is, 10.10 didn’t want to run on my diskless hardware. Boot attempts all died with many lines of:

ipconfig: no devices to configure

and a message about killing init. The server was quite explicit that it was dead, and how it was dying, but didn’t leave any clues as to what had killed it. I’m sure that the console showed useful error messages, but they had scrolled off the top of the screen.

The manual says that if you hit shift-PageUp, Ubuntu should page up through the console messages. That should be amended to read “unless init is dead and your keyboard LEDs are blinking slowly but steadily.”

The only way to resolve this problem is to see the error messages that say why the machine crashed. So, a serial console. I want PXE messages, initrd messages, and kernel boot messages sent to serial console. These are all controlled by the /tftpboot/pxelinux.cfg/machine file. The actual file name is the MAC address of the booting NIC.
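As an aside, the MAC-based file name has a specific shape: pxelinux prefixes the ARP hardware type (01 for Ethernet) and writes the octets lowercase and hyphen-separated. A tiny Python sketch of the mapping (the helper name is mine):

```python
def pxelinux_cfg_name(mac):
    """Map a NIC's MAC address to its pxelinux.cfg file name."""
    # 01- prefix is the ARP hardware type for Ethernet
    return "01-" + mac.lower().replace(":", "-")
```

(pxelinux actually tries a UUID-based name first and falls back through hex IP prefixes to "default", but the MAC form is the one I use.)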

If you want messages from the PXE and initrd boot stages, the first line of the pxelinux.cfg file must be a SERIAL statement. If you want console messages from the booting kernel and/or to log into the running system over the serial console, you must append console= arguments to the kernel boot command. The end result for a serial console looks like this:

SERIAL 0 115200
LABEL linux
KERNEL vmlinuz-2.6.35-27-server
APPEND root=/dev/nfs initrd=initrd.img-2.6.35-27-server-pxe nfsroot=192.0.2.1:/nfsroot ip=dhcp rw console=tty0 console=ttyS0,115200n8
TIMEOUT 0

The Web site will probably wrap the APPEND statement, but everything from APPEND down to the TIMEOUT line is a single line in the file.

If you want a serial login in multiuser mode, you need to create an Upstart job to run a getty on the serial port. Here's mine, based on Ubuntu's default console job:
/etc/init/ttyS0.conf

# ttyS0 - getty
#
# This service maintains a getty on ttyS0 from the point the system is
# started until it is shut down again.

start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]

respawn
exec /sbin/getty -L 115200 ttyS0 vt102

The next time you reboot your diskless box, you should have a full serial console.

Some time soon, more on the actual error and how I fixed it.

my OpenBSD story

The folks at undeadly.org have started posting “how I discovered OpenBSD” stories. This isn’t a story of how I discovered OpenBSD, but rather why I like it. Before you ask, I don’t have similar stories about any other operating system, not even any other BSDs. I was guided to FreeBSD in 1995, and I discovered NetBSD on my own shortly after. (An earlier version of this was previously published in a small promo pamphlet handed out at a tech conference years ago.)

Back around 2000, my employer’s main business was designing Web applications, but once those applications were built our clients would turn around and ask “Where should we host this?” That’s where I came in, building and running a small but professional-grade data center for custom applications.

As with any new business, our hosting operation had to make the most of existing resources. Hardware was strictly limited to cast-off hardware from the web developers, and software had to be free. The only major expense was a big-name commercial firewall, purchased for marketing reasons rather than technical ones. With a whole mess of open-source software, we built a reliable network management system that provided the clients with more insight into their equipment than their in-house people could offer. The clients paid for their own hardware, and so had fancy high-end rackmount servers with their chosen applications, platforms, and operating systems. As the business grew we upgraded the hardware – disk drives less than five years old are nice – but saw no need to replace the software.

One Monday morning, a customer that had expected to use very little bandwidth found that they had sufficient requests to devour twice the bandwidth we had for the entire datacenter. This affected every customer. If your $9.95/month web page is slow you have little to complain about, but if your multiple-thousands-of-dollars-a-month Web application is slow you pick up the phone and scream until the problem stops.

To make matters worse, my grandmother had died only a couple days before. Visitation was on Tuesday, the funeral Wednesday morning. I handed the problem to a minion and said “Here, do something about this.” I knew bandwidth could be managed at many points: the Web servers themselves, the load balancer in front of them, the commercial firewall, and even the router all claimed to have traffic management capabilities.

Tuesday after visitation I found my cellphone full of messages. The version of Internet Information Server could manage bandwidth — in eight megabyte increments, and only if the content was static HTML and JPEG files. With several Web servers behind the load balancer, that fell somewhere between useless and laughable. The load balancer did support traffic shaping, if we bought the new feature set. If we plopped down a credit card number, we could have it installed by next Sunday. Our big-name commercial firewall also had traffic shaping features available, if we upgraded our service level and paid an additional (and quite hefty) fee for the feature set. That left the router, which I had previously investigated and found would support traffic shaping with only an IOS upgrade.

I was on the phone until midnight Tuesday night, making arrangements to do an emergency OS upgrade on the router on Wednesday night. I had planned to go to the funeral Wednesday morning, give the eulogy, go home and take a nap, and arrive at work at midnight ready to rock. The funeral was more dramatic than I had expected and I showed up at work at midnight sleepless, bleary-eyed, and upright only courtesy of the twin blessings of caffeine and adrenaline. In my email, I found a note that several big clients had threatened to leave unless the problem were resolved Thursday morning. If I hadn’t already been stressed out, the prospect of choosing a minion to lay off would have done the trick. (Before any of those minions start to think I care about them personally: I work hard training minions, and swinging the Club of Correction makes my arms sore. Eventually. I don’t like to replace them.)

Still, only a simple router flash upgrade and some basic configuration stood between me and relief. What could possibly go wrong?

The upgrade went smoothly, but the router behaved oddly when I enabled traffic shaping. Over the next few hours, I discovered that the router didn’t have enough memory to simultaneously support all of our BGP feeds and the traffic shaping functionality. Worse, this router wouldn’t accept more memory. At about six in the morning, I finally got an admission from the router vendor that they could not help me.

I hung up the phone. The first client who had threatened departure would be checking in at seven thirty AM. I had slept four hours of the last forty-eight, and had spent most of that time under fiendish levels of emotional stress. I had already emptied my stash of quarters for the soda machine, and had pillaged a co-worker’s desk for his. The caffeine and adrenaline that had gotten me to the office had long since worn off, and further doses of each merely slowed my collapse. We had support contracts on every piece of equipment, and they were all useless. All the hours of work my team and I had put in left me with absolutely nothing.

I made myself sit still for two minutes simply focusing on breathing, making my head stop sliding around loose on my shoulders, and ignoring the loud ticking clock. What could be done in ninety minutes — now only eighty-eight?

I really had only one option. If it didn’t work, I would either lay someone off or file for unemployment myself.

6:05 AM. I started downloading the OpenBSD install floppy image, then grabbed a spare desktop machine, selecting it from amongst many similar machines by virtue of it being on top of the pile. The next few minutes I alternated between hitting the few required installation commands and dismantling every unused machine unlucky enough to be in reach to find two decent network cards.

By 6:33 AM I had two Intel EtherExpress cards in my hands and a virgin OpenBSD system. I logged in long enough to shut the system down so I could wrench the case off, slam the cards into place, and boot again. Even early versions of PF included all sorts of nifty filtering abilities, all of which I ignored in favor of the newly-integrated traffic-shaping functions. By 6:37 AM I was wheeling a cart with a monitor, keyboard, and my new traffic shaper over to the rack.
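For flavor, the traffic-shaping rules involved were only a handful of lines. A sketch in the ALTQ syntax as it looked once it was folded into pf.conf (interface, bandwidth, and addresses here are hypothetical, not the actual config):

```
# hypothetical external interface and upstream bandwidth
altq on fxp0 cbq bandwidth 3Mb queue { std, hog }
queue std bandwidth 2Mb cbq(default)
queue hog bandwidth 1Mb cbq

# cap the offending client's servers (hypothetical addresses)
pass out on fxp0 from 192.0.2.0/24 to any keep state queue hog
```

Anything from the problem client's subnet gets shoved into the smaller queue; everyone else shares the default.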

Then things got hard. I didn’t have a spare switch that could handle our Internet bandwidth. The router rack was jammed to overflowing, leaving me no place to put the new shaper. I lost almost half an hour finding a crossover cable, and when I discovered one it was only two feet long. The router, of course, was mounted in the top of the rack. About 7:10 AM, I discovered that if I put the desktop PC on end, balanced it on an empty shipping box, and put the box on the mail cart, the cable just reached the router. I stacked everything so it would reach and began re-wiring the network and reconfiguring subnets.

I vaguely recall my manager coming in about 7:15 AM, asking with taut calmness if he could help. If I remember correctly, as I typed madly at the router console I said “Yes. Go away.”

At 7:28 AM we had an OpenBSD traffic shaper between the hosting area and our router. All the client applications were reachable from the Internet. I collapsed in my chair and stared blankly at the wall.

While everything seemed to work, the proof would be in what happened as our offending site started its daily business. I watched with growing tension as that client’s network traffic climbed towards the red line that indicated trouble. The traffic grew to just short of the danger line — and flatlined. Other clients called, happy that their service was restored to its usual quality. (One complained that his site was still slow, but it turned out that bandwidth problems had masked an application problem.) The problem client complained that their web site now ran even slower than before; we offered to add more bandwidth if they’d agree to pay for it.

I taped a note to the shipping box that said “Touch this and I will kill you,” staggered to my car, and by some miracle got home.

Shortly afterwards, I had two new routers and new DS3s. The racks were again clean. The decrepit desktop machine was replaced by two rack-mount OpenBSD boxes in a live-failover configuration, protecting our big-name commercial firewall as well as shaping traffic. And I now keep crossover cables in a variety of lengths.

Should we have had traffic shaping in place before selling service? Absolutely. As with any startup, though, our hands were full fixing the agonies of the moment, leaving little time for the future.

If I had started with OpenBSD, I would have had a much better night.

(Want more OpenBSD? Check out my book Absolute OpenBSD.)

DNS DDoS of the Day

My phone got a call recently from a systems administrator whose network was under attack. I was busy getting my twice-weekly dose of humility, but a couple hours later, my phone delivered the message.

The attacker was flooding their primary DNS server with requests for isc.org. This is a not-uncommon attack. As DDoS attacks go, it’s not terribly effective; it can overwhelm the DNS server’s resources, but doesn’t utterly destroy the victim’s network. You can easily defend against this by controlling which hosts can perform recursive lookups on your server.

This particular sysadmin was running a DNS server that didn’t permit access control for recursive lookups. It ran fine for years, until someone wanted to attack it, much as your house doesn’t need a lock on the door until someone tries to break in. We discussed various ways he could blunt the attack, and a strategy for moving to a public-facing DNS server that supported access control lists.

I could start with “here’s a nickel, kid, go buy a better operating system.” But that’s not exactly helpful. A lot of Unix sysadmins are just as guilty of offering insecure services on their networks, thinking that nobody is going to attack their petty little operation. But you never know when you’ll anger some dweeb who cannot express their emotions in any way other than clicking a few buttons and giggling. This particular sysadmin had run his server for years without difficulty. But you only need to lock your car when someone tries to steal it.

If you’re running a DNS server, use one that supports ACLs. I’ve written about unbound as a recursive DNS server. Or, if you’re running BIND, you can use an ACL:

acl "our_stuff" {
192.0.2.0/24;
};

options {
...
allow-recursion { "our_stuff"; };
};

Poof! Recursion attacks are stopped.
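If you're running unbound instead, which I've written about before, the equivalent knob is access-control. A minimal sketch, using the same example network:

```
server:
    access-control: 192.0.2.0/24 allow
    access-control: 0.0.0.0/0 refuse
```

Your own hosts get recursion; everyone else gets a refusal.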

Nobody wants to attack you? Nobody will EVER want to attack you? You are such an awesome human being that you will never accidentally annoy someone? Fine. I believe you. Wholeheartedly. But did you know that open DNS resolvers can be used to amplify DNS-based DDoS attacks? And these attacks are growing more common? And that a large number of Internet appliances have open resolvers? Do you issue those devices to your clients? Open resolvers are the new open mail relays.

Today is a good day to check your network for open resolvers. Or you can use a free shell account to run dig against your servers. Check your appliances, too.
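If you'd rather script the check yourself, the query side is small enough to sketch in Python with only the standard library. This builds a bare DNS question packet; send it to a server's UDP port 53 from outside your network, and a real answer means the resolver is open. (The function and its defaults are my own illustration, not any particular tool.)

```python
import struct

def build_dns_query(name, qtype=1, qid=0x1234):
    """Build a minimal DNS query: 12-byte header plus one question.

    qtype 1 is an A record; qid is a fixed ID for illustration only.
    """
    # header: ID, flags (recursion desired), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN

# To probe: open a socket.SOCK_DGRAM socket, sendto() this packet at
# (server_ip, 53), and wait briefly for a reply.
```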

This principle applies to services other than DNS, of course. Use keys to authenticate via SSH, or at least restrict the IP addresses that can log in via passwords. Apply your patches regularly. Think about what you’d do if you were under attack, and the points on your network where you could defend. You probably already know about some security holes on your network. Quit playing Angry Birds and go fix them.

But if you run an open resolver, you are ruining another sysadmin’s weekend.

publishers versus self-publishing

People keep asking me why I use a publisher when self-publishing has become more and more possible over the last few years. Today, 38% of Amazon’s top 100 titles are self-published. Authors with a long track record in publishing, like Bob Mayer and Joe Konrath, extol the advantages of self-publishing your work rather than going through a publisher. Dean Wesley Smith and Kristine Kathryn Rusch, authors with decades of respectable mainstream publishing behind them, make solid business cases for skipping publishers and selling directly to your audience.

These authors write fiction. How well do their arguments apply to non-fiction? Well enough, if you want to do the work or pay someone to do the work. Here’s what you must do to produce a professional-quality nonfiction book. (If you want to produce an amateur, feeble book, you can skip any or all of these.)

  • Tech review: Someone who knows the subject has to review your work. Even if you crowdsource an initial tech review, as I’ve done for my BSD books, you still need an acknowledged subject matter expert to double-check your work. Your technical reviewer will expect payment. Most publishers pay a couple grand for tech review, or offer a cut of the royalties.
  • Editing: An editor is not a proofreader. An editor helps transform your manuscript from the disjointed babblings of a subject matter expert into something that can be understood by your reader. You can expect to pay $1-2/page for a decent technical editor.
  • Copyediting/proofreading: This is your proofreader. $1-2/page, again.
  • Layout: A good book is invisible. The layout disappears from the reader’s perceptions, leaving only a stream of words flowing from the book into the reader’s brain. I have never met a technical person who really had this skill. There are people who will format your manuscript for publishing on Kindle, Smashwords, and other ebook retailers, as well as paper formats for CreateSpace or Lightning Source. This runs anywhere from $100 for a novel up to $500 or more for complicated technical documents.
  • Publicity: Different publishers offer different levels of publicity. My publisher, the inimitable No Starch Press, publicizes every book heavily, in every appropriate channel. Other publishers just put new books in the catalog and let the author hope. Publicity can cost as much as you want to spend.
  • Graphics: Most authors can’t draw, even if they think they can. Publishers usually have internal artists recreate author art. You need to make sure that your own art is adequate.
  • Management: Your book has a project manager who keeps track of all the disparate threads of producing your book. If you self-publish, that project manager is you.

Overall, you can expect to spend a few thousand dollars self-publishing a professional-quality book, and a fair amount of extra time. Miss any step, handle any step less than perfectly, and your book will suffer.

What do you miss out on when you self-publish?

  • Translations: Advances for foreign language rights range from $1000-$2000, plus royalties if and when the advance is earned out. You will not be able to pursue translation rights — you don’t have the contacts or the contract expertise.
  • Bookstores: You will not see your self-published book at Barnes & Noble. End of discussion.
  • Competence: Nonfiction publishers are experts at helping non-authors produce good, readable books. A good publisher will help you make your book the best it can be. If you’re not a writer, anything you self-publish will read poorly, no matter how much outside help you have. If you think you’re a writer, but you’ve never worked with a publisher, you have a lot to learn.
  • Camaraderie: You’re a team with your publisher. They will work with you. You’ll make friends. Everybody wants the book to succeed, and believes that the book can succeed. It’s hard to put a price tag on that.

Nonfiction authors have some potential advantages, however. If you have a truly unique book, with no competition, you can do well self-publishing. If you want to compete in an existing, well-established topic, however, you’ll have a much harder slog. I wouldn’t recommend self-publishing a FreeBSD book, for example.

How do these affect me?

I want an editor, tech editor, and copyeditor who are interested in producing the best book possible. An editor I hire is not going to tell me “Wow, this book is horrible and pointless.” An editor who works for my publisher will voice his concerns to the publisher, and the publisher will intervene as necessary. I can honestly say that none of my publishers have ever had to have this meeting with me, but I want them to have the freedom to do so.

Publicity? I have enough trouble with the little publicity I do now. I resisted blogging, Facebook, and Twitter for years. The less I talk to people, the more people like me. (It’s not that I’m an obnoxious person, but a little bit of me goes a long way.) An outside publicity person is an excellent idea. I try to give my publisher’s publicity person everything he asks for, follow his suggestions, and get out of his way.

Bookstores: I don’t see my books in stores in Detroit, but I know that some people buy my books in bookstores. It seems that Amazon owns my publishing career.

Graphics: I am an author, not an artist. Producing the graphics for PGP & GPG took as long as writing the manuscript itself. I need outside help with art.

I don’t want to do all this for my technology books. The tech publishing industry is in much better shape than the fiction industry, and I’m confident that I will be able to find a home for my nonfiction. I might self-publish my fiction some day, just to escape the submission treadmill. But I haven’t given up on that mainstream success… yet.

NYCBSDCon Video

The video of my NYCBSDCon 2010 talk, BSD Needs Books, is now available at http://blip.tv/file/4844882. At the moment, it’s the top link on BSD TV.

This is the first time I’ve seen my own presentation, at any conference. I’ve always suspected that I look daft in front of an audience. It turns out that the slim chance I was wrong was a nice thing to have while it lasted.

OpenLDAP search filters

I use LDAP authentication on several Web servers. For the first time, I have a Web application that I want to open to customers as well as staff. Usually, I just put the users into a group. Apache validates the password against LDAP and checks for group membership, and either accepts or rejects the request. The relevant Apache configuration looks like this:

AuthLDAPURL "ldap://ldap1.domain.com/ou=people,dc=domain,dc=com" STARTTLS
AuthLDAPGroupAttribute memberUid
require ldap-group cn=groupname,ou=groups,dc=domain,dc=com

Apache requires that I specify where to look for accounts, as shown in the AuthLDAPURL above. My customers are in a different OU than my coworkers. (It would have made more sense to name the “people” container “staff,” but I didn’t realize that at the time.) Apache will accept a filter in AuthLDAPURL, letting you check in multiple groups. I’ve never taken the time to understand LDAP filters, so I guess I’d better start now. I’ll write my first filters for ldapsearch(1), and then carry them over to Apache.

Normally, I run ldapsearch like so:

# ldapsearch -WxZD "cn=manager,dc=domain,dc=com"
Enter LDAP Password:

-W tells ldapsearch to ask for a password, -x sets simple auth, -Z toggles StartTLS, and -D indicates a bind DN follows. While I have an inherent dislike of typing a password on the command line, I’m going to run many LDAP searches in quick succession on a test machine. My test machine doesn’t use the same password as my production environment, so I’m willing to make an exception for convenience. Drop the -W, and add the password with -w. Specify the password in quotes to escape symbols and such.

# ldapsearch -xZD "cn=manager,dc=domain,dc=com" -w "password"

You should get a dump of your LDAP directory.

Now to build up a filter iteratively, figuring out how they work as we go. ldapsearch expects the filter to be the last item on the command line. Put it in quotes to escape special characters.

# ldapsearch -xZD "cn=manager,dc=domain,dc=com" -w "password" "(uid=mwlucas)"

This returns only my user account, as I would expect. Now let’s search for one of two accounts, joined by an OR. I’m going to stop including the entire command line, and only list the filter at the end.

"(|(uid=mwlucas)(uid=mwlucas2))"

The OR operator is a pipe symbol (|). It’s followed by the two possible choices, each in parentheses. This filter matches any entry where the uid is either mwlucas or mwlucas2. I get information for two accounts back.

Similarly, I can search for a group by CN as well as a username. I want to see everything with a UID of “mwlucas” or matching the CN “cacti”.

"(|(uid=mwlucas)(cn=cacti))"

Entries for my account and this group appear.

About this time I realize that I can probably fix my Apache problem by removing the ou=people entry in AuthLDAPURL, giving me:

AuthLDAPURL "ldap://ldap1.domain.com/dc=domain,dc=com" STARTTLS

I try it and, yes, users from both OUs can now log in. But I'm going to learn about search filters anyway.
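For the record, a filter can also ride along in AuthLDAPURL itself; the URL format is ldap://host/basedn?attribute?scope?filter. A hypothetical version restricting logins to the two accounts from the examples below:

```
AuthLDAPURL "ldap://ldap1.domain.com/dc=domain,dc=com?uid?sub?(|(uid=mwlucas)(uid=mwlucas2))" STARTTLS
```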

I can use two additional logical operators, AND (&) and NOT (!).
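Filters nest, so they compose mechanically. As a sketch (the helper names are mine, not anything standard), building them in Python makes the nesting obvious:

```python
def f_eq(attr, value):
    """One comparison: (attr=value)."""
    return "(%s=%s)" % (attr, value)

def f_or(*filters):
    """OR together sub-filters: (|...)."""
    return "(|%s)" % "".join(filters)

def f_and(*filters):
    """AND together sub-filters: (&...)."""
    return "(&%s)" % "".join(filters)

def f_not(filt):
    """Negate a sub-filter: (!...)."""
    return "(!%s)" % filt
```

For example, f_or(f_eq("uid", "mwlucas"), f_eq("cn", "cacti")) produces the OR filter used above.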

Also, filters support wildcards. For example, here I want to see all accounts that have the initials "mwl" in them. I've created more than one test account, and want to be sure that I remember all of them.

(uid=*mwl*)

That generates a lot of output, though. I'm more interested in a list of UIDs. If you specify an attribute after the filter, ldapsearch will only print that attribute. Here's the whole command string for this search.

# ldapsearch -xZD "cn=manager,dc=domain,dc=com" -w "password" "(uid=*mwl*)" uid
...ldap internal stuff deleted...
# mwlucas, people, domain.com
dn: uid=mwlucas,ou=people,dc=domain,dc=com
uid: mwlucas

# mwltest, people, domain.com
dn: uid=mwltest,ou=people,dc=domain,dc=com
uid: mwltest

# mwlstaff, people, domain.com
dn: uid=mwlstaff,ou=people,dc=domain,dc=com
uid: mwlstaff

# mwlucas2, customers, domain.com
dn: uid=mwlucas2,ou=customers,dc=domain,dc=com
uid: mwlucas2

That's enough filtering to make my day-to-day life easier, so I'll get back to the problem I'm really trying to solve today.

Fail Quickly

I’ve started the next book for No Starch Press. There’s an outline, and I’ve written both the introduction and the afterword. All that’s left is the hard stuff in between, twenty-some chapters of it.

Where to start writing? That’s easy: First, I write the stuff that’s most likely to make the book fail.

Every project has easy parts that are fun and go quickly. Those are the tasks you’re most familiar with, that leverage your existing skills. Then there are the parts that require you to learn new things, or demand that you actually spend time and energy breaking them down so others can understand you. These are the parts that are most likely to make the project fail. I want to get those parts over with as quickly as possible.

If the entire book is going to collapse because four chapters are impossible to write, it’s better to know that up front than after I’ve written the eighteen easy chapters leading up to them. I’m writing about 500 words an hour on this part, where I normally write 1000 words an hour. It’s drudgery, but they’ll get done.

I’ve seen a lot of IT projects fail by spending their initial burst of energy on the easy stuff. If you do the easy part first, the hard part gets time to grow in your mind. You’ll spend energy dreading it. Worse, the time you spend doing the easy stuff might be completely wasted — after all, if you can’t do the hard part, then you have to throw everything else away. You can always do the easy part after you succeed at the hard bit, and it’ll make the rest of the project go more quickly.

Now if you’ll excuse me, I have to finish this section of this chapter tonight…

Public Service Announcement on Painting Old Brick

A modern hand scraper and wire brush can strip peeling, mildewy paint from a concrete basement wall almost easily — at least, far more easily than when I was a kid and had to do the same job with a pointed stick and a piece of chalk. The equipment comes with warnings in big black letters. “Wear Goggles!” “Wear Gloves!” “May Sever Fingers!” And so on. You don’t want to get a flying paint chip in your eye.

Unfortunately, it doesn’t come with a warning that says “Keep Mouth Shut.”

Describing the taste of a hundred-year-old mildewed paint chip as “Lovecraftian” would leave me without adequate vocabulary to describe the texture.

The moral is: when you need to shut up and do the job, don’t forget the “shut up” part.