opennebula with one iscsi target per VM

OpenNebula users know that NFS is just too slow for virtual machine disk images.  Fibre Channel works, but is too expensive for me.  Rather than deal with disk image speed issues, I’m using NFS on ZFS for file storage and booting my systems diskless.  Diskless servers have a lot of advantages, but speed isn’t one of them.  This is fine for most applications, but a few things (databases come to mind) perform better on a speedy disk.  I want the ability to use diskless machines where appropriate, but use cheap networked disk when necessary.  Ideally, I want iSCSI on top of ZFS.  Short of ideal, I’ll take iSCSI any way I can get it.  I want the virtualization server to attach to the iSCSI target, and then offer that target to the VM as if it were a local disk.

There’s an alpha one-iSCSI-target-per-VM transfer manager driver.  It’s intended for a Linux iSCSI server, which I don’t have and don’t intend to run.  Instead, I have a stack of cheap NAS appliances.  Here’s how I got one target per VM running in my OpenNebula instance.

I create an iSCSI user on one of the NAS appliances, create a 40GB iSCSI volume on that NAS, and give my user access to the volume.

Next, I verify that my Ubuntu KVM worker nodes can access this iSCSI volume.  Logging in should create a device node in /dev/disk/by-path, giving us a unique device node for this target on each host.  If you don’t get a device node, find out why.  All of the worker nodes should automatically log in to the iSCSI disk, but must not mount the drive.  Having multiple servers attach to one drive at the same time is fine, but if multiple hosts mount the drive you will corrupt the filesystem.  Besides, the worker nodes don’t care about the files on that iSCSI disk; they just pass it through to the VM.
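
On Ubuntu with open-iscsi, the check amounts to something like the commands below.  The portal address and target IQN are placeholders; use whatever your NAS reports.

# discover the targets the NAS offers (portal address is a placeholder)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10

# log in to the target that discovery reported (IQN is a placeholder)
iscsiadm -m node -T iqn.2011-01.com.example:mwayne -p 192.0.2.10 --login

# have the node log back in automatically after a reboot
iscsiadm -m node -T iqn.2011-01.com.example:mwayne -p 192.0.2.10 --op update -n node.startup -v automatic

# confirm that a device node appeared
ls -l /dev/disk/by-path/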

My OpenNebula environment is in /usr/local/one, with VM_DIR set to /usr/local/one/var.  This directory is NFS mounted on all of the worker nodes.  I create a symlink from /usr/local/one/var/disk_images/mwayne.img to /dev/disk/by-path/ip-BLAH.  (Before you ask, mwayne is the user who needs this VM.  He’s willing to be a guinea pig.  I’m willing to accept that offer.)
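
The symlink itself looks something like this; the long by-path name is a stand-in for whatever your worker node actually shows.

# point the VM's disk image name at the iSCSI device node
# (the by-path name is a placeholder; copy the real one from /dev/disk/by-path)
ln -s /dev/disk/by-path/ip-192.0.2.10:3260-iscsi-iqn.2011-01.com.example:mwayne-lun-0 \
    /usr/local/one/var/disk_images/mwayne.img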

My diskless KVM servers share various directories among themselves; notably, I use OpenNebula’s NFS transfer manager.  The following OpenNebula template defines a machine that boots off the iSCSI drive.

NAME   = mwayne
CPU    = 0.5
MEMORY = 512
OS      = [ BOOT   = hd ]
NIC     = [ BRIDGE = "br0",
   MODEL = "e1000",
   MAC = "00:11:22:FF:FF:23" ]
DISK    = [
   source = "/usr/local/one/var/one_images/mwayne.img",
   target = "hda",
   clone  = "no"
]
GRAPHICS = [type="vnc",listen="127.0.0.1",port="-1"]
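
Save that as a file (I’ll call it mwayne.one; the name is my choice, not anything OpenNebula requires) and submit it the usual way:

# submit the VM and watch it come up
onevm create mwayne.one
onevm list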

It didn’t go quite that smoothly, of course. I found errors in my OpenNebula configuration, problems with installed software, and lots of minor annoyances. When you get errors, be sure to check oned.log as well as the vm.log for the failed OpenNebula VM instance. Don’t be afraid to delete and resubmit your VM repeatedly to generate logs.
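
In a self-contained install like this one, both logs live under /usr/local/one/var; the VM ID below is just an example.

# the OpenNebula daemon log
tail -f /usr/local/one/var/oned.log

# the per-VM log for VM ID 42 (substitute your VM's ID)
tail -f /usr/local/one/var/42/vm.log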

Yes, this is a very limited OpenNebula environment.  When you have a mishmash of iSCSI targets, there’s no way to automatically provision iSCSI drives.  You must add each iSCSI target to each initiator.  But if you have a small environment, it’s certainly doable.
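
If you do script it, it’s little more than repeating the login on every worker node; the host names, portal, and IQN below are made up.

# repeat the iSCSI login on each KVM worker node (names are placeholders)
for host in kvm1 kvm2 kvm3; do
    ssh root@$host iscsiadm -m node -T iqn.2011-01.com.example:mwayne -p 192.0.2.10 --login
done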

Personally, I hope to use the alpha “iSCSI for Linux targets” driver to create a driver for using a FreeBSD file server with ZFS and istgt as a storage back end.  But that’s a project for another month.

4 Replies to “opennebula with one iscsi target per VM”

  1. Hi, next month I’ll try to implement this iSCSI script, but with NexentaStor storage (OpenSolaris/ZFS).
    I’ve read the code; the target creation part is easy to convert (through the Nexenta API).
    I’ll also try to implement true cloning via ZFS cloning.

  2. Hi,

    Someone just tweeted this article, thanks very much! I do have one question: Why do you think NFS doesn’t perform well for disk images? Protocol issues? Tuning? Linux NFS client issues?

    Thanks,

    -n

  3. I could speculate on why it’s not very fast in my environment, but it would be only speculation.

    I’m currently playing with NFSv4 to see if I can get the better performance promised by the protocol. I’ll blog about that when I have something.
