Sunday, January 08, 2017

Freenas: Migrating from VirtualBox to Bhyve

I, like many people, use the VirtualBox template jail on Freenas.  I've been using it for about a year to run Crashplan.  It's generally worked well, but the last couple of 9.10 maintenance updates have caused some problems (ex: The virtual machine 'xxxx' has terminated unexpectedly during startup with exit code 1).  See here for more on how 9.10.2 broke the VirtualBox template.

From what I've been able to read on the Freenas forums, this isn't going to get fixed, so you have two options: don't upgrade and keep using VirtualBox, or upgrade and use Bhyve.  Since I only have one VM and it's pretty simple, I opted for the latter.  This post covers the steps I had to take to convert from VirtualBox to Bhyve and iohyve without reinstalling the VM OS.

Convert the Image

The first step is to convert your old VirtualBox image to a raw disk image.  There's no way to import a VDI or use it directly with iohyve, but you can convert the VDI to a raw disk image and use that.  To convert the disk, do the following.
  1. SSH to your VirtualBox jail.
  2. Install the qemu-devel pkg with the command `pkg install qemu-devel`.
  3. Run `qemu-img convert -f vdi -O raw /path/to/virtualbox/vm.vdi /path/to/raw/image.img`
That's it.  You now have a raw image file of your old VirtualBox VM.
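
If you want to double-check the result before moving on, `qemu-img` can report the details of the converted image.  This is just an optional check; the path below is the placeholder from step 3, and the virtual size it reports should match the size of your original VirtualBox disk.

qemu-img info /path/to/raw/image.img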

Set Up Iohyve

There is some setup that's needed before you can use iohyve on your Freenas.  This only needs to be done once, so if you've used iohyve before on your Freenas you can skip these steps.
  1. Initialize iohyve by running `iohyve setup pool=main kmod=1 net=igb0`.  Note that you need to replace `main` with the name of your ZFS pool and `igb0` with the name of your network device.
  2. Run `ln -s /mnt/iohyve /iohyve`.  I don't know exactly why you need this symlink, but just make sure the symlink exists so that you comply with the requirement from the iohyve README.
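As a quick sanity check (optional, and assuming you passed `kmod=1` to the setup command above), you can confirm that the symlink is in place and that the bhyve kernel module is loaded:

ls -ld /iohyve
kldstat | grep vmm
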
Create a New VM

The next step is to create our new VM using iohyve.  If you're not familiar with it, iohyve is a convenient wrapper script around Bhyve that makes managing your VMs easier.  In addition to being convenient, it's installed out of the box on Freenas 9.10.2, which makes it a logical choice to use.

To create a new VM with iohyve, run the following commands:
  1. Create the VM, run `iohyve create crashplan 8G`.  The `8G` is the size of the new disk; it needs to be at least as large as your original VirtualBox disk so the raw image will fit.
  2. Configure the VM, run `iohyve set crashplan loader=grub-bhyve os=ubuntu ram=512M cpu=1`.  We set `loader=grub-bhyve` because this is a Linux VM booted by Grub.  Set the OS to whatever you're using; my VM is running Ubuntu (if you're using LVM with Debian or Ubuntu, use `d8lvm` instead).  Then set the RAM and CPU limits; I picked 512M of RAM and one VCPU.
  3. You can now run `iohyve getall crashplan` to confirm all the settings of your VM.
Your VM is now created.  At this point you would normally proceed to install an OS; however, we've already done that in VirtualBox, so we don't want to do it again.
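
Behind the scenes, `iohyve create` makes a ZFS dataset for the guest and a zvol for its disk.  If you're curious, you can see them with `zfs list`; the names below assume my pool (`main`) and VM name (`crashplan`).

zfs list -r main/iohyve/crashplan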

Import the Raw Image

The next step is to import the raw image that we exported previously.  This is simply done with `dd`.  When using iohyve, each VM gets its own ZFS dataset.  For example, my ZFS pool is named `main` and my VM is named `crashplan`, so the VM's dataset is `main/iohyve/crashplan`.  Under that, there is a zvol for each disk.  In my case there's only one disk, so it's `disk0`.  To import the disk, we just need to `dd` the raw disk image onto `disk0`.  That can be done with the following command.
dd if=/mnt/main/jails/virtualbox_jail/home/vbox/VirtualBox\ VMs/CrashPlanBackup/CrashPlanBackup.img of=/dev/zvol/main/iohyve/crashplan/disk0
Make sure you replace `if=...` with the path to your raw image file and `of=...` with the path to your iohyve VM disk.  Once `dd` finishes, the VM should be ready to run.
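
One optional tweak: `dd` uses a very small block size by default, so copying a multi-gigabyte image this way can be slow.  Passing a larger block size usually speeds it up considerably; here's the same idea with placeholder paths.

dd if=/path/to/raw/image.img of=/dev/zvol/main/iohyve/crashplan/disk0 bs=1m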

Run Your VM

To start your VM run `iohyve start crashplan`.  Then run `iohyve console crashplan` to connect the console to the VM.  The console will display the output from the VM as it starts up.  When it's done you'll even be able to log in to the VM through the console, but you'll probably want to connect via SSH instead.
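
For reference, here are the start, console and stop commands together (again, `crashplan` is just my VM's name).  The console is attached with cu behind the scenes, so `~.` at the start of a line should detach it; if you're SSH'd into the Freenas box you'll typically need `~~.` so SSH doesn't intercept the escape.

iohyve start crashplan
iohyve console crashplan
iohyve stop crashplan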

Start VM at Boot

At this point your VM should be up and running; however, if you were to restart your Freenas, the VM would not come back up automatically.  Fortunately, it's pretty easy to make the VM start at boot with the following steps.

  • In the Freenas UI, go to System -> Tunables.
  • Add a tunable.  Variable is `iohyve_enable`, value is `YES`, type is `rc.conf`. Make sure enabled is checked.
  • Add a tunable.  Variable is `iohyve_flags`, value is `kmod=1 net=igb0` (where `igb0` is the name of your network device), type is `rc.conf`.  Also make sure enabled is checked.
These settings will enable iohyve and get it started.  From there, we need to tell iohyve to start our VM.  That can be done with the following command.

iohyve set crashplan boot=1
As with all the commands I've listed, replace `crashplan` with the name of your VM.
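
For what it's worth, rc.conf type tunables are just the Freenas way of persisting rc.conf settings across reboots and upgrades, so the two tunables above amount to roughly the following (use the Tunables UI rather than editing /etc/rc.conf by hand, since manual edits don't survive upgrades).

iohyve_enable="YES"
iohyve_flags="kmod=1 net=igb0"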

Clean Up

While not strictly necessary, you can now clean up the old VirtualBox template jail.  Simply delete the jail and that should be it.

Summary

With any luck you should now have your VirtualBox VM functional and running on Bhyve instead.  More work may be necessary if you've got a more complicated VirtualBox VM, for example one with multiple disks, multiple NICs, a GUI, or VirtualBox-specific features like shared folders.  In that case, you may want to stay on VirtualBox until official support for Bhyve is ready in Freenas 10.

Wednesday, July 15, 2015

2015 MacAdmins Conference at Penn State


The 2015 MacAdmins Conference at Penn State was last week.  It was a fantastic conference and I had the good fortune to be able to both attend and speak at the event.  If you were in one of my sessions and are looking for slides or code samples see the info below.  I believe video should be available shortly and as soon as it is I'll update this post with the relevant links.

Basic App Development with Swift



The Sys Admin's Guide to Python

If you missed the conference this year, put next year's date on your calendar now.  The 2016 MacAdmins Conference at Penn State is scheduled for June 27-30, 2016.  See you then!

Friday, March 13, 2015

A recent encounter with a customer resulted in a couple of good questions regarding the workflow one would use to deploy apps to Cloud Foundry in order to aim for 100% uptime.  Based on that, I thought I would share the questions and answers here.

Question #1 - How do you push updates to your application without downtime?

Currently when you push, or restart for that matter, an application running on Cloud Foundry, the change is applied in a series of steps that go roughly like this.
  • New app bits are uploaded
  • The current version of the app is stopped
  • Staging for the new app occurs (i.e. the build pack runs)
  • The new app is started
What’s important to understand about this process is that when the app is stopped, all instances of your application are stopped. Thus there will be some small amount of downtime while your new app stages and is started.
The typical suggestion for working around this is to do what are called blue / green deployments, which work by running both the current and new version of application at the same time. Since both apps are running, you can switch to the new app in a controlled fashion by simply manipulating the routes, something that happens instantly and does not require downtime.
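
To make that concrete, here's a minimal sketch of a blue / green cut-over with the cf CLI.  The app names, domain and hostname are made up for illustration; the point is simply that both versions run at once and the route is what flips traffic.

# push the new version alongside the old one (it gets its own temporary route)
cf push myapp-green
# map the production route to the new version (both versions now receive traffic)
cf map-route myapp-green example.com --hostname myapp
# remove the production route from the old version
cf unmap-route myapp-blue example.com --hostname myapp
# once you're satisfied with the new version, stop or delete the old one
cf stop myapp-blue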

Question #2 - How do you push updates to your application if it’s not taking web requests but still needs to maintain high availability?

If you have an application that is not taking HTTP requests, like a background worker, the typical blue / green deployment scenario may or may not work for you. If you’re running a background worker and need to keep it highly available, here are some things to consider.
  1. If you have a background worker style task, it may be as simple as starting a second instance of the application that is running the new code and then stopping the old instance. The key to making this work on CF is to use different application names (both bound to the same service, if the worker is using services). This will enable both to run at the same time and allow you to shutdown the old worker instance when you’re satisfied that the new code is working properly.

    Before doing this though, please keep in mind that there will be a window of time where there are two versions of your application running. This means that before adopting this approach, you should consider what will happen if there are two versions of your worker running at the same time. Will they play nice together or will they compete for the work, and will they both be compatible (e.g. did the database schema change, did message formats change, etc.)?
  2. Another solution to this problem is to simply ignore it. Depending on the architecture of your application you may be able to just push your new changes and ignore the fact that the application will be down for a small window of time. This will generally be the case for background worker tasks that are simply pulling jobs from a queue or database. This flexibility comes from the fact that by their nature the database or queue will hold the jobs while your application is not running. Given this, all you need to do is push the new change and wait for the app to catch up on its work.

    Before going with this approach, there are some important things that you should consider. First, you should have a good understanding of how long it will take for your new version of the application to stage, start up and begin doing work. This is critical and leads us to the second point. You need to have a good estimate as to how much work will be queued up while the application is restarting and if your service is capable of storing that much data. This is key to not losing any work while the new version of your application is starting up.

    Lastly, you want to consider how long it will take your application to recover from being down.  While the app is down, jobs will be queuing up on the database or messaging system. You'll want to consider how long it will take for the new application to catch up with the queued jobs.  If the time it takes to recover is too long, you may want to look at temporarily increasing the number of instances of your application.  If your application supports this, it will allow you to catch up more quickly.  Then after things are caught up, you can scale back down to your usual level with cf scale (there's a sketch of this after the list).
  3. With a blue / green deployment you have the luxury of running two versions of the application at once, but your end-users are only using one at any given time. This is accomplished by manipulating the application mappings such that your users get directed to the version that you want them to see. With a background task, there is no such external control or switch. As soon as you start the second version of your application, it’ll begin working.

    One way around the lack of an external switch would be to build an internal one into your application. This could be something like an “admin” console (or REST endpoint) that allows you to enable or disable processing, flipping a record in a database or even sending a special message to control the application.  Exactly how it’s implemented will largely depend on the application and what fits best for its workflow, but in the end what you have is an internal switch to turn on or off processing for the application.

    This switch can then be used in conjunction with the first or second approaches listed above to give some additional control over the application and your deployment workflow.
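
As a rough illustration of the first and second approaches above, the cf CLI commands might look something like the following.  The app names, service name and instance counts are hypothetical; adjust them to your own setup.

# approach 1: run the new worker under a different name, bound to the same service
cf push worker-v2 --no-route --no-start
cf bind-service worker-v2 jobs-db
cf start worker-v2
# once the new worker is confirmed healthy, retire the old one
cf stop worker-v1

# approach 2: after a plain push, temporarily scale up to drain the backlog
cf scale worker-app -i 4
# ...and scale back down once the queue has caught up
cf scale worker-app -i 1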

Tuesday, January 27, 2015

As I mentioned in a previous post, I was lucky enough to be selected to speak at SpringOne2GX 2014. I co-presented at the event with Stuart Williams on our talk, Fastest Servlets in the West.

As you might expect, the session covered the performance of Servlet-based applications running in Apache Tomcat.  We also talked about load testing and tuning the container, and presented some tips for squeezing every bit of performance out of your app when it's running on Apache Tomcat.

I'm happy to say that our presentation went quite well and the recording is now up on InfoQ.  If you're interested in watching the video, here's the link.