Sunday, February 17, 2019

Cloud Native Buildpacks

In the past, I've worked with buildpacks through my time using Cloud Foundry. Cloud Foundry has first-class support for buildpacks, which lets you push code and have the buildpack handle the messy parts of actually running it: installing a language runtime, installing servers, and so on.

Recently, the buildpacks world has expanded with the CNCF's acceptance of the Cloud Native Buildpacks project (sometimes called v3 buildpacks) into the CNCF sandbox. In addition to an excellent and easily readable spec, this work brings us the `pack` CLI tool, which lets you run Cloud Native Buildpacks on your local PC and easily deploy the output, an OCI image, to Docker or anywhere else you can run an OCI image.

In this post, I'm going to walk through the basics and show you how to get started with `pack`, build some images, and run them.

Getting Started

To get started, you need to install Docker. The Community Edition works fine. Follow the previous link to get it installed if you don't have it already.
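
If you'd like a quick check that Docker is working before going further, Docker's own test image makes an easy smoke test (it's unrelated to the app we build below):

$ docker run --rm hello-world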

Then install the `pack` CLI. You can download `pack` from its GitHub project here. At the time of writing, I'm using the 0.0.9 release. Download the tar or zip, extract the `pack` binary, and put it somewhere on your PATH. On Mac/Linux, `/usr/local/bin` is a good place. Once installed, you should be able to run `pack version` and see `v0.0.9 (git sha: a1a1a0eef63bd09136ab76663bdbc3b0ab3a4931)`.
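
If you prefer the terminal, installation boils down to something like the following. The asset URL here is illustrative, so grab the real one from the releases page.

$ curl -sSL -o pack.tgz https://github.com/buildpack/pack/releases/download/v0.0.9/pack-v0.0.9-macos.tgz
$ tar -xzf pack.tgz pack  # assumes the archive contains a top-level `pack` binary
$ sudo mv pack /usr/local/bin/
$ pack version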

Hello World

To get a basic app going, we need to do one more thing first: obtain some buildpacks to use. Run `git clone https://github.com/buildpack/samples`, which is a repo that has a couple of very basic sample buildpacks.

Sidebar: at the time of writing, the sample buildpack we're using has an error in its metadata (this may be fixed by the time you read this). Edit `samples/hello-world-buildpack/buildpack.toml` and put in the following:

[buildpack]
id = "io.buildpacks.samples.buildpack.hello-world"
version = "0.0.1"
name = "Hello World Buildpack"

[[stacks]]
id = "io.buildpacks.stacks.bionic"

Now that we have buildpacks, we need an app to run. We'll create that now. Run `mkdir hello-world` and then `cd hello-world`. In that folder create `app.sh` and put the following in that file.

#!/bin/bash

while true; do
  echo "Hello World!"
  sleep 5
done

Last step: run `chmod 755 app.sh` to make it executable.
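
Before involving containers at all, you can sanity-check the script locally. It loops forever, so stop it with Ctrl+C.

$ ./app.sh
Hello World!
Hello World!
^C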

At this point, we now have a buildpack to use and our application code. It's time to run `pack` and make an image.

From our application directory, run `pack build --buildpack $(cd ..; pwd)/samples/hello-world-buildpack/ hello-world-app`, replacing `$(cd ..; pwd)/samples/hello-world-buildpack/` with the full path to the sample repo you cloned above if needed. This creates an image called `hello-world-app` using the `hello-world-buildpack`, which does nothing (it's a no-op). The output should look something like this.

$ pack build --buildpack $(cd ..; pwd)/samples/hello-world-buildpack/ hello-world-app
Defaulting app directory to current working directory /Users/dmikusa/Downloads/hello-world (use --path to override)
Using default builder image packs/samples:v3alpha2
Pulling builder image packs/samples:v3alpha2 (use --no-pull flag to skip this step)
Selected run image packs/run:v3alpha2 from stack io.buildpacks.stacks.bionic
Pulling run image packs/run:v3alpha2 (use --no-pull flag to skip this step)
Using cache volume pack-cache-153f385b25c48f5d30ee0544d75bee63
===> DETECTING
Using manually-provided group
[detector] 2019/02/17 21:09:40 Trying group of 1...
[detector] 2019/02/17 21:09:41 ======== Results ========
[detector] 2019/02/17 21:09:41 Hello World Buildpack: pass
===> ANALYZING
Reading information from previous image for possible re-use
[analyzer] 2019/02/17 21:09:42 WARNING: image 'hello-world-app' not found or requires authentication to access
[analyzer] 2019/02/17 21:09:42 removing cached layers for buildpack 'config' not in group
===> BUILDING
[builder] ---> Hello World buildpack
[builder]      env_dir: /platform/env
[builder]      plan_path: /tmp/plan.333599924/io.buildpacks.samples.buildpack.hello-world/plan.toml
[builder]      layers_dir: /workspace/io.buildpacks.samples.buildpack.hello-world
[builder] ---> Done
===> EXPORTING
[exporter] 2019/02/17 21:09:48 adding layer 'app' with diffID 'sha256:361cdaf2662ea41f08da0204a4c0393beb629cff85df2b3d650ed7423dc188f2'
[exporter] 2019/02/17 21:09:48 adding layer 'config' with diffID 'sha256:ab046f0bf0b24db6ae8f59e437cc570925e451cb60ae714fcb125ab4095dd9bb'
[exporter] 2019/02/17 21:09:49 adding layer 'launcher' with diffID 'sha256:d77dc7ed6207d6bb9c389aa5f087ea7fffea9238e2de84b03f8b3c1152e1e58f'
[exporter] 2019/02/17 21:09:49 setting metadata label 'io.buildpacks.lifecycle.metadata'
[exporter] 2019/02/17 21:09:49 setting env var 'PACK_LAYERS_DIR=/workspace'
[exporter] 2019/02/17 21:09:49 setting env var 'PACK_APP_DIR=/workspace/app'
[exporter] 2019/02/17 21:09:49 setting entrypoint '/lifecycle/launcher'
[exporter] 2019/02/17 21:09:49 setting empty cmd
[exporter] 2019/02/17 21:09:49 writing image
[exporter] 2019/02/17 21:09:49
[exporter] *** Image: hello-world-app@9b265f861002fa1018d48577fb2f78c4e32b58f72f81c8ef9f6692d2040b4d60
Successfully built image hello-world-app

The interesting bits for now are DETECTING, where the buildpack's detection script runs. This buildpack doesn't do anything, but we can see it's marked as "pass", which means the buildpack's build script will get a chance to run. Down below, you can see that happening under BUILDING. Again, it does nothing but echo a few directories where files reside during the build. Real buildpacks use detect to determine when they should or shouldn't run, and build to install things like runtimes, servers and all the other stuff necessary to run your apps.
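
To make that concrete, here's a rough sketch of the two executables a buildpack ships in its `bin/` directory. The argument layout mirrors the directories printed in the build log above, but treat the details as illustrative; the spec was still evolving at the time of writing.

bin/detect (runs during DETECTING; exit 0 means "pass"):

#!/bin/bash
# A real buildpack would inspect the app directory here, e.g.
# exit non-zero unless a Gemfile or pom.xml is present.
exit 0

bin/build (runs during BUILDING):

#!/bin/bash
# Arguments (assumed from the log above): layers dir, platform dir, plan path.
# A real buildpack would install runtimes or servers into layers under $1.
echo "---> Hello World buildpack"
echo "     layers_dir: $1"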

The output from above is an image that you can run. If you execute `docker images`, you'll see `hello-world-app` listed.

$ docker images
REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
hello-world-app                            latest              9b265f861002        6 minutes ago       164MB

You can then run it with `docker run -it --name=hello hello-world-app bash app.sh`. The app will run forever, printing "Hello World!" every five seconds. Run `docker stop hello` to stop the app.

Hello World++

To spice things up just a little and show what it's like to deploy changes to our app, let's edit our `app.sh` script. Change it to the following.

#!/bin/bash

while true; do
  if [ -z "$NAME" ]; then
    echo "Hello World!"
  else
    echo "Hello $NAME!"
  fi
  sleep 5
done

This allows us to provide a name to print. Run `pack build --buildpack $(cd ..; pwd)/samples/hello-world-buildpack/ hello-world-app` again to create a new image with our updated app.

Side note: if you run `docker images`, you'll see that the old image is now untagged; it's no longer used and can be removed at your leisure.

$ docker images
REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
hello-world-app                            latest              793b5e60b5ad        9 seconds ago       164MB
<none>                                     <none>              9b265f861002        20 minutes ago      164MB

To run the updated app image, you can use the same command, `docker run -it --name=hello hello-world-app bash app.sh`, and you'll see the same output (if Docker complains that the container name `hello` is already in use, remove the old container first with `docker rm hello`). However, if you run `docker run -it -e NAME=Daniel --name=hello hello-world-app bash app.sh`, you'll see our enhancement.

$ docker run -it -e NAME=Daniel --name=hello hello-world-app bash app.sh
Hello Daniel!
... 

We use Docker's ability to set environment variables to inject some data into our application. More importantly, though, you can see that pushing updates and changes uses the same process as before, which makes integrating with build systems and CI/CD systems simple.
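
For example, a CI job's build-and-publish step could be as small as the script below. The registry name is a placeholder; everything else is the same `pack` and `docker` usage from above.

#!/bin/bash
set -e
# Build the image from the checked-out application source.
pack build --buildpack /path/to/samples/hello-world-buildpack/ hello-world-app
# Tag and push it wherever your deployments pull from (registry is hypothetical).
docker tag hello-world-app registry.example.com/hello-world-app:latest
docker push registry.example.com/hello-world-app:latest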

Summary

I hope you find getting started easy. Once you have Docker and `pack` installed, it's one command to stamp out an image using a buildpack and your application. Right now that buildpack isn't doing anything, so it's not the best demonstration of why you'd want to use buildpacks or of their full power, but I hope it's enough to get you thinking about how this can integrate into your build flows, maybe your CI/CD system, and how it can work for you.

My next post will be more practical. It'll dig into some actual buildpacks and show how you can use them to make images for real applications, which I hope will better showcase why you'd want to use buildpacks.

Friday, May 18, 2018

WordPress Running on Cloud Foundry

I'd previously written an article on deploying WordPress on Cloud Foundry.  The process was a little clunky and has since broken because of updates and changes to Cloud Foundry.  To remedy this, I wrote a new post, which was published today on the Cloud Foundry Foundation Blog.

Here's the link -> https://www.cloudfoundry.org/blog/install-scale-wordpress-cloud-foundry-2018/


Sunday, January 08, 2017

FreeNAS: Migrating from VirtualBox to bhyve

I, like many people, use the VirtualBox template jail on FreeNAS.  I've been using it for about a year to run CrashPlan.  It's generally worked well, but the last couple of 9.10 maintenance updates have caused some problems (ex: "The virtual machine 'xxxx' has terminated unexpectedly during startup with exit code 1").  See here for more on how 9.10.2 broke the VirtualBox template.

From what I've been able to read on the FreeNAS forums, this isn't going to get fixed, and you have two options: don't upgrade and keep using VirtualBox, or upgrade and use bhyve.  Since I only have one VM and it's pretty simple, I opted for the latter.  This post covers the steps I had to take to convert from VirtualBox to bhyve and iohyve without reinstalling the VM's OS.

Convert the Image

The first step is to convert your old VirtualBox image to a raw disk image.  There's no way to import the VDI or use it directly with iohyve, but you can convert it to a raw disk image and use that.  To convert the disk, do the following.
  1. SSH to your VirtualBox jail.
  2. Install the qemu-devel pkg with the command `pkg install qemu-devel`.
  3. Run `qemu-img convert -f vdi -O raw /path/to/virtualbox/vm.vdi /path/to/raw/image.img`
That's it.  You now have a raw image file of your old VirtualBox VM.
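
If you'd like to double-check the result before moving on, `qemu-img` can describe the new file.  You should see `file format: raw` and a virtual size that matches your old disk.

$ qemu-img info /path/to/raw/image.img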

Set Up iohyve

There is some setup needed before you can use iohyve on your FreeNAS.  This only needs to be done once, so if you've used iohyve on your FreeNAS before, you can skip these steps.
  1. Initialize iohyve by running `iohyve setup pool=main kmod=1 net=igb0`.  Note that you need to replace `main` with the name of your ZFS pool and `igb0` with the name of your network device.
  2. Run `ln -s /mnt/iohyve /iohyve`.  I don't know exactly why this symlink is needed, but make sure it exists so that you comply with the requirement in the iohyve README.

Create a New VM

The next step is to create our new VM using iohyve.  If you're not familiar with it, iohyve is a wrapper script around bhyve that makes managing your VMs easier.  It's also installed out of the box on FreeNAS 9.10.2, which makes it a logical choice to use.

To create a new VM with iohyve, run the following commands:
  1. Create the VM: run `iohyve create crashplan 8G`.  The `8G` should be the maximum size of your disk.
  2. Configure the VM: run `iohyve set crashplan loader=grub-bhyve os=ubuntu ram=512M cpu=1`.  We set `loader=grub-bhyve` because this is a Linux VM booted by GRUB.  Set the OS to whatever you're using; my VM runs Ubuntu (if you're using LVM with Debian or Ubuntu, use `d8lvm` instead).  Then set the RAM and CPU limits; I picked 512M of RAM and one VCPU.
  3. You can now run `iohyve getall crashplan` to confirm all the settings of your VM.
Your VM is now created.  At this point you would normally proceed to install an OS; however, we've already done that in VirtualBox, so we don't want to do it again.
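
As a quick sanity check, `iohyve list` should now show the new VM (assuming your version of iohyve includes that subcommand).

$ iohyve list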

Import the Raw Image

The next step is to import the raw image that we exported previously.  This is simply done with `dd`.  When using iohyve, you'll see that there is a ZFS dataset for each VM.  For example, my ZFS pool is named `main` and my VM is named `crashplan`, so my VM's dataset is `main/iohyve/crashplan`.  Under that, there is a zvol for each disk; in my case there's only one disk, so it's `disk0`.  To import the disk, we just need to `dd` the raw disk image onto `disk0`.  That can be done with the following command.
dd if=/mnt/main/jails/virtualbox_jail/home/vbox/VirtualBox\ VMs/CrashPlanBackup/CrashPlanBackup.img of=/dev/zvol/main/iohyve/crashplan/disk0
Make sure you replace `if=...` with the path to your raw image file and `of=...` with the path to your iohyve VM disk.  Once `dd` finishes, the VM should be ready to run.
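
If you want to verify the target zvol exists (and re-check the dataset names), you can list the volumes iohyve created.  The dataset name below assumes the `main` pool from my setup.

$ zfs list -t volume -r main/iohyve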

Run Your VM

To start your VM, run `iohyve start crashplan`.  Then run `iohyve console crashplan` to connect a console to the VM.  The console will display output from the VM as it starts up.  When it's done booting, you'll even be able to log in through the console, but you'll probably want to connect via SSH instead.

Start VM at Boot

At this point your VM should be up and running; however, if you were to restart your FreeNAS, the VM would not come back up automatically.  Fortunately, it's pretty easy to make the VM start at boot.  The following steps set that up.

  • In the Freenas UI, go to System -> Tunables.
  • Add a tunable.  Variable is `iohyve_enable`, value is `YES`, type is `rc.conf`. Make sure enabled is checked.
  • Add a tunable.  Variable is `iohyve_flags`, value is `kmod=1 net=igb0` (where `igb0` is the name of your network device), type is `rc.conf`.  Also make sure enabled is checked.
These settings will enable iohyve and get it started.  From there, we need to tell iohyve to start our VM.  That can be done with the following command.

iohyve set crashplan boot=1
As with all the commands I've listed, replace `crashplan` with the name of your VM.

Clean Up

While not strictly necessary, you can now clean up the old VirtualBox template jail.  Simply delete the jail, and that should be it.

Summary

With any luck, you should now have your VirtualBox VM functional and running on bhyve instead.  More work may be necessary if you've got a more complicated VirtualBox VM: for example, one with multiple disks, multiple NICs, a GUI, or VirtualBox-specific features like shared folders.  In that case, you may want to stay on VirtualBox until official support for bhyve is ready in FreeNAS 10.

Wednesday, July 15, 2015

2015 MacAdmins Conference at Penn State


The 2015 MacAdmins Conference at Penn State was last week.  It was a fantastic conference, and I had the good fortune to be able to both attend and speak at the event.  If you were in one of my sessions and are looking for slides or code samples, see the info below.  I believe video should be available shortly; as soon as it is, I'll update this post with the relevant links.

Basic App Development with Swift

The Sys Admin's Guide to Python

If you missed the conference this year, put next year's dates on your calendar now.  The 2016 MacAdmins Conference at Penn State is scheduled for June 27-30, 2016.  See you then!