Automated VM provisioning with Packer

Nicolas Vogt
6 min read · May 8, 2021


Automated VM creation pipeline — interacting with VMware vSphere

Have you ever wondered how cloud providers are able to provision new environments so quickly? Have you ever dreamt of a one-click button to provision new virtual machines on your on-premise infrastructure? I came up with a solution, and I will share some hints in a mini-series. This first article focuses on how to interact with VMware vSphere using HashiCorp Packer.

Why Packer?

Packer is a tool provided by HashiCorp. Its true benefit is being able to plug into many different environments (VMware, Docker, AWS, Azure, …). Migrating to a cloud platform is very trendy, but it is not suitable for every workload. You will probably have to manage a hybrid infrastructure, with both on-premise and cloud components. This is where Packer thrives: you can decide to create a new VM on your on-premise VMware vCenter server or to start a brand new EC2 instance on Amazon AWS. It gives you enough agility to provision your workload on the most suitable platform. Once you have established your pipeline, all you have to do is define rules, or let your users choose the target platform.

The Idea

To make the pipeline scalable, I have opted for an event-driven approach. All the requests are stacked into a queue. My Packer process is embedded in a Kubernetes Deployment object and listens to this queue, waiting for new demands. A cronjob watches the queue size and scales the Deployment up or down against a threshold. It will probably be clearer with a visual presentation. Let's start simple; here is a view of the Packer process embedded in a container.

The Packer provisioning process

I am also using HashiCorp Vault to store all the secrets in a secure manner and Banzai Cloud to make sure there is no password in plain text in my Kubernetes deployments.

Also, the installation process requires a kickstart file, since I am deploying CentOS/RHEL virtual machines. Any Linux distribution comes with a text-mode installation process, and you can provide an answer file (kickstart for RHEL-like systems, preseed for Debian) in order to generalize your installation. This is what I do, but the specificity here is that my Packer build will set neither the IP address nor the password. This is due to my company's network segmentation: the production environments are located in different sub-networks with dedicated VLANs, and my Kubernetes hosts have restricted access to those sub-networks. Thus I cannot use Packer's built-in SSH feature to set up the environment thoroughly, so I have chosen a combination of Packer, kickstart and Ansible AWX to perform the complete setup, from zero to production.

OK, let's now review our architecture diagram, with a broader view.

Architectural View of the overall provisioning process

I forgot to mention that my queues are handled by RabbitMQ; you will find the Helm chart to deploy it in the cluster in my GitHub repository.

In a future post I will show you how I handle the incoming requests with an API on Apache NiFi before they arrive in the process queue.

A little piece of code

Now that we have a better view of what we have to do, let's write some code. I hope your expectations are not too high here, because it's high-school-level Python. Since I am not a developer, it will be very basic.

We will start with main.py. First, the imports:

Then the logger configuration:

Packer requires that you provide the path to the executable and to the configuration file:
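A sketch of those two settings; the paths and environment variable names here are assumptions, adapt them to your image layout:

```python
import os

# Path to the packer binary and to the template it will build.
# Both can be overridden through the environment at deploy time.
PACKER_EXEC_PATH = os.environ.get("PACKER_EXEC_PATH", "/usr/local/bin/packer")
PACKER_FILE = os.environ.get("PACKER_FILE", "/app/template.json")
```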

The configuration file is a Jinja-style template; we will come to this in a minute. In my case, I set default variables to prevent missing parameters:
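The exact defaults depend on your environment; a minimal sketch, with illustrative keys and values:

```python
# Fallback values used when the queue message omits a parameter
default = {
    "vm_cpu": "2",
    "vm_ram": "4096",          # MB
    "vm_disk": "40960",        # MB
    "vm_network": "VM Network",
    "guest_os_type": "centos7_64Guest",
}
```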

And then I get my vSphere credentials from my environment variables:
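For example (the variable names are assumptions; in the final deployment these values are injected by the Vault webhook described earlier):

```python
import os

# vSphere credentials come from the environment, never from the code
vcenter_server = os.environ.get("VCENTER_SERVER", "")
vcenter_user = os.environ.get("VCENTER_USER", "")
vcenter_password = os.environ.get("VCENTER_PASSWORD", "")
```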

I am using the pika library to access my RabbitMQ queue. The following piece of code should remain valid as long as the queue server speaks AMQP.
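A sketch of the connection setup, assuming standard AMQP settings read from the environment; pika is imported lazily so the settings helper stays testable on its own:

```python
import os

def rabbitmq_settings():
    """Collect RabbitMQ connection settings from environment variables."""
    return {
        "host": os.environ.get("RABBITMQ_HOST", "localhost"),
        "port": int(os.environ.get("RABBITMQ_PORT", "5672")),
        "vhost": os.environ.get("RABBITMQ_VHOST", "/"),
        "user": os.environ.get("RABBITMQ_USER", "guest"),
        "password": os.environ.get("RABBITMQ_PASSWORD", "guest"),
    }

def open_channel(settings):
    """Open a blocking AMQP connection and return a channel."""
    import pika  # lazy import: only needed when actually connecting
    credentials = pika.PlainCredentials(settings["user"], settings["password"])
    parameters = pika.ConnectionParameters(
        host=settings["host"],
        port=settings["port"],
        virtual_host=settings["vhost"],
        credentials=credentials,
    )
    return pika.BlockingConnection(parameters).channel()
```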

Note that the credentials are set from environment variables. They will be passed in when we start our container.

Now we can start consuming our queue:
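A minimal polling consumer could look like this; the queue name and the use of basic_get are assumptions, but the shape matches the behaviour described below (poll, sleep when empty, otherwise hand the message to the build logic):

```python
import json
import time

def poll_queue(channel, queue_name, idle_seconds=15):
    """Fetch one message; return its decoded body, or None after a
    short pause when the queue is empty."""
    method, _properties, body = channel.basic_get(queue=queue_name, auto_ack=True)
    if method is None:
        time.sleep(idle_seconds)
        return None
    return json.loads(body)

def consume_forever(channel, queue_name, handler):
    """Main loop: hand every received message to the build handler."""
    while True:
        message = poll_queue(channel, queue_name)
        if message is not None:
            handler(message)
```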

If nothing is returned by the queue, the process just sleeps for fifteen seconds; otherwise it executes the following:

Basically, once the message is received, I load the parameters into paramdict: if a parameter is provided in the message I take it from there, otherwise I take it from the default dict.

Then I create a Packer object, call the build method and wait for the process to end (you can also choose an asynchronous mode). At the end, I publish a new message in the out queue.
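The handler could be sketched like this. The merge logic is straightforward; the build call assumes python-packer's Packer class with its vars and exec_path arguments, and the out-queue name is illustrative:

```python
import json

def build_params(message, default):
    """Merge the request with the defaults: values present in the
    message win, anything missing falls back to the default dict."""
    paramdict = dict(default)
    paramdict.update({k: v for k, v in message.items() if v not in (None, "")})
    return paramdict

def run_build(packerfile, paramdict, exec_path="/usr/local/bin/packer"):
    """Run the build synchronously and return packer's output."""
    import packer  # python-packer shells out to the packer binary
    p = packer.Packer(packerfile, vars=paramdict, exec_path=exec_path)
    return p.build(parallel=False, debug=False, force=False)

def publish_result(channel, out_queue, payload):
    """Tell the next stage (Ansible AWX in this pipeline) the VM is ready."""
    channel.basic_publish(exchange="", routing_key=out_queue,
                          body=json.dumps(payload))
```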

The packer configuration file

In order to run the build via Packer, the python-packer library opens a shell command to call the executable, just as you would do manually. You still have to provide a configuration file; here is what it looks like:
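An abridged sketch of a vsphere-iso JSON template; the variable names are the ones used in the Python sketches above and will need to be adapted to your inventory:

```json
{
  "builders": [
    {
      "type": "vsphere-iso",
      "vcenter_server": "{{user `vcenter_server`}}",
      "username": "{{user `vcenter_user`}}",
      "password": "{{user `vcenter_password`}}",
      "insecure_connection": true,
      "datacenter": "{{user `datacenter`}}",
      "cluster": "{{user `cluster`}}",
      "datastore": "{{user `datastore`}}",
      "vm_name": "{{user `vm_name`}}",
      "guest_os_type": "{{user `guest_os_type`}}",
      "CPUs": "{{user `vm_cpu`}}",
      "RAM": "{{user `vm_ram`}}",
      "network_adapters": [{ "network": "{{user `vm_network`}}" }],
      "storage": [{ "disk_size": "{{user `vm_disk`}}" }],
      "iso_paths": ["{{user `iso_path`}}"]
    }
  ]
}
```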

The double brackets specify input parameters. We have to provide them when we call Packer so it can substitute them with the given values; this is what we did a little earlier when we called the build method.

As you can see, we are using the vsphere-iso builder. Packer will connect to your vCenter server and start to build the environment with the given parameters. The little trick is that I do not want Packer to open an SSH connection to the remote virtual machine. For this reason, I had to set the communicator parameter to none:
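That is a single line in the builder block:

```json
"communicator": "none"
```

With no communicator, Packer does not wait for SSH; it considers the build done when the guest powers itself off, which the kickstart takes care of.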

The installation process takes a little more than 5 minutes, which is the default timeout value. I had to push shutdown_timeout to fifteen minutes to leave it enough time to complete. For some mysterious reason I also had to set the shutdown_command parameter, otherwise Packer fails with an error.
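In the builder block, that gives something like this (the empty shutdown_command is the workaround described above):

```json
"shutdown_timeout": "15m",
"shutdown_command": ""
```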

You will probably have to adapt this part to your infrastructure. I came up with this solution after struggling to reach my kickstart file. The kickstart is served by a Flask app behind a load balancer and is not reachable via its IP but via its canonical name only. The Flask app modifies the returned content according to the IP address in the URL.

In most cases, people have a DHCP server to serve IP addresses dynamically during boot. I do not have one on every target sub-network, which is why I have to force an IP address on the bootloader command line.
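For a CentOS 7 guest, the boot_command could look like the sketch below; the kickstart host, the interface name (ens192) and the dracut ip= syntax (client-IP::gateway:netmask:hostname:interface:none) are assumptions to adapt:

```json
"boot_wait": "5s",
"boot_command": [
  "<tab> text inst.ks=http://{{user `ks_server`}}/ks.cfg ",
  "ip={{user `vm_ip`}}::{{user `vm_gateway`}}:{{user `vm_netmask`}}:{{user `vm_name`}}:ens192:none",
  "<enter>"
]
```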

Container Building

Now that we have written our code, we have to embed it in a Docker container.

First we need to create a requirements.txt file like this:
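Only the two third-party libraries used by main.py are needed (pin the versions if you want reproducible builds):

```
pika
python-packer
```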

And a Dockerfile:
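A sketch of what it could look like; the Python and Packer versions are assumptions, and the packer binary is fetched from HashiCorp's release server since it is a static executable:

```dockerfile
FROM python:3.9-alpine

# packer is a static binary; fetch the release matching your platform
ARG PACKER_VERSION=1.7.2
RUN apk add --no-cache curl unzip \
 && curl -fsSL -o /tmp/packer.zip \
      https://releases.hashicorp.com/packer/${PACKER_VERSION}/packer_${PACKER_VERSION}_linux_amd64.zip \
 && unzip /tmp/packer.zip -d /usr/local/bin \
 && rm /tmp/packer.zip

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY main.py template.json ./

CMD ["python", "main.py"]
```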

I chose python-alpine, but I considered python-slim in order to reduce the footprint. In the first versions I used the hvac library to access the Vault, and it refused to build on slim. I should try again now that I have externalized the Vault connection.

Then run the docker build command and push the image to your registry.
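For example (the registry name is a placeholder for your own):

```
docker build -t registry.example.com/packer-builder:1.0 .
docker push registry.example.com/packer-builder:1.0
```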

Kubernetes Deployment

One thing you have to consider before using this Helm chart is that I store my secrets in HashiCorp Vault and I set up banzai-cloud to fetch the secrets when a container starts. There are some specifics in the deployment if you look at the annotations part. It also requires a serviceAccount and some configuration on the Vault side to make it work. So you will probably have to remove some parts to fit your organisation.

The values.yaml should look like this:
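A sketch of the shape such a values file could take; the registry, Vault address and secret paths are placeholders, and the annotations follow the banzai-cloud webhook's vault:path#key convention for injecting secrets:

```yaml
replicaCount: 1

image:
  repository: registry.example.com/packer-builder   # your registry
  tag: "1.0"

serviceAccount:
  create: true
  name: packer-builder

# banzai-cloud vault webhook annotations (adapt to your Vault setup)
podAnnotations:
  vault.security.banzaicloud.io/vault-addr: "https://vault.example.com:8200"
  vault.security.banzaicloud.io/vault-role: "packer-builder"

env:
  RABBITMQ_HOST: rabbitmq.queueing.svc
  VCENTER_SERVER: vcenter.example.com
  VCENTER_USER: "vault:secret/data/vsphere#username"
  VCENTER_PASSWORD: "vault:secret/data/vsphere#password"
```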

That is all for now; you should be able to build your own Packer builder. Next time I will talk about how to interact with Ansible AWX.
