HowTo: Automated Deployment of FreeBSD VMs in Proxmox with ProxLB and Terraform

In today’s fast-paced IT environments, automation is essential for maintaining efficiency and staying competitive. Whether you’re managing a small-scale infrastructure or an enterprise-level system, the ability to deploy virtual machines (VMs) quickly, consistently, and with minimal manual intervention can be transformative. This is where tools like ProxLB and Terraform come into play, offering a powerful solution for automating the deployment and management of VMs.

Why Terraform and not Ansible? I’m aware that Ansible is a great tool – also for tasks like this – but Ansible can become slow once a play contains many tasks, unless you write your own custom modules that improve the overall handling. Terraform is often the better choice when it comes to building up a base infrastructure from scratch, and Ansible can take over once this baseline has been set. So, let’s have a look at the other tools used here.

ProxLB is a robust load balancer specifically designed for Proxmox environments, enabling seamless resource distribution across multiple nodes in a Proxmox cluster. It also redistributes the VM’s disks on shared storages if needed. This ensures high availability and optimal resource utilization. On the other hand, Terraform is an open-source infrastructure-as-code tool that allows you to define, provision, and manage infrastructure using simple, declarative configuration files.

In this guide, we will explore how to leverage ProxLB and Terraform to automate VM deployments. From automatically finding the best node for a new VM placement within the cluster and setting up your environment, to writing Terraform configurations and deploying VMs managed by ProxLB, we’ll cover everything you need to know to streamline this process.

The benefits of automating VM deployments with ProxLB and Terraform are significant. First, automation accelerates the deployment process, allowing you to spin up new environments quickly and with consistent configurations, reducing the potential for human error. As your infrastructure grows, managing it becomes increasingly complex; ProxLB simplifies this by efficiently balancing the load across your VMs, making scaling your system easier. Finally, by codifying your infrastructure with Terraform, you achieve consistency and repeatability, ensuring that every deployment adheres to the same standards and practices, which is crucial for maintaining reliability and security in your systems.

Goal
The primary goal of this solution is to build a fully automated way of creating FreeBSD-based VMs on a Proxmox cluster, which can also be used in other projects like my BoxyBSD one – of course, this also works with any other OS that provides cloud images and supports cloud-init. When creating new VMs on the Proxmox cluster, the best matching node in the cluster for placing the new VM object should be evaluated first. Afterwards, the VM will be created from a given image by using Terraform with the BPG provider. In this case, a counter in the hostname (e.g., managed-vm30.boxybsd.com) will be incremented dynamically in an automated way. The cluster node for placing the VM will also be passed dynamically to Terraform. To make this work, we will use a very simple shell script, which will be called and orchestrates all other tools until the new VM object has been created:

* create_vm.sh
  * Creating an incremented number
  * Starting ProxLB (with the best-node option) to return the best node for a new VM placement
  * Starting Terraform with dynamic values for hostname and node

All required content and manifests for this how-to can be found in my git at brew.bsd.cafe.

Requirements
This how-to assumes that you already have a dedicated Debian-based management VM in place which can reach the Proxmox cluster via the API. In addition, we will need ProxLB and Terraform with the BPG provider. All required steps will be described in detail.

Installation
General
Before installing the major tools, we should make sure that everything else is in place, so we don’t need to install additional packages later and can proceed straight forward with the setup. As a result, we simply install the following packages:

apt-get install git apt-transport-https wget lsb-release

ProxLB
ProxLB is a great tool for Proxmox clusters, offering many enterprise features that you might already know from the VMware ESX world but which are not directly available in Proxmox itself. Whenever a suitable node has to be found according to memory, CPU or disk configuration for the placement of VMs or CTs, or the cluster needs ongoing re-balancing to ensure resources are always distributed in the right way, ProxLB steps in. Within our pipeline, ProxLB will be used to obtain the best node according to the current resource usage in the cluster for placing a new VM object. In the configuration we will use memory as the balancing method, where the node with the most free memory will be selected. ProxLB can run on any system that supports Python3, so we can simply install it on our management system – all further commands fully use the Proxmox API and no additional SSH access is needed.

You can find the ProxLB installation guide here, but we will quickly cover all the actions needed to install and configure it below.

echo "deb https://repo.gyptazy.ch/ /" > /etc/apt/sources.list.d/proxlb.list
wget -O /etc/apt/trusted.gpg.d/proxlb.asc https://repo.gyptazy.ch/repo/KEY.gpg
apt-get update && apt-get -y install proxlb

Afterwards, ProxLB can be configured. The configuration file is placed in /etc/proxlb/proxlb.conf. For the desired use case, a simple configuration is created:

[proxmox]
api_host: cluster-vip02.gyptazy.ch
api_user: proxlb@pam
api_pass: P@ssw0rd!
verify_ssl: 1
[vm_balancing]
enable: 1
method: memory
mode: used
[service]
daemon: 0
schedule: 24
log_verbosity: CRITICAL
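
The configuration references an API user proxlb@pam that authenticates with a password. If such a user does not exist yet, it can be created on one of the cluster nodes roughly as follows. This is only a sketch – the user name and role are assumptions, and since @pam users authenticate against the operating system, the account and its password are managed with the usual Linux tools:

# Sketch: create a PAM-backed API user for ProxLB (user name and role are assumptions)
useradd -m proxlb && passwd proxlb
pveum user add proxlb@pam
pveum acl modify / --users proxlb@pam --roles PVEVMAdmin   # adjust the role to your needs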

Afterwards, we can already use ProxLB to get the best node for placing a new VM in our cluster. For such use cases, ProxLB provides the option -b (--best-node) to get the best node according to the given configuration. It’s good to give it a try:

$> proxlb -b
virt01
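
Since this output is exactly what we will later feed into Terraform, it can be useful to capture it in a shell variable and fail early if ProxLB could not determine a target node – a minimal sketch:

# Capture the suggested node and abort if ProxLB returned nothing
BEST_NODE=$(proxlb -b)
if [ -z "$BEST_NODE" ]; then
    echo "ERROR: ProxLB did not return a target node" >&2
    exit 1
fi
echo "Placing the next VM on: $BEST_NODE"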

Terraform
Terraform is an open-source Infrastructure as Code (IaC) tool, developed by HashiCorp. It allows developers and operators to define and provision data center infrastructure using a declarative configuration language. Therefore, we can declare and configure our desired state of new VMs (based on FreeBSD or any other operating system) in our Proxmox cluster infrastructure.

First, we start to install it on our management system just next to ProxLB.

echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" > /etc/apt/sources.list.d/terraform.list
wget -O /etc/apt/trusted.gpg.d/terraform.asc https://apt.releases.hashicorp.com/gpg
apt-get update && apt-get -y install terraform
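
To verify that the installation worked, we can simply print the installed Terraform version:

terraform version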

We can now use Terraform. For a quick start, we can use a pre-configured repository containing all the information and configuration required to deploy a VM in a Proxmox cluster by using the BPG Proxmox provider for Terraform.

mkdir -p ~/training/proxmox/terraform_deployment
cd ~/training/proxmox/terraform_deployment
git clone https://brew.bsd.cafe/gyptazy/proxmox-terraform-bpg-freebsd-vm
cd proxmox-terraform-bpg-freebsd-vm

This provides us with the following directory structure:

.
├── README.md
├── cloud-init
│   └── user-data.yml
├── create_vm.sh
├── files.tf
├── main.tf
├── variables.tf
├── virtual_machines.tf
└── vmcount.txt

Below, you can also see the content of the files:

files.tf

resource "proxmox_virtual_environment_file" "freebsd14_cloud" {
    content_type = "iso"
    datastore_id = "local"
    node_name = "virt01"
    source_file {
      path = "https://cdn.gyptazy.com/files/os/freebsd/images/cloud/freebsd-14.0-ufs-2024-05-04.qcow2"
      file_name = "freebsd-14.0-ufs-2024-05-04.img"
      checksum = "f472fa7af26aae23fdc92f4a729f083a6a10ce43b1f2049573382bc755c81750"
    }
}

resource "proxmox_virtual_environment_file" "cloud_config" {
    content_type = "snippets"
    datastore_id = "local"
    node_name = "virt01"
    source_file {
      path = "cloud-init/user-data.yml"
    }
}
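
One thing to keep in mind: uploading the cloud-init file as a snippet requires the snippets content type to be enabled on the target datastore, which is not the case for local by default. If needed, it can be enabled on a cluster node roughly like this (a sketch – keep the content types your storage already serves):

# Enable the snippets content type on the local datastore (adjust the list to your setup)
pvesm set local --content iso,vztmpl,backup,snippets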

main.tf

terraform {
    required_providers {
      proxmox = {
      source = "bpg/proxmox"
      version = "0.63.0"
      }
    }
}
provider "proxmox" {
    endpoint = "https:/cluster-vip02.gyptazy.ch:8006/"
    username = "root@pam"
    password = "p4ssw0rd!"
    insecure = true
    ssh {
      agent = false
    }
}
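
Using the root password directly works, but the BPG provider also supports authenticating with a Proxmox API token, which is usually the better option for automation. A possible provider block could look roughly like this – the token user, name and value are placeholders that you would have to create in Proxmox beforehand:

provider "proxmox" {
    endpoint = "https://cluster-vip02.gyptazy.ch:8006/"
    # API token instead of username/password (format: user@realm!tokenname=uuid)
    api_token = "terraform@pam!deploy=00000000-0000-0000-0000-000000000000"
    insecure = true
    ssh {
      agent = false
    }
}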

virtual_machines.tf

resource "proxmox_virtual_environment_vm" "vm_object" {
    name = var.hostname
    description = "Managed by Terraform"
    tags = ["terraform"]
    node_name = var.node
    cpu {
      cores = 1
    }
    memory {
      dedicated = 2048
    }
    agent {
      enabled = true
    }
    network_device {
      bridge = "vmbr0"
    }
    disk {
      datastore_id = "local-lvm"
      file_id = proxmox_virtual_environment_file.freebsd14_cloud.id
      interface = "scsi0"
      size = 32
    }
    operating_system {
      type = "l26"
    }
    serial_device {}
    initialization {
      datastore_id = "local-lvm"
      user_data_file_id = proxmox_virtual_environment_file.cloud_config.id
      ip_config {
        ipv4 {
          address = "dhcp"
        }
      }
    }
}

variables.tf

variable "hostname" {
    type = string
}
variable "node" {
    type = string
}
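
Optionally, an additional outputs.tf can expose useful information after the deployment, for example the IP addresses reported by the QEMU guest agent. This is just a sketch based on the attributes the BPG provider exposes and it only returns data once the guest agent is running inside the VM:

output "vm_ipv4_addresses" {
    value = proxmox_virtual_environment_vm.vm_object.ipv4_addresses
}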

cloud-init/user-data.yml

#cloud-config
package_upgrade: true
packages:
   - qemu-guest-agent

timezone: Europe/Berlin
users:
   - name: gyptazy
     passwd: $6$F2NA9T240RZLC7cm$9JcyNJ6vHw0FidyCH7YlDGVE3.4y9K/KtWoCIcP1zb7Kdox/cPWwudNybjX.km11mmd2El5jDKTe0eLxI7s5l.
     lock-passwd: false
     sudo: ALL=(ALL) NOPASSWD:ALL
     shell: /bin/sh
     ssh_authorized_keys:
        - ssh-ed25519 AAAAC3NzaF1D4NTE5AAAAIJe45xj6fsadGHesfdadsJFGZxasdfads/gjVmr gyptazy

power_state:
   delay: now
   mode: reboot
   message: Rebooting after cloud-init completion
   condition: true

Note: The generated password is admin123 and should be replaced! You can also generate your own hash with: mkpasswd --method=SHA-512 --rounds=4096
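
If cloud-init is installed on the management system, the user-data file can additionally be sanity-checked before the first deployment (depending on the cloud-init version, the subcommand is schema or devel schema):

cloud-init schema --config-file cloud-init/user-data.yml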

After setting up the configuration, the Terraform BPG Proxmox provider can simply be installed by running the init command. Afterwards, we can directly run the plan command to review the planned changes to the infrastructure:

terraform init
terraform plan
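
Since the hostname and node variables have no default values, terraform plan will prompt for them interactively; they can also be passed on the command line in the same way as for apply:

terraform plan -var="hostname=managed-vm33.boxybsd.com" -var="node=virt01"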

If the created plan looks good to you, an initial apply can be initiated by simply running the apply command. However, we also want to pass the hostname and the name of the node that should host the VM directly via the CLI. Therefore, we can simply add the -var option, including our key/value pairs, to the apply command:

terraform apply -var="hostname=managed-vm33.boxybsd.com" -var="node=virt01"

Automate VM Creation
In the world of DevOps, the next step is to automate and integrate all the components you’ve built. There are various tools available for this, depending on the complexity and requirements of your environment. You might choose Ansible for configuration management, a CI/CD pipeline like ArgoCD or Jenkins for continuous integration and deployment, or even a straightforward shell script for simpler tasks.

In this post, we’ll walk through a basic example using a shell script to demonstrate how automation can be implemented. While this is a simple illustration, it serves as a starting point that can be expanded upon. For instance, you could enhance the script to allow dynamic hostname assignment, IP configuration, or even switch between different operating systems.

Our focus here remains on a shell script, showing you how to tie together all the various elements of your infrastructure and giving you the flexibility to manage your environment in a more streamlined and efficient way.

#!/bin/bash

# Evaluate the best node in the cluster for the next VM placement
PROXMOX_NODE=$(/bin/proxlb -b)

# Read the last used counter and increment it for the new VM
PROXMOX_VM_ID=$(<vmcount.txt)
PROXMOX_VM_ID=$((PROXMOX_VM_ID + 1))
echo "$PROXMOX_VM_ID" > vmcount.txt

echo "Best Proxmox node for next VM placement: $PROXMOX_NODE"
echo "Next VM with ID managed-vm$PROXMOX_VM_ID.boxybsd.com will be placed on node: $PROXMOX_NODE"

# Create the new VM object with Terraform, passing hostname and target node dynamically
terraform apply -var="hostname=managed-vm$PROXMOX_VM_ID.boxybsd.com" -var="node=$PROXMOX_NODE"
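
Note that terraform apply, when called like this, still asks for an interactive confirmation. For fully unattended runs, for example from a CI/CD pipeline, the confirmation can be skipped with the -auto-approve flag:

terraform apply -auto-approve -var="hostname=managed-vm$PROXMOX_VM_ID.boxybsd.com" -var="node=$PROXMOX_NODE"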

With this very simple shell script, we finally reach our goal. In the first step, ProxLB evaluates the best node in the cluster for a new VM placement. Afterwards, we increment the counter, which gives us a dynamic hostname for the VM like managed-vmXX.boxybsd.com, where XX is replaced by a unique, dynamically generated number. Finally, Terraform is called with our variables to create the new VM object in the cluster. Of course, this can easily become more complex and can also be integrated into further CI/CD pipelines such as Jenkins or ArgoCD, but you can also use Ansible with a dynamic inventory to orchestrate everything.

Conclusion
Imagine a system where creating VMs is as simple as pressing a button—no fuss, no manual work. This solution brings that vision to life, automating the entire process within a Proxmox cluster, and even lending its magic to projects like BoxyBSD. It finds the perfect spot for each new VM, ensuring everything fits just right. Terraform steps in to bring these VMs to life, with names and locations chosen on the fly. A simple shell script is the unseen conductor, ensuring every note is played perfectly. It’s a symphony of automation, orchestrating a seamless, efficient, and dynamic creation of virtual worlds.
