How My BoxyBSD Project Boosted the Proxmox Ecosystem

When I first started BoxyBSD, I had a fairly straightforward goal in mind: build a completely free VPS hosting platform with full IPv6 support, aimed at beginners and small open-source projects. Something simple, lightweight, and accessible. But as the project evolved, I realized it was becoming much more than just a small personal project, and BoxyBSD started giving back – not only to open source in general but also to the Proxmox community – in ways I hadn’t anticipated.
What surprised me the most was how deep I had to dive into architectural decisions that I initially thought wouldn’t matter that much. It even changed my whole initial idea of running everything on FreeBSD with bhyve – and let me say, things turned out completely differently! Managing resources efficiently across multiple VMs, fully automated deployment, monitoring of the hosts and all guests, clustering across different locations, and live migration of guests – all of it pushed me to rethink some common practices and, in the process, I ended up writing my own tooling to solve specific challenges.
Most of these tools are now potentially useful far beyond BoxyBSD. They’re lightweight, focused, and designed specifically for Proxmox environments, where I found a surprising lack of beginner-friendly automation utilities. They cover better templating strategies for VMs, hypervisor deployment, resource scheduling of guests as you might already know it from VMware’s DRS – all without over-complication – and a few wrappers around the Proxmox API to make orchestration easier and more reliable.
Before we dive into the nitty-gritty of those tools and the design patterns I explored, let me first introduce what BoxyBSD really is and how this little idea turned into something bigger. Therefore, we’ll cover the following things:
BoxyBSD Project
About BoxyBSD

BoxyBSD is something I created out of a simple idea: what if more people had access to the tools and freedom to learn about BSD-based systems without the usual walls of cost and complexity? Would this help to boost the awareness of BSD in general? A completely free VPS could be a way… What started as a small, personal project has grown into a non-profit initiative with a clear mission to support BSD-based systems and the open-source communities through free, accessible technology services.
At its core, BoxyBSD is a hosting provider – or ISP, or SaaS, or simply something in between. But it’s not just about servers or uptime. It’s about opportunity! I offer free virtual machines (VPS), email accounts, and web hosting so that anyone, from curious students to seasoned sysadmins, can explore system administration, networking, and security in a real, hands-on environment. Even smaller open-source projects are hosted at BoxyBSD – serving sites and being used as CI/CD runners and build hosts. And it’s really simple: no paywalls, no credit cards, no personal data and, of course, no nonsense!
Everything I do is built around the idea of empowerment. Whether someone is learning how to configure a firewall on OpenBSD, building a personal website on FreeBSD, learning Jails or just experimenting with different daemons and tools, BoxyBSD exists to support that journey. I’ve invested in reliable, modern infrastructure to make sure users get a smooth experience, but even more important is the community we’re trying to foster: one that values openness, sharing, and curiosity. And why do I say we? While BoxyBSD is still completely a one-man show, there are many people out there sharing the same mindset, pushing the awareness of BSD or helping out here – and I’m dedicating a whole chapter to them!
This project is deeply rooted in open-source principles: I firmly believe that access to knowledge and tools should be a right, not a privilege, and that people learn best when they can get their hands dirty and try things for themselves. BoxyBSD is here to make that possible, without asking for anything in return.
History
BoxyBSD didn’t start with a grand plan or a big infrastructure, but with a single node and a simple idea: offer on-demand FreeBSD jail instances to people who just needed a quick shell for debugging or testing. There was a barebones web interface, users dropped in their SSH key, and bam – a short-lived FreeBSD jail was spun up for them with root access and its own dedicated IPv6 address. It was a fun little experiment, and people really seemed to love it. There isn’t much left over from this early phase, but you can still find a short video demonstration on YouTube.
The problem? It was too easy to use and, unfortunately, also too easy to abuse! Before long, I had to pull the plug. The original model just wasn’t sustainable. It was frustrating, but also a learning experience. It made me step back and think about how to do this the right way.
A couple of months later, BoxyBSD came back with a new approach: full-blown virtual machines! This was a game changer. Instead of ephemeral jails, users could now request persistent VPS instances running the BSD OS of their choice, such as FreeBSD, OpenBSD, or NetBSD. The switch to real VMs meant users could tinker with the kernel, set up full disk encryption with GELI, mess around in the bootloader and basically do anything they’d be able to do on real hardware.
It all started again on a single host, powered by FreeBSD and bhyve. And just like before, it didn’t take long for demand to outgrow the server. That’s when the idea of running a proper cluster came up. I wanted better flexibility, easier node management, and most importantly, I didn’t want users to have to shut down their machines just because I needed to reboot a host to apply security updates. In 2024 and 2025, that’s just not acceptable. If you’re interested in the details, you can find my blog post about FreeBSD, bhyve and the current state of live migrations right here.
That’s also where I hit the limits of bhyve. It’s solid in many ways, but the lack of features like live migration was a dealbreaker. I started testing other platforms like Proxmox and XCP-ng, and for a while the infrastructure was a messy mix of all three. But after lots of testing and comparing, I settled on Proxmox. It checked nearly all the boxes: live migration, clustering, snapshotting, and decent API access for proper automation. Sure, it still has its quirks and missing features, but more on that in the open-source contributions chapter.
From there, things started moving fast. Way faster than I expected. Today, BoxyBSD has provisioned over 600 VPS instances across more than 7 physical nodes in 7+ locations around the globe. It’s surreal to think about how far it has come.
And it’s not just about VPS anymore. BoxyBSD is heavily IPv6-focused and provides a bunch of additional services: NAT64 and DNS64 gateways, shared IPv6 load balancing for websites, and even a beta IPv6 tunnel broker which lets users get static IPv6 subnets at home through GRE, SIT, OpenVPN, or WireGuard tunnels. All for free, of course.
Looking back, I never imagined this little project would become something global. But here we are, and it’s only just getting started. What’s important to me is keeping this service stable and usable – something you can already see in my design decisions and in creating ProxLB to ensure boxes can be moved around without any impact on the user side. And this is what makes it important: quality over quantity! It’s great to see posts where people say that this free service is even more stable and reliable than some other paid hosters:
From Matrix:
My free BoxyBSD VM is more reliable than my paid one! Not a single outage in a year @BoxyBSD. At **** I already had 3 this month.
If you’re interested, you can find my video interview about BoxyBSD at the BSD Cafe (and my conference slides).
Help & Sponsors
As I mentioned before, BoxyBSD started from humble beginnings. Just a single node, self-funded, running in my spare time. Every server, every piece of hardware, every bit of bandwidth – it all came out of my own pocket. I didn’t mind; this project was a passion, something I believed in. But as interest grew and more users came on board, I quickly realized that scaling BoxyBSD on my own was going to be a challenge. At that time I also considered supporting additional hardware architectures like arm64 and rv64 (RISC-V).
To get the word out, I started sharing updates and progress on social media like Twitter (well, X now), LinkedIn, and the Fediverse. I also gave a few talks in the BSD community, especially at BSD Pub meetings and the BSD Cafe, which really helped get the project in front of the right eyes. Just to mention, the BoxyBSD project also has its own account in the BSD Cafe. That turned out to be a turning point.
Some awesome people and companies in the ISP and hosting space noticed what BoxyBSD was doing and reached out to me. They liked the concept, saw the value it could bring to the broader open-source and BSD ecosystems, and thankfully wanted to support it.
I’m incredibly grateful to sponsors like ServerManagementPanel, MacArne, and Nerdscave-Hosting. These folks didn’t just offer kind words – they offered real help: new locations, additional compute capacity, and network bandwidth. That kind of support made it possible to expand BoxyBSD far beyond what I could have done alone. Without them, this project wouldn’t have grown this big, and definitely not this fast. Also a big thanks to my friends Arne, Moritz and Stefano for the ongoing conversations about ideas and implementations, and for their help.
Their generosity didn’t just help me; it helped everyone in the BSD community who’s now able to get access to resources they might not have had otherwise. That’s what makes this even more rewarding. It’s not just about running servers. It’s about enabling learning, experimentation, and community building.
BoxyBSD might have started as a one-person effort, but it’s no longer a solo journey. Thanks to the help and sponsorship from these great partners, we’ve turned this into something that can truly make a difference. However, creating all the tooling around it was, and unfortunately still is, a one-man show, and I’d be happy to get some more help. Let me show you what I’ve crafted during this project and made publicly available for the open-source community so that everyone can benefit from it!
Open-Source Contributions
ProxLB – The DRS-like Load Balancer for Proxmox

Out of all the projects that have been created and released from my work on BoxyBSD, ProxLB (ProxLB GitHub project) is probably the most well-known and interesting one. It started as a necessity within BoxyBSD: I needed something similar to VMware’s DRS. I was looking for a smart way to automatically place virtual machines on nodes with the most free resources. Ideally, it would also use live migrations and continuously rebalance the cluster to make sure resources were being used efficiently.
So, I built exactly that.
Just a few weeks after I got the first version working, the whole Broadcom–VMware situation exploded. Suddenly, many of my business customers were urgently asking for alternatives to VMware in general, and most of them were looking closely at Proxmox. The catch? Even by 2025, Proxmox doesn’t include anything close to a dynamic resource scheduler or DRS-like functionality.
I had already built one in 2024!
I polished up the project, extended its capabilities, and made it open-source in 2024. I named it ProxLB – short for Proxmox Load Balancer. It acts as a smart, cluster-aware load balancing layer on top of Proxmox, leveraging the full power of the Proxmox API to dynamically move guests based on real-time CPU usage, memory consumption, and even local disk usage.
But ProxLB doesn’t just stop at basic resource distribution. It also supports affinity and anti-affinity rules, as well as seamless host evacuations (ideal for patching or maintenance, all without any disruption to the guests).
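To give a feel for what this means in practice, here is a minimal sketch of the kind of placement and evacuation logic ProxLB automates. This is not ProxLB’s actual code – just a simplified illustration against the Proxmox API using the proxmoxer Python library, with placeholder hostnames and credentials.

# Not ProxLB itself - only a minimal sketch of the kind of decision
# it automates, built on the proxmoxer library and the Proxmox API.
# Hostname and credentials below are placeholders.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("pve01.example.com", user="root@pam",
                     password="secret", verify_ssl=False)

def best_node(exclude=None):
    """Return the online node with the most free memory."""
    candidates = [n for n in proxmox.nodes.get()
                  if n["status"] == "online" and n["node"] != exclude]
    return max(candidates, key=lambda n: n["maxmem"] - n["mem"])["node"]

def evacuate(node):
    """Live-migrate all running VMs away from a node, e.g. before patching."""
    for vm in proxmox.nodes(node).qemu.get():
        if vm["status"] != "running":
            continue
        target = best_node(exclude=node)
        proxmox.nodes(node).qemu(vm["vmid"]).migrate.post(target=target, online=1)

print("Best-fit node right now:", best_node())

ProxLB wraps this idea in a cluster-aware service with affinity/anti-affinity rules and continuous rebalancing, rather than a one-shot script like the sketch above.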
The project took off incredibly fast, and I received many warm words from the community, which I really appreciate:
From the Fediverse:
@gyptazy they should hire you and integrate this into proxmox
Today, ProxLB is widely considered the go-to solution when it comes to intelligent load balancing in Proxmox environments, and my company (credativ GmbH) has also started supporting the project (if you’re looking for Proxmox support, feel free to reach out to us). What began as a tool I built for my own cluster management needs has grown into a key component for many Proxmox users looking to get serious about automated VM scheduling and high availability.
And just by the way, if this sounds interesting to you, you have the opportunity to join the talk “The Art of Balance – Dynamic Load Management in Proxmox with ProxLB” at the Dutch Proxmox Day 2025 (in the Netherlands), which is hosted by the Tuxis Team – thanks a lot for that!
Ansible Module proxmox_cluster for Creating Proxmox Clusters

When working towards full automation of infrastructure provisioning including ephemeral development and testing clusters, I encountered a notable gap in Ansible’s ecosystem: there wasn’t any clean and native way to create and manage Proxmox clusters. Existing solutions often relied heavily on invoking raw shell commands through the shell or command modules, a practice that not only clutters playbooks but also sidesteps one of Ansible’s core design principles: using well-defined modules over ad-hoc scripts.
Beyond the messiness, using shell commands to interface with Proxmox is also a security concern. Sensitive operations like cluster initialization and node joining should ideally occur through authenticated, structured API interactions and not loosely-escaped command-line strings.
To address this, I developed a new Ansible module: proxmox_cluster. This module enables both initializing a new Proxmox cluster from a designated node and having additional nodes join the cluster – all through Proxmox’s official API. This approach provides a much cleaner and more secure alternative to previous methods. And the usage is as easy as this:
You can simply create a cluster by running this task:
- name: Create a Proxmox VE Cluster
  community.general.proxmox_cluster:
    action: create_cluster
    api_host: "{{ primary_node }}"
    api_user: root@pam
    api_password: password123
    api_ssl_verify: false
    link0: 10.10.1.1
    cluster_name: "devcluster"
Just as easily as creating a cluster, you can join one:
- name: Join a Proxmox VE Cluster
  community.general.proxmox_cluster:
    action: join_cluster
    api_host: "{{ secondary_node }}"
    api_user: root@pam
    api_password: password123
    master_ip: "{{ primary_node }}"
    fingerprint: "{{ cluster_fingerprint }}"
    cluster_name: "devcluster"
The proxmox_cluster module is now included in the upstream Ansible Community Proxmox repository (thanks to Felix for reviewing) and will be part of release 1.1.0 – making it easily accessible and installable via Ansible Galaxy. This not only simplifies the process of incorporating it into your automation workflows but also encourages wider adoption and collaboration within the community.
Proxmox Cloud Image
Cloud images are typically minimal operating system snapshots designed for rapid deployment in virtualized environments. Most of them support tools like cloud-init, which help automate initial configuration like networking, SSH keys, and hostname. They’re ideal for quickly spinning up test environments or scaling infrastructure on demand.
While cloud images exist for almost every common Linux distribution, Proxmox has never been available in this form. I decided to change that by creating my own Proxmox cloud image with cloud-init support. This image offers a fast and easy way to bring up Proxmox-based test and development clusters, even inside nested virtualization setups, which is great for lab environments or CI/CD pipelines.
That said, getting Proxmox to behave like a proper cloud image isn’t entirely plug-and-play. After launching the instance, some additional configuration is required. Especially for Corosync to work properly in an environment with dynamically assigned addresses, it requires adjustments to the /etc/hosts file and a few other key points. This extra setup step is probably why no one has bothered to release a Proxmox cloud image until now. But with a few tweaks, it works surprisingly well.
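As an illustration of the kind of adjustment mentioned above, the following sketch rewrites the hostname entry in /etc/hosts so that it resolves to the dynamically assigned address Corosync expects. It is only an assumption of how such a fix-up could look – not necessarily what the image itself ships – and the domain suffix is a placeholder.

# Illustrative post-boot fix-up, not necessarily what the image does.
# Corosync expects the node's hostname to resolve to a routable address,
# so we replace any stale hostname entry (e.g. the 127.0.1.1 line that
# Debian-style images usually ship) with the current primary address.
import socket
import subprocess

hostname = socket.gethostname()
# One simple way to discover the primary address on a Debian-based guest;
# adjust this to your own network setup.
ip = subprocess.check_output(["hostname", "-I"], text=True).split()[0]

with open("/etc/hosts") as f:
    lines = [line for line in f if hostname not in line.split()]

# "example.local" is a placeholder domain for this sketch.
lines.append(f"{ip} {hostname}.example.local {hostname}\n")

with open("/etc/hosts", "w") as f:
    f.writelines(lines)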
Terraform Provisioning of FreeBSD VMs

One of the key components in our infrastructure automation is providing a way to deploy FreeBSD virtual machines automatically in a Proxmox cluster. FreeBSD isn’t always the most straightforward OS to automate in Proxmox, especially when compared to mainstream Linux distributions. But by combining the right tools, it becomes not only possible, but efficient and scalable. I’ve built a workflow around Proxmox, Terraform, the BPG Terraform provider, and ProxLB that selects the optimal node for VM placement based on real-time cluster metrics. This stack gives us a clean, reusable way to roll out FreeBSD VMs without manual intervention.
The process starts with Terraform and the BPG provider for Proxmox. This lets us define infrastructure as code, specifying everything from VM resources to disk layout. But rather than hardcoding the target node, we let ProxLB handle that. ProxLB is a lightweight service that analyzes the current Proxmox cluster load and gives us back the best-fit node for a new VM. We query it in real time just before provisioning. This ensures that our workloads are spread evenly and intelligently across the cluster, which becomes increasingly valuable as the number of VMs scales up.
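To illustrate the flow, here is a rough sketch of how such a real-time query could be glued to a Terraform run. The ProxLB command name, flags, and JSON output used below are assumptions for illustration only – check the ProxLB documentation for the actual interface – and handing the node over via a tfvars file to the BPG provider is just one possible wiring.

# Sketch only: wire a node-selection query into a Terraform run.
# The "proxlb" invocation and its JSON output are hypothetical
# placeholders; consult the ProxLB docs for the real interface.
import json
import subprocess

# Ask the load balancer for the current best-fit node (placeholder CLI).
result = subprocess.run(["proxlb", "--best-node", "--json"],
                        capture_output=True, text=True, check=True)
target_node = json.loads(result.stdout)["best_node"]

# Hand the decision to Terraform as an auto-loaded variable file,
# assuming the Terraform config declares a "target_node" variable.
with open("placement.auto.tfvars.json", "w") as f:
    json.dump({"target_node": target_node}, f)

subprocess.run(["terraform", "apply", "-auto-approve"], check=True)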
Once the best node is selected, we proceed to create a VM from a pre-prepared FreeBSD cloud image. This image includes cloud-init support so the instance can be configured automatically at boot with SSH keys, network settings, and custom metadata. Using a cloud image eliminates the need for manual OS installation and lets us stand up new FreeBSD systems in seconds. It’s clean, consistent, and fast.
To make this setup reproducible for others, I’ve published the ProxLB codebase and a step-by-step how-to. All sources are open and clearly documented so anyone can recreate this system from scratch in their own Proxmox environment. The idea is not just to automate FreeBSD deployments, but to do so in a way that’s smart, adaptable, and easy to maintain over time.
Proxmox Self-Service Portal

Despite the popularity and power of Proxmox as a virtualization platform, the ecosystem for end-customer web panels remains surprisingly underdeveloped, particularly in the open-source space. While there are a few panels scattered across the internet, many of them fall short in practice. They have outdated codebases, poor security practices, or simply don’t function reliably under real-world conditions. Users and providers alike often find themselves caught between underwhelming free options and costly commercial alternatives.
Among the few professional solutions that do exist, tools like ProxCP, MultiPortal, ConvoyPanel and similar platforms have carved out a niche. They provide the necessary interfaces for end-users to manage their VPS instances within Proxmox environments, but they come with licensing costs and often lack flexibility for developers who want to extend or customize the solution – just like WHC. Notably, there is still no robust, community-driven, free and open-source panel tailored specifically for Proxmox end-customers, which leaves a significant gap in the market.
Thanks to Proxmox’s excellent RESTful API, building such a solution is entirely feasible. Almost every core function – from creating and managing to destroying virtual machines – can be orchestrated cleanly and consistently through API calls. In essence, developing a panel becomes an exercise in wrapping the Proxmox API with a user-friendly, secure front end tailored to customer needs.
To address this gap, I began developing a new open-source customer panel written in Python using the Flask framework. The goal is to provide a clean, secure, and efficient interface that can be easily deployed by VPS providers of all sizes. One of the critical design choices I made early on was around security. Rather than connecting the panel directly to the Proxmox nodes or clusters, I implemented a queuing mechanism. This queue mediates interactions between the front end and the backend infrastructure. Only predefined and tightly scoped actions (such as creating, starting, stopping, or resetting a VPS) are allowed. These limited actions reduce the attack surface and help avoid bugs or abuse through unrestricted API access. The mechanism is already in production use at BoxyBSD, where it has proven stable for the core use cases. Though the queue interface is still under development, it’s a solid proof of concept that shows how secure orchestration can be achieved without compromising flexibility.
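To make the idea more concrete, here is a minimal Flask sketch of that queuing pattern. This is not the actual panel code: the route, the in-memory queue, and the action names are illustrative placeholders, and a separate worker holding the Proxmox credentials is assumed to consume the queue and talk to the cluster.

# Minimal sketch of the queuing pattern described above - not the real
# panel code. The front end only enqueues a small, predefined set of
# actions; a separate worker process with Proxmox credentials is assumed
# to consume the queue and perform the API calls.
import queue
import uuid

from flask import Flask, jsonify

app = Flask(__name__)
task_queue = queue.Queue()  # stand-in for a real broker such as Redis

ALLOWED_ACTIONS = {"create", "start", "stop", "reset"}

@app.post("/api/v1/vps/<int:vmid>/<action>")
def enqueue_action(vmid, action):
    # Reject anything outside the tightly scoped action set.
    if action not in ALLOWED_ACTIONS:
        return jsonify(error="action not permitted"), 403

    task = {"id": str(uuid.uuid4()), "vmid": vmid, "action": action}
    task_queue.put(task)
    return jsonify(queued=task["id"]), 202

if __name__ == "__main__":
    app.run()

Because the panel never holds cluster credentials itself, a compromised front end can at worst enqueue one of the whitelisted actions – never arbitrary Proxmox API calls.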
Currently, the focus is on polishing the codebase, improving stability, and ensuring that everything is modular and well-documented. Once this refinement phase is complete, the project will be released as a fully open-source alternative to proprietary Proxmox panels – just like ProxLB!
Education Platform for BSD & IPv6
And of course the BoxyBSD platform itself! BoxyBSD is really the center of everything, and it’s not just another VPS provider but an actual education platform built specifically for people who want to dive deep into the world of BSD-based systems. It doesn’t matter whether you’re a complete beginner or someone who’s just never had a proper playground to experiment with BSD – BoxyBSD offers exactly that space. Every box is prepped for learning, not just with FreeBSD but across the whole spectrum: OpenBSD, NetBSD, MidnightBSD, DragonflyBSD, and even OpenIndiana for those who want to explore the OpenSolaris side of things. You’re not just spinning up a server – you’re being handed a whole BSD lab environment.
What’s even cooler is how it bakes IPv6 into the experience. Every box comes with its own IPv6 subnet, giving you a chance to get hands-on with modern networking from the ground up. You’re encouraged to play with firewalls, networking stacks, Jails, or even virtual machines using tools like vmm or bhyve. It’s a self-contained ecosystem where learning by doing is the default. BoxyBSD isn’t just about running systems – it’s about truly understanding them.
Conclusion

One of the most exciting things about open source is how one project can lead to another – even bigger – project. You might start with a small idea or tool, and before you know it, it becomes the foundation for something much larger. This kind of snowball effect happens all the time, and it’s part of what makes open source so powerful and rewarding.
In open source, things don’t always fit perfectly right away. That’s okay. The important thing is that we have the freedom to adjust and shape things to match what we need. We’re not stuck waiting for someone else to fix it, we can dig in and make it work. That flexibility is a big part of why open source works so well, especially when you’re solving real-world problems.
And here’s the best part: we can share what we build. When we contribute our solutions, improvements, or even just ideas, we make life a little easier for the next person. If everyone does this, we all win. The next time we face a challenge, maybe someone else has already solved it and shared their work. It becomes a cycle of helping and being helped, learning and giving back.
Open source is not just about the software; it’s about the people who build, shape, and support it together. And that’s what keeps the whole ecosystem moving forward.