In today's globalized IT landscape the term "cloud" dominates conversations about infrastructure, applications, and deployment strategies. Public cloud providers promise scalability, flexibility, and resilience, yet many organizations still operate their own infrastructure for reasons of control, cost, and compliance.
In these environments, FreeBSD continues to play an important role as a robust, secure, and versatile operating system. One of the powerful tools in the FreeBSD ecosystem is CBSD (github). CBSD acts as a management layer that simplifies the handling of FreeBSD jails, bhyve virtual machines, and other system resources. Instead of manually working through complex configuration steps, administrators can rely on CBSD’s unified command-line and TUI interfaces to create, configure, and maintain VMs and containers with ease. In particular, CBSD makes bhyve—the native FreeBSD hypervisor—far more accessible, allowing administrators to spin up virtual machines quickly and efficiently.
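To give a feel for this unified interface: listing the jails and bhyve VMs known to a host is a single command each (shown purely for illustration):

# List jails
cbsd jls
# List bhyve VMs
cbsd bls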
While virtualization is an essential building block, networking remains equally critical. In an age where businesses and applications often span multiple data centers and geographical locations, the need for seamless connectivity grows. Traditional layer-3 routing is not always sufficient, especially when workloads must operate as though they are on the same local subnet despite being physically distributed across continents.
This is where VXLAN (Virtual Extensible LAN) technology comes into play. By creating VTEPs (VXLAN Tunnel Endpoints) on participating hosts, administrators can establish stretched layer-2 networks that extend across long distances. Such VXLAN-based overlays make it possible to connect bhyve VMs—even when they run in different data centers or countries—as if they were sitting in the same rack.
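To illustrate what such a VTEP looks like on FreeBSD, a point-to-point VXLAN interface can also be created by hand via ifconfig(8); this is only a sketch with placeholder addresses, since CBSD will automate exactly this step for us later:

# Manual illustration only -- placeholder addresses, CBSD handles this automatically below
ifconfig vxlan0 create vxlanid 100 vxlanlocal 192.0.2.1 vxlanremote 192.0.2.2
ifconfig vxlan0 inet 10.0.0.1/30 up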
Combining CBSD's streamlined VM lifecycle management with VXLAN-based networking opens the door to building flexible, distributed infrastructures without the complexity often associated with large-scale systems. Whether you are hosting applications in Europe and Asia or simply connecting two regional sites, this approach provides a foundation for reliable, scalable, and geographically stretched virtualized environments.
This blog post introduces CBSD on FreeBSD, explains its role in managing bhyve-based VMs, and highlights how VXLAN technology helps create resilient, long-distance layer-2 networks. Together, these tools enable administrators to build modern, cloud-like infrastructures while retaining the stability and control of FreeBSD.
Btw, hopefully this post makes some of my friends in the BSD community happy to see me back there ;)
The Issue & the Goal
Running virtual machines inside a single data center is usually straightforward. They share the same local network, which makes communication easy and predictable. Problems start to appear as soon as workloads are spread across different sites or even across countries. At that point networking quickly turns into one of the hardest parts of the design. In a distributed setup virtual machines often end up in different subnets. That means traffic has to be routed between sites, which adds complexity and sometimes forces the use of NAT. Applications that expect to run on the same LAN suddenly behave differently or stop working altogether.
To connect sites administrators usually rely on site-to-site VPNs such as IPsec, WireGuard, or OpenVPN. These work well for encrypted connections at the IP layer, but they do not provide a shared layer two domain. Protocols that need broadcast, multicast, or direct adjacency across an Ethernet segment are not supported in this model. As a result, clustering software, distributed storage, and other systems that depend on direct neighbor discovery often run into limitations.
There is also the issue of performance. VPNs usually add latency because all traffic is encapsulated and routed through tunnels. With every additional site the amount of configuration grows, since new tunnels and routing policies must be set up and maintained. This operational overhead can make scaling across several locations difficult.
The goal in such a scenario is to make workloads behave as if they were still on the same local network, no matter where they are physically located. Virtual machines in different data centers should be able to communicate directly, exchange ARP requests, use broadcast, and rely on the same protocols they would use in a single site. What administrators want is a stretched layer two network that connects all locations into one logical LAN.
To achieve this, the network must provide an overlay that hides the complexity of the underlying transport. VXLAN technology is well suited for this purpose. By creating virtual tunnel endpoints on each host, VXLAN allows administrators to extend layer two networks across IP links. With this approach, a machine in Europe and another in Asia can operate in the same broadcast domain. From the perspective of the workloads, distance no longer matters (at least as long as the application is not latency-critical).
Overview
All hypervisor host nodes are running FreeBSD 14.3 with cbsd (14.3.0) from pkg. The nodes are hosted in different locations across the world and do not have a direct link to each other.
o Host: cbsd01
Loc.: Ukraine
IP: 2a01:aaa:aaa:aaa::1
o Host: cbsd02
Loc.: Germany
IP: 2a13:bbb:bbb::2
o Host: cbsd03
Loc.: Australia
IP: 2600:ccc:ccc::3
Note: We can use IPv4 and/or IPv6 for the uplink network. This also makes it possible to run the workloads on IPv6-only systems.
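Before starting the installation it is worth confirming that the nodes can actually reach each other over their uplink addresses, for example from cbsd01:

# Quick reachability check over the IPv6 uplink (run on cbsd01)
ping6 -c 3 2a13:bbb:bbb::2
ping6 -c 3 2600:ccc:ccc::3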
Installation
On all participating nodes we install the required packages via pkg, create a ZFS dataset for the CBSD workdir, and proceed with the basic cbsd initialization. This can simply be done by executing the following commands:
pkg install sudo libssh2 rsync sqlite3 git cbsd tmux
/sbin/zfs create -o mountpoint=/usr/jails -o atime=off zroot/jails
env workdir="/usr/jails" /usr/local/cbsd/sudoexec/initenv
echo 'vmm_load="YES"' >> /boot/loader.conf
passwd cbsd
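The vmm_load entry in loader.conf only takes effect at the next boot. If you want to use bhyve right away, you can load the kernel module manually and verify that it is present:

# Load the bhyve kernel module now instead of waiting for a reboot
kldload vmm
kldstat | grep vmm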
Configuration
Creating a stretched layer 2 network across different locations requires us to set up VXLANs and VXLAN Tunnel Endpoints (VTEPs) between the nodes. CBSD creates these automatically under the hood when nodes are added, but the node registration needs to be executed on each node (you can skip the local host itself; it does not need to be added). CBSD will use ssh to create peer-to-peer networks:
cbsd node mode=add node=2a01:aaa:aaa:aaa::1
cbsd node mode=add node=2a13:bbb:bbb::2
cbsd node mode=add node=2600:ccc:ccc::3
During the process, we need to enter the password we initially set for the cbsd user. The authentication is then automatically switched to ssh public-key authentication and the password for the user is deactivated.
Enter password of cbsd user on 2a01:aaa:aaa:aaa::1:
Connecting to 2a01:aaa:aaa:aaa::1 via IPv6...
2a01:aaa:aaa:aaa::1 has nodename: cbsd01
Update ~cbsd/.ssh/authorized_keys...cat: /usr/jails/tmp/node-log.17178: No such file or directory
>>> !Warning! Please note! !Warning!
Having a ~cbsd/.ssh/authorized_keys file on cbsd01, anyone who gets private key (~cbsd/.ssh/id_rsa) will have the access from cbsd user!
In a multi-node configuration your commands can be run on remote servers.
Be careful with [xjb]remove-like command.
<<< !Warning! Please note! !Warning!
CBSD: Fetching inventory done: cbsd01
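As a quick sanity check (plain ssh, nothing CBSD-specific) we can confirm from one of the other nodes that the cbsd user now reaches the remote host without a password; something along these lines should print the remote hostname:

# Should return the remote hostname without asking for a password
sudo -u cbsd ssh cbsd@2a01:aaa:aaa:aaa::1 uname -n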
After establishing the connections between the nodes, we can move on to creating the VPC. Nodes can be attached to the VPC as a comma separated list, but you can also add all present nodes to the VPC by using the keyword all. We also need to define a VPC peer network range from which each node gets an address assigned. This must be executed on all nodes that should be part of the VPC:
cbsd vpc mode=init node_member=all peer_network=10.100.0.0/24 vpc_name=cluster01
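If only a subset of the nodes should participate, node_member also accepts an explicit comma separated list instead of the all keyword; assuming the node names registered above are accepted here, that would look like this:

cbsd vpc mode=init node_member=cbsd01,cbsd02,cbsd03 peer_network=10.100.0.0/24 vpc_name=cluster01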
After defining the basic VPC configuration, we need to initialize the VTEPs (tunnel endpoint addresses). This must only be done on one of the nodes (usually the first one) and then synced to the other nodes within the VPC:
cbsd vpc vpc_name=cluster01 mode=init_peers
cbsd vpc vpc_name=cluster01 mode=sync
This finally initializes the peers and provides us with an overview of the peering addresses for the VPC and its peer network:
nodes: cbsd03 cbsd01 cbsd02 cbsd03
peer_network: 10.100.0.0/24
cbsd03 -> cbsd01 , ep: 10.100.0.1/30 (VX ID: 1)
cbsd01 -> cbsd03, ep: 10.100.0.2/30 (VX ID: 1)
cbsd03 -> cbsd02 , ep: 10.100.0.5/30 (VX ID: 2)
cbsd02 -> cbsd03, ep: 10.100.0.6/30 (VX ID: 2)
cbsd01 -> cbsd02 , ep: 10.100.0.9/30 (VX ID: 3)
cbsd02 -> cbsd01, ep: 10.100.0.10/30 (VX ID: 3)
In the next step, the VXLAN interfaces need to be created; this must be executed on all nodes:
cbsd vpc vpc_name=cluster01 mode=init_vxlan
We can validate this afterwards by running ifconfig on any node, where we will find a VXLAN interface for each peer connection:
vxlan3: flags=1008843 metric 0 mtu 1450
description: tunnel-to-cbsd01
options=80020
ether 58:9c:fc:10:ff:a9
inet 10.100.0.10 netmask 0xfffffffc broadcast 10.100.0.11
groups: vxlan
vxlan vni 3 local 2a13:bbb:bbb::2:4789 remote 2a01:aaa:aaa:aaa::1:4789
media: Ethernet autoselect (autoselect )
status: active
nd6 options=29
At this point, we finally have the VTEPs including our VXLAN up, and we are able to reach all endpoints on their local address. We can simply use any host and ping any other host within the VPC network (a quick check follows below). While this already works, we still need to create a bridge interface to allow jails and bhyve VMs to communicate over this network. This bridge interface must be created on all nodes in the VPC:
cbsd vpc vpc_name=cluster01 mode=init_bridge
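Before moving on we can double-check the overlay itself by pinging the tunnel endpoint address of a remote peer. Using the addresses from the peering overview above, cbsd02 should now reach cbsd01 like this:

# Run on cbsd02: 10.100.0.9 is cbsd01's endpoint towards cbsd02 (VX ID 3)
ping -c 3 10.100.0.9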
The infrastructure related part is now finished and we can move over to the guests to make them use these resources.
VMs / Guests
Of course, we need VMs or Jails that can utilize the stretched Layer-2 network. If you don’t have any instances running yet, you can easily create them—whether they are Jails or full VMs on bhyve. For most cases, the simplest way to create these instances is through the TUI manager, which you can start by running:
# Jail
cbsd bsdconfig-tui
# bhyve VM
cbsd bconstruct-tui
In this TUI tool, you can configure various options such as CPU, memory, disk size, and other parameters for your Jail or VM. One of the quickest ways to get started is by using a preset, which automatically sets up the selected FreeBSD resources, mounts the ISO, and more. A crucial step here is the networking section where you need to make sure to select the newly created vpc-cluster01 network, as shown in the image below:

Note: For Jails you also need to define them as VNET jails.
We can now create VMs on different hypervisor nodes across multiple locations. The only requirement is that they belong to the vpc-cluster01 network. To get an overview of all instances on any node, run cbsd bls display=nodename,jname,vm_os_type,status, which will show a summary like this:
# cbsd bls display=nodename,jname,vm_os_type,status
NODENAME JNAME VM_OS_TYPE STATUS
cbsd01 test01 freebsd On
cbsd02 test02 freebsd On
cbsd03 test03 freebsd On
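The usual CBSD lifecycle commands apply to these instances; on the node that hosts a VM it can, for example, be started or stopped by name:

cbsd bstart jname=test01
cbsd bstop jname=test01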
Now that all VMs are connected to a stretched Layer-2 network regardless of their physical location, we can assign any IP addresses to them as long as they are within the same subnet. This ensures they can communicate with each other effortlessly. You can do this simply by setting an IP address on each VM:
VM: test01
ifconfig vtnet0 192.168.0.1/24
VM: test02
ifconfig vtnet0 192.168.0.2/24
VM: test03
ifconfig vtnet0 192.168.0.3/24
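These addresses are set on the fly and will not survive a reboot; inside FreeBSD guests we would typically also persist them in rc.conf (sketch for test01, adjust per VM) and can then verify connectivity with a plain ping:

# Inside test01: persist the address across reboots
sysrc ifconfig_vtnet0="inet 192.168.0.1/24"
# Inside test01: the other VMs should answer over the stretched layer-2 network
ping -c 3 192.168.0.2
ping -c 3 192.168.0.3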
The VMs can now communicate seamlessly over the stretched layer-2 network, even though they are located thousands of kilometers (or miles ;)) apart. Even though this guide might look a bit more complex than usual, it shows the strengths and possibilities of FreeBSD, bhyve virtualization, and networking, which can make even challenging and complex setups possible. This guide is heavily based on the upstream cbsd vpc guide with some further modifications and additions. I just came back to this while creating a PoC for an upcoming change at my BoxyBSD project.