Introducing Storage Management for Proxmox Nodes & Clusters with the new Ansible Module proxmox_storage

Managing Proxmox storage resources at scale has traditionally been a cumbersome task. In clustered environments where consistency, reliability, and speed are critical, manually adding or removing storage definitions on each node wastes valuable time and introduces the risk of human error. Imagine configuring NFS shares, CephFS mounts, iSCSI targets, or Proxmox Backup Server repositories across dozens or even hundreds of nodes, each in different locations, and having to repeat the same steps manually or with ad-hoc scripts. It slows down operations, disrupts automation pipelines, and often leads to inconsistencies between nodes.
Until now, there was no clean, supported, API-driven way to manage storage across Proxmox environments directly within Ansible. This is exactly the gap the new proxmox_storage module fills. Recently added to the upstream community.proxmox Ansible collection, this module introduces a structured and reliable approach to provisioning storage on single Proxmox VE nodes or entire clusters, fully aligned with infrastructure-as-code principles.
The proxmox_storage module supports adding or removing various storage types, including CephFS, NFS, CIFS, iSCSI, and Proxmox Backup Server targets, through your Ansible playbooks. Whether you are onboarding a single new node or synchronizing a storage change across your entire cluster, it is now just another task in your playbook. No more manual configuration, no risky post-deployment adjustments, and no brittle one-off scripts. Everything is fully automated via the Proxmox API.
It is not just about adding storage. The module ensures idempotent configuration, meaning your storage definitions remain consistent across all nodes and deployments even when you re-run playbooks or update your infrastructure. This allows for repeatable and predictable storage management in CI/CD-driven or large-scale enterprise environments.
Features
The proxmox_storage module provides a unified, API-driven approach to managing storage resources in both individual Proxmox VE nodes and full Proxmox clusters. It eliminates the need for repetitive manual configuration by allowing storage definitions to be created, updated, or removed directly within Ansible playbooks.
At its core, the module supports the most common storage backends used in Proxmox environments:
- CephFS
- NFS
- CIFS (SMB)
- iSCSI
- Proxmox Backup Server (PBS)
Each storage type comes with dedicated configuration options, ensuring you can supply the exact connection details, authentication parameters, and storage-specific settings required for a successful integration.
Key capabilities include:
- Cluster-wide or node-specific enablement: Define storage that is available to every node in the cluster, or limit it to specific hosts using the nodes parameter. This flexibility makes it suitable for both small deployments and large, segmented clusters.
- Idempotent storage management: By specifying state: present or state: absent, the module ensures storage configurations match the desired state every time a playbook is run. If a storage definition already matches the provided settings, no changes are made.
- Detailed configuration for each storage type:
  - CephFS: Configure monitor hosts, authentication credentials, filesystem name, paths, and keyrings.
  - CIFS: Set server address, share name, domain, SMB version, and user credentials.
  - NFS: Provide server address, export path, and optional NFS mount options.
  - iSCSI: Configure portal addresses and target names for block storage integration.
  - PBS: Define server, credentials, datastore, and fingerprint for secure backup storage.
- Content type specification: Control exactly what the storage is used for by setting content to one or more options such as VM disk images, container root directories, ISOs, templates, backups, imports, or snippets.
- Full check mode support: The module supports Ansible's check_mode, allowing you to preview changes without applying them. This is useful for validating infrastructure-as-code workflows.
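To illustrate check mode in practice, the following sketch previews a storage change without applying it. The hostname, credentials, and share details are placeholders, and the vaulted pve_api_password variable is an assumption of this example, not something the module requires:

```yaml
- name: Preview an NFS storage change without applying it
  community.proxmox.proxmox_storage:
    api_host: proxmoxhost                       # placeholder API host
    api_user: root@pam
    api_password: "{{ pve_api_password }}"      # assumed vaulted variable
    state: present
    name: net-nfsshare01
    type: nfs
    nfs_options:
      server: 10.10.10.94
      export: "/mnt/storage01/s01nfs01"
    content: ["images"]
  check_mode: true    # report what would change; make no changes
```

Running the whole play with `ansible-playbook --check` achieves the same effect for every task at once.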
With these features, proxmox_storage becomes an essential tool for keeping storage definitions consistent across environments. Whether you are adding a new backup repository to every node, mounting a shared NFS export for VM images, or removing obsolete storage definitions, the module integrates seamlessly into automated provisioning and maintenance pipelines.
Examples
To help users get started quickly, I've also included practical examples showing how to integrate the proxmox_storage module into Ansible roles. These examples demonstrate how to add and remove different kinds of storage.
Adding Storage to Proxmox
```yaml
- name: Add PBS storage to Proxmox VE Cluster
  community.proxmox.proxmox_storage:
    api_host: proxmoxhost
    api_user: root@pam
    api_password: password123
    validate_certs: false
    nodes: ["de-cgn01-virt01", "de-cgn01-virt02"]
    state: present
    name: backup-backupserver01
    type: pbs
    pbs_options:
      server: proxmox-backup-server.example.com
      username: backup@pbs
      password: password123
      datastore: backup
      fingerprint: "F3:04:D2:C1:33:B7:35:B9:88:D8:7A:24:85:21:DC:75:EE:7C:A5:2A:55:2D:99:38:6B:48:5E:CA:0D:E3:FE:66"
      export: "/mnt/storage01/b01pbs01"
    content: ["backup"]

- name: Add NFS storage to Proxmox VE Cluster
  community.proxmox.proxmox_storage:
    api_host: proxmoxhost
    api_user: root@pam
    api_password: password123
    validate_certs: false
    nodes: ["de-cgn01-virt01", "de-cgn01-virt02"]
    state: present
    name: net-nfsshare01
    type: nfs
    nfs_options:
      server: 10.10.10.94
      export: "/mnt/storage01/s01nfs01"
    content: ["rootdir", "images"]

- name: Add iSCSI storage to Proxmox VE Cluster
  community.proxmox.proxmox_storage:
    api_host: proxmoxhost
    api_user: root@pam
    api_password: password123
    validate_certs: false
    nodes: ["de-cgn01-virt01", "de-cgn01-virt02", "de-cgn01-virt03"]
    state: present
    type: iscsi
    name: net-iscsi01
    iscsi_options:
      portal: 10.10.10.94
      target: "iqn.2005-10.org.freenas.ctl:s01-isci01"
    content: ["rootdir", "images"]
```
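The tasks above cover PBS, NFS, and iSCSI. A CIFS share follows the same pattern; note that the cifs_options block below, and the parameter names inside it, are assumed by analogy with the other backends, so verify the exact spelling against the module documentation before use:

```yaml
- name: Add CIFS storage to Proxmox VE Cluster (illustrative sketch)
  community.proxmox.proxmox_storage:
    api_host: proxmoxhost
    api_user: root@pam
    api_password: password123
    validate_certs: false
    state: present
    name: net-smbshare01
    type: cifs
    cifs_options:          # option block and names assumed by analogy with nfs_options/pbs_options
      server: 10.10.10.95
      share: vmdata
      username: pveuser
      password: password123
      domain: EXAMPLE
    content: ["iso", "backup"]
```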
Removing storage from a PVE node or cluster:
```yaml
- name: Remove storage from Proxmox VE Cluster
  community.proxmox.proxmox_storage:
    api_host: proxmoxhost
    api_user: root@pam
    api_password: password123
    validate_certs: false
    state: absent
    name: net-nfsshare01
    type: nfs
```
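The examples use a literal api_password for brevity; in a real repository you would keep credentials out of version control. One common pattern, sketched here under the assumption of a vault-encrypted variables file defining pve_api_password, looks like this:

```yaml
# group_vars/proxmox/vault.yml (encrypted with `ansible-vault encrypt`)
# defines: pve_api_password

- name: Remove storage using a vaulted password
  community.proxmox.proxmox_storage:
    api_host: proxmoxhost
    api_user: root@pam
    api_password: "{{ pve_api_password }}"   # resolved from the vaulted vars file
    validate_certs: false
    state: absent
    name: net-nfsshare01
    type: nfs
```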
Conclusion
The introduction of the proxmox_storage module significantly expands the range of Proxmox VE operations that can be managed through Ansible. By adding native support for provisioning and removing storage across clusters or individual nodes, Ansible becomes an even more powerful orchestrator for Proxmox environments. Tasks that once required manual interaction, custom scripts, or repeated API calls are now standardized, idempotent, and fully integrated into infrastructure-as-code workflows.
This addition demonstrates one of Ansible’s greatest strengths: its ability to grow through community-driven development. Just like the proxmox_node module solved a long-standing licensing challenge, proxmox_storage fills another important automation gap. It makes large-scale Proxmox management more efficient, consistent, and reliable. Together, these modules bring the same level of control and repeatability to storage management that Ansible users have come to expect from other parts of their automation stack.
It also underlines the real power of open-source solutions. The code is available to everyone, contributions flow back upstream, and new functionality becomes available to the whole community, not just a single organization. This kind of collaboration enables sysadmins, DevOps engineers, and infrastructure teams to recreate, adapt, and build upon proven solutions, whether they are working with a small lab or a globally distributed cluster of hundreds of nodes. In practice, this means fewer manual steps, reduced configuration drift, and a faster path from concept to production. It also ensures that automation workflows remain transparent, auditable, and maintainable over time. When modules like proxmox_storage and proxmox_node are combined with other building blocks from the community.proxmox collection, Ansible can handle everything from initial provisioning and licensing to storage setup, networking, and ongoing lifecycle management, all from a single, version-controlled source of truth.
In short, this is more than just a new module. It is a step forward in making Proxmox management truly hands-off, repeatable, and enterprise-ready while staying true to the principles of open source: shared knowledge, collaborative improvement, and the freedom to adapt automation to your exact needs. And if you're interested in what this can look like in practice, you can find my video demonstration of a fully automated Proxmox VE cluster here.