Proxmox and Windows VMs: Why Native CPU Could Backfire
Running Windows virtual machines on Proxmox Virtual Environment (PVE) has improved dramatically over the past few years. Still, Windows guests occasionally suffer from noticeable performance issues that rarely affect Linux VMs. In almost all real-world cases, these problems can be traced back to three root causes:
- Missing or outdated Windows guest drivers and tools
- Suboptimal VM configuration (for example legacy VMDK disks, incorrect storage controllers, or emulated legacy devices)
- An inappropriate CPU type or CPU feature set
While virtualization itself is no longer the bottleneck, modern CPU security mitigations introduced after the discovery of Spectre and Meltdown in 2017 still have a measurable performance impact. These vulnerabilities affect Intel, AMD, and ARM processors and exploit weaknesses in speculative execution — a core CPU optimization technique.
This is also why Windows virtual machines on Proxmox may feel sluggish when configured with the native host CPU type. In most cases, the root cause is not Windows itself, but the CPU flags Proxmox passes through to the guest.
Why Windows VMs Perform Worse on Proxmox
A detailed investigation by Reddit user Kobayashi Bairuo, published on his blog, highlights this issue with extensive benchmarks. Although the article is written in Chinese, the findings are highly relevant and can be summarized as follows.
The primary cause of poor Windows VM performance on Proxmox/KVM is the exposure of specific CPU flags. These flags can trigger additional vulnerability mitigations or force Windows to use slower execution paths, resulting in higher latency and reduced responsiveness.
A common misconception is that Windows 10 or Windows 11 performance issues are mainly caused by Hyper-V or Virtualization-Based Security (VBS). While disabling Hyper-V with bcdedit /set hypervisorlaunchtype off may provide marginal improvements, it does not resolve the underlying CPU flag problem.
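For completeness, the Hyper-V launch type is changed from an elevated command prompt inside the Windows guest, and a reboot is required for the setting to take effect:

```
:: Disable the Hyper-V hypervisor at the next boot (run as Administrator)
bcdedit /set hypervisorlaunchtype off

:: Revert to the default behavior later
bcdedit /set hypervisorlaunchtype auto
```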
The Role of CPU Flags and Security Mitigations
Certain CPU flags, most notably md_clear and flush_l1d, signal support for mitigations against side-channel attacks such as MDS (Microarchitectural Data Sampling) and L1TF (Foreshadow), rather than Meltdown itself. While these mitigations improve security, they can significantly reduce performance in Windows guests.
When using the host CPU type, Proxmox may expose these flags directly to the virtual machine. Windows reacts by enabling additional safeguards, which increases context-switch overhead and memory access latency.
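Whether this applies to your setup is easy to verify on the Proxmox node itself. The following sketch checks which of these flags the host CPU advertises and which flags end up in the QEMU command line Proxmox generates; the VM ID 100 is a placeholder:

```
# Check whether the host CPU advertises the mitigation-related flags
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -Ex 'md_clear|flush_l1d'

# Show the QEMU command Proxmox generates for a VM, including the -cpu
# argument and its flags (replace 100 with your VM ID)
qm showcmd 100 --pretty | grep -A2 -- '-cpu'
```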
How to Optimize Windows VM CPU Performance
- Use a custom CPU type: Avoid the raw host CPU setting. Instead, select a predefined CPU model or create a custom configuration that excludes performance-heavy mitigation flags.
- Remove md_clear explicitly: Although md_clear cannot be disabled via the args parameter, it can be removed in the CPU configuration (see the sketch after this list). This often results in a noticeable latency reduction.
- Choose the closest matching CPU model: Select a CPU type that closely matches your physical hardware and enable only the flags you actually need.
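As a rough sketch of what this can look like in practice: the cpu option of a VM accepts explicit flag overrides, and /etc/pve/virtual-guest/cpu-models.conf allows defining reusable custom CPU models. The model name win-lowlat below is made up for illustration, and the exact flag names and supported properties should be verified against your PVE version:

```
# Option A: keep a predefined model but override individual flags
# (the flags accepted here are restricted to a whitelist in PVE)
qm set 100 --cpu host,flags=-md-clear

# Option B: a reusable custom model in /etc/pve/virtual-guest/cpu-models.conf,
# where any QEMU flag may be added or removed:
#
#   cpu-model: win-lowlat
#       reported-model x86-64-v3
#       flags -md-clear;-flush-l1d
#
# Assign it to the VM (the custom- prefix is required):
qm set 100 --cpu custom-win-lowlat
```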
Security Considerations
Disabling CPU flags such as md_clear and flush_l1d reduces protection against known side-channel attacks. While the performance gains can be substantial, this comes at the cost of reduced security isolation.
For this reason, such optimizations are not recommended for production environments handling sensitive workloads. They are best suited for test systems, gaming VMs, lab environments, or other non-critical use cases where performance is the top priority.
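To see this trade-off from inside the guest, Windows can report which speculative-execution mitigations it has actually enabled. Microsoft's SpeculationControl module from the PowerShell Gallery provides this overview and is useful for comparing the state before and after changing CPU flags:

```
# Inside the Windows guest (elevated PowerShell): list the active
# speculative-execution mitigations
Install-Module SpeculationControl -Scope CurrentUser
Get-SpeculationControlSettings
```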
Avoiding Performance Issues with ProxCLMC
Manually tuning CPU types and filtering problematic CPU flags for Windows virtual machines is tedious and error-prone. ProxCLMC (Prox CPU Live Migration Checker) solves this problem by enforcing safe and performance-aware CPU configurations automatically.
ProxCLMC analyzes host CPU capabilities and ensures that Windows VMs use compatible and migration-safe CPU models instead of the raw host CPU. This prevents unnecessary exposure of performance-heavy mitigation flags such as md_clear and flush_l1d, which commonly cause latency issues.
By standardizing CPU models across nodes, ProxCLMC avoids performance regressions during live migrations and hardware changes while keeping VM behavior predictable.
More details are available on the official project page at gyptazy.com/proxclmc and in the GitHub repository at github.com/gyptazy/ProxCLMC.
Conclusion
Performance issues in Windows virtual machines on Proxmox are usually caused by CPU feature exposure rather than virtualization overhead itself. Linux guests are largely unaffected by these CPU flags, which explains the performance gap many users observe.
In practice, x86-64-v3 provides an excellent balance between performance and compatibility, with x86-64-v2-AES as a conservative fallback. On very recent CPUs, x86-64-v4 can be used to expose AVX-512 instructions where supported.
To identify the best CPU type for your environment, simply create a test VM and try different CPU models. Unsupported configurations will be rejected automatically when starting the virtual machine, making this process both safe and straightforward.
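A minimal sketch of that workflow on the node, again with VM ID 100 as a placeholder: first check whether the host advertises the marker flags of the desired baseline, then assign the model and start the VM:

```
# x86-64-v3 requires, among others, AVX2, BMI2, FMA, and MOVBE on the host;
# if any of these are missing, fall back to x86-64-v2-AES
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -Ex 'avx2|bmi2|fma|movbe'

# Assign the model and start the VM; an unsupported model is rejected here
qm set 100 --cpu x86-64-v3
qm start 100
```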