tlamponi | 3 months ago

Correction: In Proxmox VE we're not using virsh/libvirt at all; rather, we have our own stack for driving QEMU at a low level. Without that, our in-depth integration would not be possible in the form we have it, especially live local storage migration and our Backup Server's dirty-bitmap support (known as Changed Block Tracking in the VMware world). The same goes for our own stack for managing LXC containers.
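
To illustrate what that buys us: dirty bitmaps are driven over QMP, roughly like this in stock QEMU (node and bitmap names here are made up for the example; our actual stack layers its own backup commands on top):

    { "execute": "block-dirty-bitmap-add",
      "arguments": { "node": "drive-scsi0",
                     "name": "pbs-bitmap" } }

    { "execute": "blockdev-backup",
      "arguments": { "job-id": "backup0",
                     "device": "drive-scsi0",
                     "target": "backup-target",
                     "sync": "incremental",
                     "bitmap": "pbs-bitmap" } }

The bitmap tracks which blocks changed since the last run, so an incremental backup only has to read and transfer those.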

The web UI part is actually one of our smaller code bases relative to the whole API and lower-level backend code.

nyrikki | 3 months ago

Correct, sorry. I don't use the web UIs and was confusing it with oVirt; I forgot that you are using Perl modules to call QEMU/LXC.

I would strongly suggest more work on your NUMA/cpuset limitations. I know people have been working on it slowly, but with the rise of E and P cores you can't stick to pinning for many use cases. While I get that hyperconvergence has its costs and platforms have to choose simplicity, the kernel's cpuset interface works pretty well there and dramatically reduces latency, especially for lakehouse-style DP.
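
As a concrete sketch of what I mean, with cgroup v2 (group name and core ranges made up; a real setup also needs the CPUs to be exclusive up the hierarchy for the partition to take effect):

    # carve out a scheduling partition for a latency-critical workload
    mkdir /sys/fs/cgroup/olap
    echo "+cpuset" > /sys/fs/cgroup/cgroup.subtree_control
    echo 8-15 > /sys/fs/cgroup/olap/cpuset.cpus    # say, the P-cores
    echo 0    > /sys/fs/cgroup/olap/cpuset.mems    # keep memory on node 0
    echo root > /sys/fs/cgroup/olap/cpuset.cpus.partition
    echo "$PID" > /sys/fs/cgroup/olap/cgroup.procs

The scheduler then keeps everything else off those cores, which is what kills the tail latency, rather than hand-pinning individual threads.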

I do have customers who would be better served by a Proxmox-type solution, but who need to isolate critical loads and/or avoid the problems with asymmetric cores and non-locality in the OLAP space.

IIRC lots of things that have worked for years in qemu-kvm are ignored when added to <VMID>.conf, etc.
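
For example, something along these lines in /etc/pve/qemu-server/<VMID>.conf (values illustrative):

    affinity: 8-15
    numa: 1
    numa0: cpus=0-7,memory=8192,hostnodes=0,policy=bind
    args: -global kvm-pit.lost_tick_policy=discard

where the "args" line is the raw qemu-kvm passthrough, and it's exactly that kind of option that IIRC gets silently dropped in some cases.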

tlamponi | 3 months ago

PVE itself is still made up of a lot of Perl, but nowadays we actually do almost everything new in Rust.

We already support cpusets and pinning for containers and VMs, but that can definitely be improved, especially if you mean something more automated/guided by the PVE stack.
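
For reference, the manual form today looks roughly like this (IDs and core lists made up):

    # VM: /etc/pve/qemu-server/<VMID>.conf
    affinity: 0-3

    # container: /etc/pve/lxc/<CTID>.conf (raw LXC key)
    lxc.cgroup2.cpuset.cpus: 4-7

Something that places these automatically based on the host topology would be the more automated/guided direction you mention.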

If you have something more specific, ideally somewhat actionable, it would be great if you could create an enhancement request at https://bugzilla.proxmox.com/ so that we can actually keep track of these requests.