item 39301130

blitzclone | 2 years ago

Well, KVM is used by Google and AWS and others for their clouds. As such, there are a lot of eyes on KVM code. The vboxdrv kernel module that provides the same functionality in vanilla VBox definitely has fewer people looking at it. It also has anti-features, such as uploading code from the userspace VirtualBox process into the kernel. This is also the largest security issue with vanilla VBox, because a lot of emulation code runs directly in the kernel.

From a performance perspective, it's a bit more complicated. KVM supports modern hardware virtualization features (Intel APICv, AMD AVIC, etc.) that vanilla VBox lacks. You get these in the VirtualBox/KVM version. On the other hand, vanilla VBox emulates most devices in the kernel (see above). So SATA emulation in vanilla VBox is very fast compared to KVM/Qemu or KVM/VirtualBox, for somewhat unfair reasons. Paravirtualized or modern devices, such as virtio or NVMe, are much less affected by this.
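To sidestep the in-kernel SATA emulation advantage, you can attach the disk to a virtio-scsi or NVMe controller instead. A rough sketch with VBoxManage (the VM name "myvm", controller names, and the .vdi path are placeholders; treat this as a config example, not a tested recipe):

```shell
# Add a virtio-scsi controller (paravirtualized, avoids heavy device emulation)
VBoxManage storagectl "myvm" --name "VirtIO" --add virtio

# ...or an NVMe controller instead
VBoxManage storagectl "myvm" --name "NVMe" --add pcie

# Reattach the existing disk image to the new controller
VBoxManage storageattach "myvm" --storagectl "VirtIO" \
  --port 0 --device 0 --type hdd --medium /path/to/disk.vdi
```

Note the guest needs virtio or NVMe drivers for this to boot (Linux guests generally have both built in).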

tl;dr: The performance you get depends on your workload. If it's very interrupt-heavy, VirtualBox/KVM will win. If it uses antiquated virtual devices (SATA), vanilla VirtualBox (with vboxdrv) will have an edge.
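If you want to check whether the host's KVM actually has the interrupt-virtualization features mentioned above enabled, the kvm_intel/kvm_amd module parameters expose them (paths and value formats vary a bit across kernel versions):

```shell
# Intel: APICv enabled? (typically prints Y or N)
cat /sys/module/kvm_intel/parameters/enable_apicv

# AMD: AVIC enabled? (typically prints Y/N or 1/0; off by default on older kernels)
cat /sys/module/kvm_amd/parameters/avic
```

Only the file matching your CPU vendor's module will exist.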

peterhull90 | 2 years ago

And could one swap between the two backends with the same VM image (.vbox + .vdi) to see which one gave the better performance?