
Comparing Filesystem Performance in Virtual Machines

123 points | Sevein | 12 years ago | mitchellh.com | reply

45 comments

[+] miahi|12 years ago|reply
There is a lot of caching involved and it looks like the VM writes are not synchronous - they do not wait for the actual disk to be written. Normally nothing can beat the native access, but in a VM the "disk" is actually a sparse file that can be efficiently cached in RAM. I see the same behavior/speeds in my VMs if the virtual disk has a lot of free space and I have a lot of free RAM on the host. The speeds get "down to earth" if you fill up the host's RAM.
[+] weddpros|12 years ago|reply
3GB/s writes on a single SSD should raise more eyebrows. I dunno what was actually benchmarked, but there's a problem somewhere...
[+] mitchellh|12 years ago|reply
You're assuming the writes are actually going to a physical disk. As I mentioned in the post, the hypervisors are very likely just writing to RAM and not ever committing it to disk. Even when you `fsync()` from a VM, there is no guarantee the hypervisor puts that to disk.

If you look at the graphs, they corroborate this. The "native" disk never really exceeds 500 to 600 MB/s, which is about as fast as my SSD goes. The hypervisors, however, are exceeding multiple GB/s. It must be RAM.

Also, re: "I'm not sure what was actually benchmarked": the benchmarking method is covered at the bottom of the post. I realize it isn't extremely detailed. If you have any questions, I'd be happy to answer them.
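The caching effect described here is easy to observe directly. A minimal sketch (file names are hypothetical; assumes a POSIX system) that times a plain write() against the same write followed by fsync():

```python
import os
import time

# 64 MiB of zeroes to write in one syscall
DATA = b"\0" * (64 * 1024 * 1024)

def timed_write(path, sync):
    """Write DATA to path; optionally fsync before closing. Returns elapsed seconds."""
    start = time.perf_counter()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    os.write(fd, DATA)
    if sync:
        os.fsync(fd)  # ask the kernel to commit the data to the (virtual) disk
    os.close(fd)
    return time.perf_counter() - start

buffered = timed_write("bench_buffered.bin", sync=False)
synced = timed_write("bench_synced.bin", sync=True)
print(f"buffered: {buffered:.3f}s  fsync: {synced:.3f}s")
```

The gap between the two numbers is roughly what the page cache is deferring. Inside a VM, even the fsync'd figure can be optimistic, since the hypervisor may acknowledge the flush while the data still sits in host RAM.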

[+] rdtsc|12 years ago|reply
Writing to /dev/null is even faster I bet.
[+] stefanha|12 years ago|reply
This benchmark is bogus because the iozone -I flag is missing. -I uses O_DIRECT to avoid the page cache.

Due to page cache usage it's hard to say what this benchmark is comparing. The I/O pattern seen by the actual disk, shared folder, or NFS may be different between benchmark runs. It all depends on the amount of RAM available, the state of the cache, readahead, write-behind, etc.

Please rerun the benchmark with -I to get an apples-to-apples comparison.
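What -I changes can be sketched directly (assumes Linux; the file name is hypothetical). O_DIRECT bypasses the page cache but demands block-aligned buffers, which is why iozone needs a dedicated flag for it:

```python
import os
import mmap

BLOCK = 4096
# O_DIRECT requires block-aligned buffers; an anonymous mmap is page-aligned.
buf = mmap.mmap(-1, BLOCK)
buf.write(b"A" * BLOCK)

flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC
try:
    # Bypass the page cache, as iozone -I does
    fd = os.open("direct_test.bin", flags | os.O_DIRECT, 0o644)
except (OSError, AttributeError):
    # Some filesystems (e.g. tmpfs) and platforms lack O_DIRECT; fall back
    fd = os.open("direct_test.bin", flags, 0o644)
os.write(fd, buf)
os.close(fd)
```

Note that, as kika points out below, this only bypasses the guest's cache; the host may still cache the backing file.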

[+] kika|12 years ago|reply
It will avoid page cache in the VM, but will not avoid cache on the host, right?
[+] azinman2|12 years ago|reply
Is it just me, or do the graphs not match the text in several places? For example, in the 64MB random file write graph (http://i.imgur.com/iGxn2H1.png), green is VMware native according to the legend, and it is clearly the highest bar across the board, yet he says "VirtualBox continues to outperform VMware on writes"
[+] icebraining|12 years ago|reply
He's probably talking about the Shared Folders performance.
[+] newman314|12 years ago|reply
It would have been interesting to see a comparison with Xen etc. too.
[+] rbanffy|12 years ago|reply
It's primarily a development environment test, where the host runs OSX. It would be interesting to extend the test to Parallels on Macs and to add a Linux host where KVM and LXC could be used.
[+] jtreminio|12 years ago|reply
just a note: Mitchell Hashimoto is the mastermind behind Vagrant and Packer.
[+] Nux|12 years ago|reply
Would love to see KVM and Xenserver in there; you know, stuff that actual clouds run on.
[+] bradleyland|12 years ago|reply
This is pretty clearly a test of developer related tools, not production cloud server infrastructure. I'm not even sure there's an equivalent of VirtualBox/VMWare shared folders in KVM or Xen, because guests and hosts don't usually share folders in the same way that you do with these workstation virtualization tools.

...

Spoke too soon. A Google search shows there are some methods [1], but their use cases are different.

[1]: http://www.linux-kvm.org/page/9p_virtio

[+] mitchellh|12 years ago|reply
bradleyland is correct: This test was focused primarily on using VMs for development tools. This test was done on a local machine with desktop virtualization software. The opening paragraph mentions I was investigating performance for development environments. This post should not be used for any production applications, since it would make no sense.
[+] liuw|12 years ago|reply
I think you mean KVM and Xen. The Xen hypervisor is an open source project just like KVM, while XenServer is a product that uses the Xen hypervisor.

Just think of the Linux kernel versus Linux distributions.

[+] ajayka|12 years ago|reply
Interesting and timely article! On an Ubuntu guest (Windows host), I install the Samba server and then use the native Windows CIFS client to connect to the Ubuntu guest. This gives me the advantage of the VM's (VirtualBox) native filesystem while letting me use my Windows machine to open files on the guest.

Perhaps this support can be added in some later version of Vagrant.

[+] jtreminio|12 years ago|reply
This is what I would do when I was on Windows. The biggest (really big) downside is that the files live inside the VM and are only accessible when the VM is up and running.
[+] buster|12 years ago|reply
How is it that native is slower than virtual I/O in his tests? I don't get it... if it's only reading some cached data, it's not a real test scenario, is it?

So I suppose the host system caches the reads. Also, how could it possibly be true that native writes are slower than virtual writes?

[+] icebraining|12 years ago|reply
From the article:

It is interesting that sometimes the native filesystem within the virtual machine outperforms the native filesystem on the host machine. This test uses raw read system calls with zero user-space buffering. It is very likely that the hypervisors do buffering for reads from their virtual machines, so they’re seeing better performance from not context switching to the native kernel as much. This theory is further supported by looking at the raw result data for fread benchmarks. In those tests, the native filesystem beats the virtual filesystems every time.
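The distinction the article draws can be sketched as follows (the file name is hypothetical): raw os.read() calls hit the kernel for every chunk, while a buffered stream reads ahead in user space, analogous to fread():

```python
import os

path = "read_test.bin"
with open(path, "wb") as f:
    f.write(os.urandom(1024 * 1024))  # 1 MiB of test data

# Raw path: one read() syscall per 4 KiB chunk, zero user-space buffering
total_raw = 0
fd = os.open(path, os.O_RDONLY)
while chunk := os.read(fd, 4096):
    total_raw += len(chunk)
os.close(fd)

# Buffered path: user-space readahead, analogous to fread()
with open(path, "rb") as f:
    total_buffered = len(f.read())
```

With the raw path, every chunk is a context switch into the kernel, which is exactly the cost a hypervisor-level read cache can hide; with the buffered path the native filesystem wins, as the article's fread results show.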

[+] cwyers|12 years ago|reply
On write, the VM probably reports that data is written to disk as soon as it's written to an in-memory cache, then writes it to the actual disk later. So the write appears faster because the application is being lied to, not because of actual performance. That wouldn't explain the reads, though.
[+] bluedino|12 years ago|reply
Benchmarks are flawed. Combine that with 'virtual' devices and you're bound to get amazingly weird results.
[+] bryanlarsen|12 years ago|reply
It could be because native is running OS X but they're running Ubuntu inside the VM.
[+] contingencies|12 years ago|reply
In the past, industry threw hardware at things. Virtualization reduced this wastefulness somewhat, but now developers are fighting back against unreliable performance. If you are developing a performance-sensitive system, executing similar tests routinely but with real workloads should be part of your test process... and certainly occur before deployment. Third party tests on some hardware with some version of some code on some kernel, such as what we see here, are really neither here nor there.
[+] cgbystrom|12 years ago|reply
With our team, we also found shared folders performance to be too low. Our Python framework/app is very read-heavy and stat()s a lot of files (the Python module loading system isn't your friend).

We ended up using the synchronization feature in PyCharm to continually rsync files from the native FS into the VirtualBox instance. Huge perf improvement, but a little more cumbersome for the developers. So far it has been working well; PyCharm's sync feature does what it is supposed to.
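The one-way, mtime-based copy that rsync (and PyCharm's sync feature) performs can be sketched in a few lines; directory and file names here are hypothetical:

```python
import os
import shutil

def sync_tree(src, dst):
    """One-way sync: copy files from src into dst only when missing or stale,
    comparing modification times (roughly what `rsync -u` does)."""
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)  # copy2 preserves mtimes for the next comparison

# Example: push a source tree into the VM-visible directory
os.makedirs("native_src/pkg", exist_ok=True)
with open("native_src/pkg/mod.py", "w") as f:
    f.write("X = 1\n")
sync_tree("native_src", "vm_shared")
```

The guest then reads from its fast native filesystem instead of a shared folder, at the cost of the extra sync step.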

[+] polskibus|12 years ago|reply
I would love to see MS HyperV added to this benchmark or similar.
[+] Thaxll|12 years ago|reply
No KVM / Xen ... :/
[+] k_bx|12 years ago|reply
With a big repository, if you want to use zsh you'll have to use NFS; otherwise my VirtualBox just hangs for 30 seconds before it can show "git status" in the prompt. So the only option for me is NFS (with VirtualBox).
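For reference, switching a Vagrant synced folder to NFS is a one-line change; a minimal Vagrantfile sketch (the box name is hypothetical):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"  # hypothetical box name
  # Serve the project directory over NFS instead of VirtualBox shared folders
  config.vm.synced_folder ".", "/vagrant", type: "nfs"
end
```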
[+] fsiefken|12 years ago|reply
I thought it was a test of the performance of different filesystems within the guest OS, like for example btrfs with lzo vs ext4.