The VMware community is slowly waking up to NFS, and for good reason: it has many benefits over iSCSI, and it is in many ways ideal for the shared-storage component of a DR installation, where older servers can easily be re-purposed with new disks to provide reasonable chunks of networked storage.
But getting the most out of NFS needs some careful configuration. Hardware choice aside, one of the main performance-limiting factors I’ve found is the underlying file system. I’ve looked in detail at VM disk performance running from NFS storage with the four main choices, so which Linux file system is the best for NFS servers providing shared storage for VMware?
The Test Rigs
I’ve performed tests on three targets:
- a Pentium-4 with a single SATA drive (without SATA NCQ)
- a Dell 2950 with four SATA drives running in RAID-10 (without SATA NCQ)
- a Dell 2950 with six SATA drives running as 3x RAID-1 volumes (with SATA NCQ), then striped in Linux with mdadm to create a large RAID-10 volume (mdadm required due to BIOS volume size restrictions)
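For reference, striping the hardware RAID-1 virtual disks into the large volume is a one-liner with mdadm. This is a sketch only: the /dev/sd* names are illustrative and will differ depending on how your controller presents its virtual disks, and the mdadm.conf path varies by distribution.

    # Stripe the three hardware RAID-1 virtual disks into one md device (RAID-10 overall).
    mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

    # Record the array so it assembles automatically at boot (Debian/Ubuntu path shown).
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf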
ext3
The default file system for many 2.6 kernels, ext3 is old and reliable. But it just isn’t designed for files the size of VMDKs and as a result it can’t really keep up. Sequential write performance seemed to be limited to about 60MB/s on my test platforms, and delete performance was so bad that ESXi would time out deleting files as small as 12GB.
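For context, the sequential figures quoted throughout are of the order you would see from a simple large-block write to the NFS mount. This is only a rough illustration of the kind of test, not the exact methodology; the mount point and sizes are placeholders.

    # Crude sequential write test from an NFS client; O_DIRECT keeps the client
    # page cache out of the way so the server and network dominate the result.
    dd if=/dev/zero of=/mnt/nfstest/seq.tmp bs=1M count=8192 oflag=direct
    rm /mnt/nfstest/seq.tmp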
JFS
JFS has credible underpinnings from IBM, but I found that in some configurations with a sustained heavy write workload, the background commit thread (jfsCommit) could use steadily more and more CPU, which ultimately, and severely, limited throughput.
XFS
XFS has excellent support for the enormous files needed for VMware, and in most respects it is ideal: stable, fast and well proven. Sequential write performance on my test rig was nearly double that of ext3, at over 100MB/s.
When tuned a little, in particular when mounted with nobarrier, delete performance seems as good as VMFS: 2TB VMDKs could be deleted on the PowerEdge test rigs almost instantly.
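As a minimal sketch, assuming the array appears as /dev/md0 and the exported store lives at /srv/vmstore (both names illustrative), the XFS setup amounts to:

    # mkfs.xfs reads sensible stripe geometry from md devices automatically.
    mkfs.xfs /dev/md0

    # nobarrier disables write barriers: only sensible with a battery-backed
    # controller cache or a UPS, since it trades integrity for speed.
    mount -o noatime,nobarrier /dev/md0 /srv/vmstore

    # Equivalent /etc/fstab entry:
    # /dev/md0  /srv/vmstore  xfs  noatime,nobarrier  0  0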
I did find a corner case when used with mdadm volumes, where a random mixed read-write workload (which is of such importance with VMs) appeared to throttle the array queue depth to just one outstanding IO, with a devastating performance impact, effectively reducing array performance to that of a single disk.
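The symptom is straightforward to watch for on the NFS server with iostat while the VMs are busy: if the average queue size on the md device sits at around 1 under mixed random IO, you are hitting it. Device names here are illustrative, and the column name (avgqu-sz) varies between sysstat versions.

    # Watch extended statistics every 2 seconds for the array and its members;
    # look at the average queue size (avgqu-sz) column.
    iostat -x md0 sdb sdc sdd 2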
ext4
ext4-based NFS consistently provided sequential MB/s and random IOPS right at the top of the table. It has new design features that make it very much more suitable than its earlier cousin for the very large files of interest here, and sequential write performance was just as good as XFS. It was immune to the corner case affecting XFS with mdadm, and I could never drive it into the CPU race condition seen with JFS.
The only downside is that delete performance is very much lower than XFS, which effectively limits the safe working VMDK size, depending on the speed of the NFS server and its workload. I found VMDKs up to about 1.2TB could be consistently deleted OK.
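Setting up the ext4 equivalent is just as simple. The stride and stripe-width values below are purely illustrative (they assume a 64KB chunk and three data members); derive them from your own array geometry.

    # Create ext4 aligned to the underlying stripe:
    # stride = chunk size / 4KB block = 16, stripe-width = stride x data disks = 48.
    mkfs.ext4 -E stride=16,stripe-width=48 /dev/md0

    # noatime avoids needless metadata writes for VM images.
    mount -o noatime /dev/md0 /srv/vmstore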
Conclusions
It seems XFS is the choice unless you are using mdadm, in which case ext4 is the way to go. ext4 needs Linux kernel 2.6.28 or higher, effectively taking Debian out of the running, but Ubuntu 10.04 LTS (or 10.10) is an easy jump and uses ext4 by default.
The only free solution that VMware actually supports is the now-ancient Fedora 8, so if you are looking at that route then only XFS should really be considered: Fedora 8 is too old for ext4, and ext3 is too slow.
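Whichever file system you settle on, the export and datastore plumbing look the same. A minimal sketch, assuming the store is mounted at /srv/vmstore, the ESXi hosts live on 192.168.1.0/24 and the NFS server is 192.168.1.10 (all illustrative):

    # /etc/exports on the NFS server; sync matters for VM data integrity and
    # no_root_squash is needed because ESXi accesses the share as root.
    /srv/vmstore  192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)

    # Apply the export, then attach it from the ESXi console (or the vSphere client):
    exportfs -ra
    esxcfg-nas -a -o 192.168.1.10 -s /srv/vmstore vmstore01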