Creating an NFS Server on Debian
Debian Linux is ideal for use as an NFS server for ESX and ESXi, providing anything from a high-speed datastore for production VMs to a backup datastore for a home ESX box running on an old Pentium III PC.
Installing Debian as an ESX(i) VM for NFS
By installing Debian itself as a VM on an ESX(i) host, full hardware monitoring and NIC failover are also provided by ESX(i), greatly simplifying the installation. The real-time monitoring in ESXi 4.1 in particular is comprehensive, providing graphs of storage latencies, throughput, CPU utilisation etc., plus hardware health status on supported platforms (see Esx-health.pl for a script to check the hardware health periodically).
Host requirements are essentially any box that will run ESX(i). If the box has hardware RAID, it must also have a battery-backed write-cache fitted. Ideally, the host should have:
- Dual core CPU
- 4GB RAM
- 3 GbE NICs (two for NFS, one for host management)
- Hardware RAID with BBWC
- Redundant PSUs
Such a configuration should be able to max-out a GbE network, transferring at over 100MB/s.
The ESX(i) installation should have two vSwitches defined, one for host management with one or more NICs and a second vSwitch for NFS traffic with two or more NICs.
Debian VM Settings
- Linux "Debian GNU/Linux 5 (32-bit)"
- 2 vCPU (1 vCPU on a dual-core server)
- RAM per the table below
- 2 NICs (one on each vSwitch)
- 4GB thick provisioned disk for OS installation
- Second disk, thick-provisioned, of size required up to 2TB
- Additional thick-provisioned disks if total capacity is > 2TB
Where the total capacity to be provided exceeds 2TB, the underlying RAID controller must be able to present LUNs no larger than 2TB to ESX(i). However some controllers, notably Dell's Perc 5i and 6i, cannot divide RAID-10 or RAID-50 volumes into multiple smaller LUNs.
Where such a problem exists, software RAID with the Debian VM can be used to run, for example, RAID-0 over several virtual disks that are themselves provided by datastores on RAID-1 volumes, thereby providing the desired RAID-10 and without the 2TB capacity restriction. See Running Software RAID on Debian.
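As a sketch of this approach (the device names /dev/sdb and /dev/sdc are assumptions for the two virtual disks, each backed by a datastore on a RAID-1 volume):

```shell
# Install mdadm if not already present
apt-get install mdadm

# Create a RAID-0 (stripe) array over the two virtual disks; since each
# virtual disk is backed by a RAID-1 volume, the net result is RAID-10
# without the 2TB LUN restriction
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# Record the array in mdadm.conf so it assembles automatically at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

The resulting /dev/md0 device is then partitioned and formatted in the same way as a plain virtual disk.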
|Host physical RAM||Debian VM RAM|
|6GB or more||4 GB|
Installing Debian on Bare Metal
Particularly when installing on older systems, a bare-metal installation will probably be preferred (or necessary). Hardware requirements are pretty basic:
- Pentium-Pro or better CPU
- 192MB RAM (with less RAM, the NFS server becomes unstable when used with ESXi)
Hardware management capabilities will be dependent on the sensors available in the machine. For guidance see:
- Hardware Monitoring with Debian
- Monitoring SMART Status on Debian
- Enabling power management on Debian
- Email Alerting with Debian
Debian can be run from any bootable device including simple flash storage - see Tuning Debian to Run From Flash.
A Debian ISO can be downloaded from http://www.debian.org/distrib/netinst.
- Mostly defaults can be accepted
- A password for the root account must be defined, plus one additional account
- Deselect all predefined software selections except "Standard System"
If SSH access will be required, this must be installed:
apt-get install ssh
Installing NFS Server
apt-get install nfs-kernel-server nfs-common portmap
If prompted, choose to keep existing versions of files.
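As a quick sanity check (not part of the original package instructions), the RPC registrations and kernel NFS threads can be verified once the install completes:

```shell
# List the RPC services registered with portmap;
# portmapper, mountd and nfs should all appear
rpcinfo -p localhost

# Confirm the kernel NFS server threads are running
ps ax | grep [n]fsd
```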
Partitions must be created manually to ensure correct alignment when used with Advanced Format SATA drives or any form of hardware RAID. See Running VMs from NFS Datastores for specific instructions.
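As a hedged sketch (the device name /dev/sdb is an assumption for the second disk; the linked article gives the authoritative procedure), an aligned partition can be created by starting it at sector 2048, a 1MiB boundary that suits both 4K Advanced Format drives and common RAID stripe sizes:

```shell
# Create a single partition starting at sector 2048 (1MiB boundary),
# aligned for 4K-sector drives and typical RAID stripe sizes
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary 2048s 100%

# Verify the partition start sector
parted /dev/sdb unit s print
```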
Formatting - File System Choice
There are several options for the file system, notably ext3, XFS, reiser-fs and JFS. The choice depends in part on the hardware available:
- ext3 generally does not perform well on writes, so should be avoided
- JFS must be used for NFS servers operating from single SATA drives, since creating or deleting large VMDK files will fail due to time-out with the other file systems.
|File System||Strengths||Weaknesses||Typical Use|
|ext3||Very stable, default for most distros||Slowest||High-speed hardware|
|XFS||Faster than ext3||Can suffer corruption in power failures||Considered the 'default' choice for NFS datastores|
|reiser-fs||Faster than XFS||Not now maintained|| |
|JFS||Fastest file create and delete; lowest CPU usage||Needs to be installed; easily corrupted by power failure (see Recovering JFS Partitions following Power Failure)||NFS Servers without hardware RAID|
For specifics on creating and tuning each file system, see the respective articles:
- Creating and Tuning ext3 Partitions
- Creating and Tuning XFS Partitions
- Creating and Tuning Reiser-FS Partitions
- Creating and Tuning JFS Partitions
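For illustration only (the linked articles give the full tuning detail; /dev/sdb1 is an assumed device name for the data partition), formatting with ext3 might look like:

```shell
# Format the data partition as ext3 with a volume label
mkfs.ext3 -L share1 /dev/sdb1

# Reduce the reserved-blocks percentage from the 5% default to 1% --
# root-reserved space is of little use on a dedicated datastore disk
tune2fs -m 1 /dev/sdb1
```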
A general mount folder should be created, with a specific folder within it for each share (one per partition created). For example,
mkdir /mnt/share1
mkdir /mnt/share2
The mounts are then defined by adding a line for each to /etc/fstab using nano, for example (for an ext3 partition),
/dev/sdb1 /mnt/share1 ext3 rw,async,noatime 0 2
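Once the fstab entry is in place, the share can be mounted and checked without a reboot:

```shell
# Mount everything listed in /etc/fstab that is not already mounted
mount -a

# Confirm the partition is mounted with the expected options
mount | grep /mnt/share1
df -h /mnt/share1
```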
The NFS shares themselves are defined in /etc/exports, which again can be edited using nano. Each line names the folder to export, followed by the hosts allowed access and the export options, for example (the options shown are typical for ESX(i) datastores; no_root_squash is needed because ESX(i) mounts as root),
/mnt/share1 192.168.10.0/255.255.255.0(rw,no_root_squash,async,no_subtree_check)
192.168.10.0/255.255.255.0 in this example grants access to any host on that subnet. Where more granular access control is required, a separate line is added for each host (or subnet). Requests to mount shares from hosts or subnets not specified will be denied.
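After editing /etc/exports, the export table must be reloaded before ESX(i) can mount the shares; showmount gives a quick view of what is being offered:

```shell
# Re-export all entries in /etc/exports
exportfs -ra

# List the active exports with their access lists and options
exportfs -v

# Show what a client would see when browsing this server
showmount -e localhost
```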