This article is being updated. Please be aware that the content herein, including but not limited to version numbers and slight syntax changes, may not match the output from the most recent versions of Bright. This notice will be removed when the content has been updated.
Do you have customers who have tuned their network stacks? I’m getting sluggish NFS performance from my NFS file server.
Yes, we do have customers who tune their network stacks, but every site has different needs. For example, some advanced Broadcom NICs have TCP offloading capabilities and significant caches, and they introduce much less processing overhead.
You can experiment with the values of the sysctl parameters. The default values in RHEL are rather anachronistic and too conservative for modern systems.
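As a starting point for experimentation, the kernel's network buffer limits are a common place to begin. The values below are purely illustrative assumptions, not recommendations; benchmark against your own workload before deploying anything.

```shell
# Example only: commonly raised network buffer limits for NFS-heavy nodes.
# These numbers are illustrative starting points, not tuned recommendations.
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
# To persist across reboots, place the same settings in /etc/sysctl.conf
# (or a file under /etc/sysctl.d/) and run "sysctl -p".
```

Settings made with `sysctl -w` take effect immediately but are lost on reboot, which makes them convenient for A/B testing before committing anything permanently.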
Other things that you might want to look at are:
The rsize and wsize values used when mounting, as shown below:
[kerndev->category[default]->fsmounts[/home]]% show
Parameter Value
-------------------------------- ------------------------------------------------
Device $localnfsserver:/home
Dump no
Filesystem nfs
Filesystem Check 0
Mount options rsize=32768,wsize=32768,hard,intr,async
Mountpoint /home
RDMA no
Revision
[kerndev->category[default]->fsmounts[/home]]%
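To experiment with larger block sizes, the mount options can be edited at the same cmsh level. A hypothetical session might look like the following (the 65536 values are only an example; commit and remount before measuring):

```
[kerndev->category[default]->fsmounts[/home]]% set mountoptions rsize=65536,wsize=65536,hard,intr,async
[kerndev->category[default]->fsmounts[/home]]% commit
```

Note that the NFS client and server negotiate the final block size, so the values you request are an upper bound rather than a guarantee.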
The MTU size is also important because you do not want these large blocks to be fragmented into too many small packets.
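A rough back-of-the-envelope calculation shows why MTU matters here. The helper function below is hypothetical and assumes roughly 40 bytes of IP/TCP header overhead per frame; it estimates how many Ethernet frames one 32 KiB NFS block needs at a given MTU:

```shell
# Hypothetical estimate: frames needed to carry one NFS block at a given MTU,
# assuming ~40 bytes of IP/TCP headers per frame.
packets_per_block() {
  local block=$1
  local payload=$(( $2 - 40 ))
  echo $(( (block + payload - 1) / payload ))
}

packets_per_block 32768 1500   # standard MTU -> 23
packets_per_block 32768 9000   # jumbo frames -> 4
```

Going from a 1500-byte MTU to 9000-byte jumbo frames cuts the per-block frame count by roughly a factor of five, which reduces interrupt and processing overhead accordingly.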
You can also increase the number of threads run by the NFS daemon in /etc/sysconfig/nfs:
RPCNFSDCOUNT=16
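After changing RPCNFSDCOUNT, the NFS server must be restarted for the new thread count to take effect. A quick sketch of how you might verify it (the service name varies by release; it is `nfs-server` on systemd-based RHEL and `nfs` on older releases):

```shell
# Restart the NFS server so the new RPCNFSDCOUNT takes effect.
systemctl restart nfs-server

# The first field of the "th" line is the current number of nfsd threads.
# If the threads are consistently busy, consider raising the count further.
grep ^th /proc/net/rpc/nfsd
```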
Finally, you may need to tune your disk I/O performance.
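Before tuning disk I/O, it helps to establish a baseline so you can tell whether the bottleneck is the network or the storage. A minimal sketch, writing a small test file with an fsync to defeat the page cache (the size here is deliberately tiny; use a file larger than RAM for a realistic measurement, and run it on the server's data filesystem):

```shell
# Quick, crude write-throughput spot check. The timing that dd reports
# includes the fsync, so it reflects actual disk writes, not just caching.
TESTFILE=$(mktemp)
dd if=/dev/zero of="$TESTFILE" bs=1M count=16 conv=fsync
SIZE=$(stat -c %s "$TESTFILE")
echo "$SIZE bytes written"
rm -f "$TESTFILE"
```

If local throughput on the server is already poor, no amount of network or NFS tuning will help; look at the RAID layout, disk scheduler, and filesystem mount options first.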