Single Client Performance - CIFS and NFS on Linux

A CentOS 6.2 virtual machine was used to evaluate the NFS and CIFS performance of the NAS when accessed from a Linux client. We chose IOZone as the benchmark for this case. In order to standardize testing across multiple NAS units, the CIFS and NFS shares were mounted at startup with the following /etc/fstab entries:

//<NAS_IP>/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER cifs rw,username=guest,password= 0 0

<NAS_IP>:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER nfs rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=<NAS_IP>,mountvers=3,mountproto=udp,local_lock=none,addr=<NAS_IP> 0 0

The following IOZone command was used to benchmark the CIFS share:

iozone -aczR -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile -b <NAS_NAME>_CIFS_EXCEL_BIN.xls > <NAS_NAME>_CIFS_CSV.csv

IOZone provides benchmark numbers for a multitude of access scenarios with varying file sizes and record lengths. Some of these are very susceptible to caching effects on the client side. This is evident in some of the graphs in the gallery below.
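One quick sanity check for spotting those caching effects: any figure well above the wire speed of the client link can only have been served from the client's page cache, not the NAS. A minimal sketch (hypothetical file name and simplified two-column format for illustration; real IOZone output needs trimming first), assuming a single GbE client link with a ceiling of roughly 125 MBps:

```shell
# Hypothetical extract: test name and averaged throughput in MBps,
# mirroring the kind of summary table presented below.
cat > summary.txt <<'EOF'
Read 120
Re-Read 122
Record-Re-Write 1719
EOF

# Anything well above the ~125 MBps ceiling of a GbE link must be
# coming from the client's page cache rather than the NAS.
awk '$2 > 125 { print $1, "-> client-side caching suspected" }' summary.txt
```

Running this flags only the Record-Re-Write row, matching the intuition that a spinning-disk NAS cannot deliver over 1 GBps through a gigabit pipe.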

Readers interested in the hard numbers can refer to the CSV program output here.

The NFS share was also benchmarked in a similar manner with the following command:

iozone -aczR -g 2097152 -U /nfs_test_mount/ -f /nfs_test_mount/testfile -b <NAS_NAME>_NFS_EXCEL_BIN.xls > <NAS_NAME>_NFS_CSV.csv

The IOZone CSV output can be found here for those interested in the exact numbers.

A summary of the bandwidth numbers for various tests, averaged across all file and record sizes, is provided in the table below. As noted previously, some of these numbers are skewed by caching effects. Cross-referencing the actual CSV outputs linked above makes the affected entries obvious.

Synology DS2015xs - Linux Client Performance (MBps)
IOZone Test CIFS NFS
Init Write 86 82
Re-Write 85 82
Read 50 120
Re-Read 50 122
Random Read 33 70
Random Write 80 82
Backward Read 32 58
Record Re-Write 56 1719*
Stride Read 45 117
File Write 85 83
File Re-Write 85 82
File Read 35 95
File Re-Read 36 97
*: Benchmark number skewed due to caching effect
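Averages like those in the table can be reproduced from the raw CSV outputs. A minimal sketch, assuming a hypothetical pre-trimmed extract with one row per file/record size combination (the real IOZone CSV needs its header rows stripped first):

```shell
# Hypothetical pre-trimmed extract of one IOZone test:
# columns are file size (KB), record size (KB), throughput (KBps).
cat > nfs_write_extract.txt <<'EOF'
1024 4 84000
2048 8 82000
4096 16 83000
EOF

# Average the throughput column across all file/record size combinations.
awk '{ sum += $3; n++ } END { printf "average: %.0f KBps (~%.0f MBps)\n", sum/n, sum/n/1024 }' nfs_write_extract.txt
# -> average: 83000 KBps (~81 MBps)
```

Note that averaging across every combination is exactly why one cached outlier (like the 1719 MBps Record Re-Write figure) can dominate a row in the summary table.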
Comments

  • DCide - Friday, February 27, 2015 - link

    Ganesh, thanks for the response. Unless you really know the iperf code (I sure don't!) I don't believe you can make many conclusions based on the iperf performance, considering you hit a CPU bottleneck. There's no telling how much of that CPU went to other operations (such as test data creation/reading) rather than getting data across the pipe. Because of the bottleneck, the iperf results could easily have no relationship whatsoever to SSD RAID R/W performance across the network, which might not be bottlenecking at all (other than the 10GbE limits themselves, which is what we want).

    Could you please run a test with a couple of concurrent robocopys (assuming you can run multiple instances of robocopy)? I'm not sure the number of threads necessarily affects whether both teamed network interfaces are utilized. Please correct me if I'm wrong, but I think it's worth a try. In fact, if concurrent robocopys don't work, it might be worth trying concurrently running any other machine you have available with a 10GbE interface, to see if this ~1GB/s barrier can be broken.
  • usernametaken76 - Friday, February 27, 2015 - link

    Unless we're purchasing agents for the government, can we avoid terms like "COTS"? It has an odor of bureaucracy associated with it.
  • FriendlyUser - Saturday, February 28, 2015 - link

    I am curious to find out how it compares with the AMD-based QNAP 10G NAS (http://www.anandtech.com/show/8863/amd-enters-nas-... I suppose the AMD cores, at 2.4GHz, are much more powerful.
  • Haravikk - Saturday, February 28, 2015 - link

    I really don't know what to make of Synology; the hardware is usually pretty good, but the DSM OS just keeps me puzzled. On the one hand it seems flexible, which is great, but the version of Linux is a mess, as most tools are implemented via a version of BusyBox that they seem unwilling to update, even though it has known bugs in many of its tools.

    Granted you can install others, for example a full set of GNU tools, but there really shouldn't be any need to do this if they just kept it up-to-date. A lack of ZFS or even access to BTRFS is disappointing too, as it simply isn't possible to set these up yourself unless you're willing to waste a disk (since you HAVE to set up at least one volume before you can install these yourself).

    I dunno; if all I'm looking for is storage then I'm still inclined to go Drobo for an off-the-shelf solution, otherwise I'd look at a ReadyNAS system instead if I wanted more flexibility.
  • thewishy - Wednesday, March 4, 2015 - link

    I think the point you're missing is that people buying this sort of kit are doing so because they want to "Opt out" of managing this stuff themselves.
    I'm an IT professional, but this isn't my area. I want it to work out of the box without much fiddling. The implementation under the hood may be ugly, but I'm not looking under the hood. For me it stores my files with a decent level of data security (No substitute for backup) and allows me to add extra / larger drives as I need more space, and provides a decent range of supported protocols (SMB, iSCSI, HTTP, etc)
    ZFS and BTRFS are all well and good, but I'm not sure what practical advantage they would bring me.
  • edward1987 - Monday, February 22, 2016 - link

    You can get 1815+ a bit cheaper if you don't really need enterprise class:
    http://www.span.com/compare/DS1815+-vs-DS2015xs/46...
  • Asreenu - Thursday, September 14, 2017 - link

    We bought a couple of these a year ago. All of them had component failures, and support is notorious for running you through hoops until you give up because you don't want to be without access to your data for so long. They have ridiculous requirements to prove your purchase before they even reply to your question. In all three cases we ended up buying replacements and figuring out how to restore data ourselves. I would stick with Netgear for the support alone because that's a major sell. Anandtech shouldn't give random ratings to things they don't have experience with. Just announcing they have support doesn't mean a thing.
