Single Client Performance - CIFS and NFS on Linux

A CentOS 6.2 virtual machine was used to evaluate the NFS and CIFS performance of the NAS when accessed from a Linux client. We chose IOZone as the benchmark for this purpose. In order to standardize the testing across multiple NAS units, we mount the CIFS and NFS shares at startup with the following /etc/fstab entries:

//<NAS_IP>/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER cifs rw,username=guest,password= 0 0

<NAS_IP>:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER nfs rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=<NAS_IP>,mountvers=3,mountproto=udp,local_lock=none,addr=<NAS_IP> 0 0
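
Once the entries are in place, the shares can be brought up and verified without a reboot. A minimal sketch, assuming the placeholder paths above have been substituted with real ones:

sudo mount -a                 # mount everything in /etc/fstab that is not already mounted
mount | grep -E 'cifs|nfs'    # confirm both shares are attached with the expected options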

The following IOZone command was used to benchmark the CIFS share:

iozone -aczR -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile -b <NAS_NAME>_CIFS_EXCEL_BIN.xls > <NAS_NAME>_CIFS_CSV.csv
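
For reference, a breakdown of the options used above, per the iozone man page:

# -a  full automatic mode: run all tests over a range of file and record sizes
# -c  include close() in the timing calculations
# -z  with -a, also test the small record sizes against large files
# -R  generate an Excel-compatible report
# -g  maximum file size (in KB) for auto mode; 2097152 KB = 2 GB
# -U  mount point to unmount and remount between tests, flushing the client cache
# -f  temporary file to use for the tests
# -b  filename for the binary Excel output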

IOZone provides benchmark numbers for a multitude of access scenarios with varying file sizes and record lengths. Some of these are very susceptible to caching effects on the client side. This is evident in some of the graphs in the gallery below.
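
Where the caching impact needs to be isolated, there are a couple of standard mitigations. A minimal sketch (these steps were not used for the published numbers), assuming root access on the client:

sync                                  # flush dirty pages out to the NAS
echo 3 > /proc/sys/vm/drop_caches     # drop the page cache, dentries and inodes before a run

# Alternatively, iozone's -I flag requests O_DIRECT to bypass the client's
# buffer cache entirely, where the underlying filesystem supports it:
iozone -aczRI -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile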

Readers interested in the hard numbers can refer to the CSV program output here.

The NFS share was also benchmarked in a similar manner with the following command:

iozone -aczR -g 2097152 -U /nfs_test_mount/ -f /nfs_test_mount/testfile -b <NAS_NAME>_NFS_EXCEL_BIN.xls > <NAS_NAME>_NFS_CSV.csv

The IOZone CSV output can be found here for those interested in the exact numbers.

A summary of the bandwidth numbers for the various tests, averaged across all file and record sizes, is provided in the table below. As noted previously, some of these numbers are skewed by caching effects. Referring to the actual CSV outputs linked above makes it obvious which entries are affected.
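
For anyone recomputing these averages from the raw output, note that IOZone reports throughput in KBps. A minimal awk sketch, assuming the relevant throughput column has already been extracted into a whitespace-delimited file (the file name and column number are placeholders):

awk '{ sum += $3; n++ } END { if (n) printf "%.0f MBps\n", sum / n / 1024 }' cifs_read_column.txt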

Synology DS2015xs - Linux Client Performance (MBps)

IOZone Test        CIFS     NFS
Init Write           86      82
Re-Write             85      82
Read                 50     120
Re-Read              50     122
Random Read          33      70
Random Write         80      82
Backward Read        32      58
Record Re-Write      56   1719*
Stride Read          45     117
File Write           85      83
File Re-Write        85      82
File Read            35      95
File Re-Read         36      97

*: Benchmark number skewed due to caching effect
Comments

  • M4stakilla - Monday, March 2, 2015

    I currently have a 1TB M550 and 6x 4TB desktop HDDs (will expand to 8x) in RAID5 + an offline backup (5x 4TB)

    So nothing exceeds 500MB/sec, and I have no real upgrade plans for that either,

    but it would be a shame to waste 400MB/sec of the 500MB/sec on stupid network limitations.

    4x 1Gbit teamed might be worth a look though, thanks
  • usernametaken76 - Friday, February 27, 2015

    Yes, a Mac Mini with Thunderbolt and, to give just one example, a LaCie 5big Thunderbolt (in sizes from 10 to 30 TB) does offer exactly this, at almost twice the bandwidth. The Thunderbolt 2 models offer even more. These are geared more towards video editing but provide every bit of the bandwidth you crave.
  • M4stakilla - Sunday, March 1, 2015

    Thanks for the advice!

    Looking further into Thunderbolt... cabling seems quite expensive though: 300+ euro for 10m, 500+ euro for 20m :(

    For ethical reasons, I'm trying to avoid Apple at all costs, so no Mac Mini for me...
    Also, the LaCie 5big is a bit silly, as I already have the HDDs and the LaCie comes with HDDs included.
  • usernametaken76 - Tuesday, March 3, 2015

    You can get empty four-drive Thunderbolt cases from OWC. And of course Thunderbolt is available for PC motherboards via add-in card; Asus makes a good Z97 board for about $400 with everything but the kitchen sink. Not sure why you're seeing such high prices for a 10m cable. They shouldn't be more than $50 for a 6m cable. They were working on optical extensions to the original copper cabling (with Mini-DP headers)... perhaps that's what you're seeing?
  • usernametaken76 - Tuesday, March 3, 2015

    Make that $39 for a 2m cable. I believe you are looking at active optical cables that you wouldn't need unless you have to have a very long run for some reason. Is there a reason the storage has to be so far away from the workstation?
  • DCide - Friday, February 27, 2015

    I'm unclear about the DAS tests. It appears you were testing throughput to a single Windows Server 2012 client. I would expect the ATTO read throughput to top out at around 1200MBps, and the real-world read performance to top out around 900-950MBps, as it did.

    I thought teaming didn't usually increase throughput to a single client from the same source. I imagine Synology's claim of around 1900MBps throughput will pan out if two clients are involved, perfectly in line with your real-world throughput of 950MBps to a single client.
  • usernametaken76 - Friday, February 27, 2015

    A single client with multiple transfers would be treated as such.
  • usernametaken76 - Friday, February 27, 2015

    That is, provided the single client also has teaming configured.
  • DCide - Friday, February 27, 2015

    I think teaming was configured - that was the point of using Windows Server 2012 for the client, if I understood correctly.

    So it would appear that both tests (ATTO & real world) only consisted of a single transfer. I don't see any evidence that two Blu-ray folders were transferred concurrently, for example.
  • ganeshts - Friday, February 27, 2015

    Our robocopy tests (real world) were run with the /MT:32 option. The two Emulex SFP+ ports on Windows Server 2012 were also teamed. In one of the screenshots, you can actually see them treated separately (no teaming), with iPerf reporting around 2.8 Gbps each. In the teamed case, iPerf reported around 5 Gbps. iPerf was run with 16 simultaneous transfers.

    I will continue to do more experiments with other NAS units to put things in perspective in future reviews. As of now, this is a single data point for the Synology DS2015xs.
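
    For readers looking to replicate this methodology, the commands involved are straightforward. A hedged sketch based on the description above (the IP address, share and destination paths are placeholders; iPerf 2 syntax assumed):

    iperf -s                       # on the server end of the link
    iperf -c <NAS_IP> -P 16        # on the client: 16 parallel TCP streams

    robocopy \\<NAS_IP>\share D:\dest /E /MT:32    # multi-threaded copy with 32 threads, as in the real-world test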
