Single Client Performance - CIFS and NFS on Linux

A CentOS 6.2 virtual machine was used to evaluate the NFS and CIFS performance of the NAS when accessed from a Linux client. We chose IOzone as the benchmark for this case. To standardize testing across multiple NAS units, the CIFS and NFS shares are mounted at startup with the following /etc/fstab entries.

//<NAS_IP>/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER cifs rw,username=guest,password= 0 0

<NAS_IP>:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER nfs rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=<NAS_IP>,mountvers=3,mountproto=udp,local_lock=none,addr=<NAS_IP> 0 0
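Note that the options string is a single fstab field and must therefore contain no spaces. A quick well-formedness check for fstab-style lines can be sketched as follows (the sample entries and `/mnt` paths below are placeholders, not the actual test configuration):

```shell
# Write two sample fstab entries (placeholders kept) to a scratch file.
cat > /tmp/fstab_sample <<'EOF'
//<NAS_IP>/PATH_TO_SMB_SHARE /mnt/cifs cifs rw,username=guest,password= 0 0
<NAS_IP>:/PATH_TO_NFS_SHARE /mnt/nfs nfs rw,vers=3,rsize=32768,wsize=32768 0 0
EOF

# A valid fstab line has exactly six whitespace-separated fields:
# device, mount point, type, options, dump, pass. A stray space inside
# the options string splits it into a seventh field and breaks the mount.
awk 'NF != 6 { print "line " NR ": expected 6 fields, got " NF; bad = 1 }
     END { exit bad }' /tmp/fstab_sample && echo "fstab entries look well-formed"
```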

The following IOzone command was used to benchmark the CIFS share:

iozone -aczR -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile -b <NAS_NAME>_CIFS_EXCEL_BIN.xls > <NAS_NAME>_CIFS_CSV.csv
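For reference, a non-authoritative summary of the flags in this invocation, as documented in the IOzone man page:

```shell
# Breakdown of the IOzone invocation used above (per the iozone man page):
#   -a            full automatic mode (sweeps file and record sizes)
#   -c            include close() in the timing calculations
#   -z            with -a, test all possible record sizes, not just defaults
#   -R            generate an Excel-compatible report
#   -g 2097152    cap the maximum file size at 2 GB (value is in KB)
#   -U <mount>    unmount and remount this mount point between tests
#   -f <file>     path of the temporary test file on the share
#   -b <file>     filename for the binary Excel output
```

The -U remount between tests helps flush the client-side cache, though, as noted below, caching effects still show up in some results.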

IOzone provides benchmark numbers for a multitude of access scenarios with varying file sizes and record lengths. Some of these are very susceptible to caching effects on the client side, as is evident in some of the graphs in the gallery below.
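One simple way to spot cache-inflated results: assuming all traffic traverses a single 10GbE link, no genuine transfer can exceed roughly 1200 MBps of wire speed, so anything above that had to be served from the client's page cache. A sketch of such a filter, using hypothetical sample numbers:

```shell
# Hypothetical per-test throughput figures in MBps.
cat > /tmp/results.txt <<'EOF'
Read 120
Random Read 70
Record Re-Write 1719
EOF

# Flag any result above ~1200 MBps (10GbE wire speed): it must be cached.
awk '{ mbps = $NF
       if (mbps > 1200) print $0 " -> exceeds 10GbE wire speed, cached" }' /tmp/results.txt
```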

Readers interested in the hard numbers can refer to the CSV program output here.

The NFS share was also benchmarked in a similar manner with the following command:

iozone -aczR -g 2097152 -U /nfs_test_mount/ -f /nfs_test_mount/testfile -b <NAS_NAME>_NFS_EXCEL_BIN.xls > <NAS_NAME>_NFS_CSV.csv

The IOzone CSV output can be found here for those interested in the exact numbers.

A summary of the bandwidth numbers for the various tests, averaged across all file and record sizes, is provided in the table below. As noted previously, some of these numbers are skewed by caching effects; a glance at the CSV outputs linked above makes the affected entries obvious.

Synology DS2015xs - Linux Client Performance (MBps)

IOzone Test        CIFS    NFS
Init Write           86     82
Re-Write             85     82
Read                 50    120
Re-Read              50    122
Random Read          33     70
Random Write         80     82
Backward Read        32     58
Record Re-Write      56   1719*
Stride Read          45    117
File Write           85     83
File Re-Write        85     82
File Read            35     95
File Re-Read         36     97

*: Benchmark number skewed due to caching effect
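Per-test averages like those in the table can be reproduced from the raw CSV dumps. Below is a minimal sketch assuming a simplified three-column layout (test name, file size, throughput in KBps); the actual IOzone CSV output is a wider matrix of record sizes versus file sizes per test, so it needs more parsing:

```shell
# Hypothetical simplified CSV; real IOzone CSV output is a matrix of
# record sizes vs. file sizes per test and requires more parsing.
cat > /tmp/iozone_sample.csv <<'EOF'
test,filesize_kb,throughput_kbps
write,1024,88064
write,2048,86016
read,1024,51200
read,2048,51200
EOF

# Average the throughput per test and convert KBps to MBps.
awk -F, 'NR > 1 { sum[$1] += $3; n[$1]++ }
         END { for (t in sum) printf "%s %.1f MBps\n", t, sum[t] / n[t] / 1024 }' \
    /tmp/iozone_sample.csv | sort
```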

49 Comments


  • Essence_of_War - Friday, February 27, 2015 - link

    Ganesh,

    I understand why you used 120 GB SSDs for performance to try to capture the maximum throughput of the 10GbE, but I was confused to see that you stuck with those for things like raid expansion/rebuild etc.

    Was it a time constraint, or is this a change to the review platform in general? Is 8x small capacity SSDs in a RAID-5 an effective benchmark of RAID-5 rebuild times?
  • DanNeely - Friday, February 27, 2015 - link

    When Ganesh reviewed the DS1815 using HDDs for the rebuild it took almost 200 hours to do all the rebuild tests. (Probably longer due to delays between when one finished and the next was started.) That sort of test is prohibitively time consuming.

    http://www.anandtech.com/show/8693/synology-ds1815...
  • Essence_of_War - Friday, February 27, 2015 - link

    Yikes, when you put it in that context, that makes a lot more sense. I think we can reasonably extrapolate for larger capacities from the best-case-scenario SSDs.
  • DanNeely - Friday, February 27, 2015 - link

    Repeating a request from a few months back: Can you put together something on how long/how well COTS NAS vendors provide software/OS updates for their products?
  • DigitalFreak - Friday, February 27, 2015 - link

    Synology is still releasing new OS updates for the 1812+ which is 3 years old.
  • DanNeely - Saturday, February 28, 2015 - link

    I poked around on their site since only 3 years surprised me; it looks like they're pushing full OS updates (at least by major version number, I can't tell about feature/app limits) as far back as the 2010 model with occasional security updates landing a few years farther back.

    That's long enough to make it to the upslope on the HDD failure bathtub curve, although I'd appreciate a bit longer because with consumer turnkey devices, I know a lot of the market won't be interested in a replacement before the old one dies.
  • M4stakilla - Friday, February 27, 2015 - link

    Currently I have a desktop with an LSI MegaRAID and 8 desktop HDDs.
    This is nice and comfortably fast (500MB/sec+) for storing all my media.

    Soon I will move from my apartment to a house and I will need to "split up" this desktop into a media center, a desktop PC and preferably some external storage system (my desktop uses quite some power being on 24/7).

    I'd like this data to remain available at a similar speed.

    I've been looking into a NAS, but either it is too expensive (like the $1400 NAS above) or it is horribly slow (1gbit).

    Does anyone know any alternatives that can handle at least 500MB/sec and (less important, but still...) a reasonable access time?
    A small i3 / celeron desktop connected with something other than ethernet to the desktop? USB3.1 (max cable length?) Some version of eSATA? Something else? Would be nice if I could re-use my LSI megaRAID.

    Anyone have ideas?
  • SirGCal - Friday, February 27, 2015 - link

    Honestly, for playing media, you don't need speed. I have two 8 drive rigs myself, one with an LSI card and RAID 6 and one using ZFS RAIDZ2. Even hooked up to just a 1G network, it's still plenty fast to feed multiple computers live streaming BluRay content. Use 10G networks if you have the money or just chain multiple 1G's together within the system to enhance performance if you really need to. I haven't needed to yet and I host the entire house right now of multiple units. I can hit about 7 full tilt before the network would become an issue.

    If you're doing something else more direct that needs performance, you might consider something else than a standard network connection. But for most people, a 4-port 1G PCIe card with teaming would be beyond overkill for the server.
  • M4stakilla - Sunday, March 1, 2015 - link

    I do not need this speed for playing media, of course ;)
    I need it for working with my media...

    And yes, I do need it for that... it is definitely not a luxury...
  • SirGCal - Sunday, March 1, 2015 - link

    Ahh, I work my media on my regular gaming rig and then just move the files over to the server when I'm finished with them. However, without using something specific like Thunderbolt, your cheapest options (though not really CHEAP) might still be using two of the 4-port teamed 1G connections. That should give you ~400MB/s throughput, since I get about 110MB/s with a single 1G line. Teaming loses a tiny bit, though, and you'd need it at both ends. Or get a small 10G network going. I got a switch from a company sale for < $200; cards are still expensive though, realistically your most expensive option. But a single connection gets you ~1100MB/s throughput.

    I plan on getting the new Samsung SM951. Given 2G reads, something like 1.5G writes, that might be your cheapest option. Even if you need to get a capable M.2 PCIe riser card to use it. Then you just have transfer delays to/from the server. Unless 512G isn't enough cache space for work (good lord). Again, something like that might be your cheapest option if plausible.
