Single Client Performance - CIFS and NFS on Linux

A CentOS 6.2 virtual machine was used to evaluate the NFS and CIFS performance of the NAS when accessed from a Linux client. We chose IOzone as the benchmark for this case. To standardize testing across multiple NAS units, we mount the CIFS and NFS shares at startup with the following /etc/fstab entries.

//<NAS_IP>/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER cifs rw,username=guest,password= 0 0

<NAS_IP>:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER nfs rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=<NAS_IP>,mountvers=3,mountproto=udp,local_lock=none,addr=<NAS_IP> 0 0
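
With these entries in place, the shares can be brought up and verified without a reboot; a minimal sketch (the mount paths are the placeholders from the entries above):

# Mount everything listed in /etc/fstab, then confirm both shares are up
sudo mount -a
mount | grep -E 'type (cifs|nfs)'
df -h /PATH_TO_LOCAL_MOUNT_FOLDER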

The following IOzone command was used to benchmark the CIFS share:

iozone -aczR -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile -b <NAS_NAME>_CIFS_EXCEL_BIN.xls > <NAS_NAME>_CIFS_CSV.csv

IOzone provides benchmark numbers for a multitude of access scenarios with varying file sizes and record lengths. Some of these are very susceptible to caching effects on the client side, which is evident in some of the graphs in the gallery below.
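
Where caching effects need to be minimized, two common approaches (sketched below with the same placeholder paths; the drop_caches step requires root) are flushing the Linux page cache between runs, or passing IOzone's -I flag to request direct I/O where the mounted filesystem supports it:

# Flush the client-side page cache before a run
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches

# Or ask IOzone to bypass the page cache with O_DIRECT (-I), support permitting
iozone -aczRI -g 2097152 -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile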

Readers interested in the hard numbers can refer to the CSV program output here.

The NFS share was also benchmarked in a similar manner with the following command:

iozone -aczR -g 2097152 -U /nfs_test_mount/ -f /nfs_test_mount/testfile -b <NAS_NAME>_NFS_EXCEL_BIN.xls > <NAS_NAME>_NFS_CSV.csv

The IOZone CSV output can be found here for those interested in the exact numbers.

A summary of the bandwidth numbers for the various tests, averaged across all file and record sizes, is provided in the table below. As noted previously, some of these numbers are skewed by caching effects; a glance at the raw CSV outputs linked above makes the affected entries obvious.

Synology DS2015xs - Linux Client Performance (MBps)

IOzone Test          CIFS     NFS
Init Write             86      82
Re-Write               85      82
Read                   50     120
Re-Read                50     122
Random Read            33      70
Random Write           80      82
Backward Read          32      58
Record Re-Write        56    1719*
Stride Read            45     117
File Write             85      83
File Re-Write          85      82
File Read              35      95
File Re-Read           36      97

*: Benchmark number skewed due to caching effects
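
The per-test averages in the table above can be reproduced from the CSV outputs; a minimal awk sketch, assuming the report layout IOzone's -R flag writes (a quoted header row of record sizes, then one row per file size whose first field is the quoted file size), shown here for the writer test:

awk '
  /Writer report/ { grab = 1; next }              # start of the block we want
  grab && /report/ { exit }                       # the next report header ends it
  grab && $1 ~ /^"[0-9]+"$/ && $2 !~ /^"/ {       # data rows: "filesize" v1 v2 ...
    for (i = 2; i <= NF; i++) { sum += $i; n++ }
  }
  END { if (n) printf "Average write: %.0f KBps\n", sum / n }
' <NAS_NAME>_CIFS_CSV.csv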
Comments

  • Dug - Saturday, February 28, 2015 - link

    Actually, RAID 10 is used far more than RAID 5 or 6, with RAID 5 not even being listed as an option with Dell anymore.
    The random write IOPS loss from RAID 6 is not worth it vs RAID 10.
    Rebuild times are 300% faster with RAID 10.

    The marginal cost of adding another pair of drives to increase the RAID10 array would be easier than trying to increase IO performance later on a RAID6 array.

    But then again, this is mostly for combining OS, apps, and storage (VM). For just storage, it may not make any difference depending on how many users or the application type.
  • SirGCal - Sunday, March 1, 2015 - link

    That's missing the point entirely. If you lose a drive from each subset of RAID 10, you're done. It's basically a RAID 0 array mirrored to another one (RAID 1). You could lose one entire array and be fine, but lose one disk out of the working array and you're finished. The point of RAID 6 is that you can lose any 2 disks and still operate. So the most likely scenario is you lose one, replace it, and another fails while the rebuild is running.

    RAID 0 is pure performance, RAID 1 is drive-for-drive mirroring, RAID 10 is a combination of the two, and RAID 5 offers one-drive (any) redundancy, which is not as useful anymore. RAID 6 offers two. The other factor is that you lose less storage room with RAID 6 than with RAID 10: more drive security, less storage loss. More overhead, sure, but that's still nothing for the small business or home user's media storage. So, assuming 4TB drives x 8 drives... RAID 6 = 24TB of usable storage space (well, more like 22, but we're doing simple math here). RAID 10 = 16TB. And I'm all about huge storage with as much security as reasonably possible.

    And who gives a crap what Dell thinks anyhow? We never had more trouble with our hardware than during the few years the company switched to them, and we promptly switched away a few years after.
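
For readers following the arithmetic in the comment above, the usable-capacity comparison is easy to sketch (plain parity math, ignoring filesystem overhead):

# 8 x 4TB drives: RAID 6 keeps N-2 drives of data, RAID 10 keeps N/2
DRIVES=8; SIZE_TB=4
echo "RAID 6:  $(( (DRIVES - 2) * SIZE_TB )) TB usable"   # 24 TB
echo "RAID 10: $(( DRIVES / 2 * SIZE_TB )) TB usable"     # 16 TB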
  • DigitalFreak - Monday, March 2, 2015 - link

    You are confusing RAID 0+1 with RAID 10 (or 1+0). http://www.thegeekstuff.com/2011/10/raid10-vs-raid...
    0+1 = Striped then mirrored
    1+0 = Mirrored then striped
  • Jaybus - Monday, March 2, 2015 - link

    RAID 10 is not exactly 1+0, at least not in the Linux kernel implementation. In any case, RAID 10 can have more than 2 copies of every chunk, depending on the number of available drives. It is a tradeoff between redundancy and disk usage. With 2 copies, every chunk is safe from a single disk failure and the array size is half of the total drive capacity. With 3, every chunk is safe from a two-disk failure, but the array size is down to 1/3 of the total capacity. It is not correct to state that RAID 10 cannot withstand two-drive failures. And since not all chunks are on all disks, a RAID 10 array may survive a multi-disk failure even with 2 copies; it is just not guaranteed to unless copies > 2. A positive for RAID 10 is that a degraded array generally suffers no corresponding performance degradation.
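
For context, the copy count Jaybus describes is exposed in the Linux md implementation through mdadm's layout parameter; a minimal sketch (device names are illustrative):

# Six-disk md RAID 10 keeping three "near" copies of every chunk: any
# two disks can fail, but usable size drops to 1/3 of raw capacity.
sudo mdadm --create /dev/md0 --level=10 --layout=n3 \
    --raid-devices=6 /dev/sd[b-g]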
  • questionlp - Friday, February 27, 2015 - link

    There's the FreeNAS Mini that can be ordered via Amazon. I think you can order it sans drives or pre-populated with four drives. I've been considering getting one, but I don't know how well they perform vs a Syn or other COTS NAS boxen.
  • usernametaken76 - Friday, February 27, 2015 - link

    iXsystems sells a few different lines of ZFS capable hardware. The FreeNAS Mini which was mentioned wouldn't compete with this unit as it is more geared towards the home user. I see this product as more SOHO oriented than consumer level kit. The TrueNAS products sold by iXsystems are much more expensive than the consumer level gear, but you get what you pay for (backed by expert FreeBSD developers, FreeNAS developers, quality support.)
  • zata404 - Sunday, March 1, 2015 - link

    The short answer is no.
  • bleppard - Monday, March 2, 2015 - link

    Infortrend has a line of NAS that use ZFS. The EonNAS Pro 850 most closely lines up with the NAS under review in this article. Infortrend's NAS boxes seem to have some pretty advanced features. I would love to have Anandtech review them.
  • DanNeely - Monday, March 2, 2015 - link

    I'd be more interested in seeing a review of the 210/510 because they more closely approximate mainstream SOHO NASes in specifications; although at $500/$700 they're still a major step up in price over midrange QNap/Synology units.

    It's not immediately clear from their documentation, so I'm also curious whether they're running a stock version of OpenSolaris that allows easy patching from Oracle's repositories, or have customized it enough to make customers dependent on them for major OS updates.
  • DanNeely - Monday, March 2, 2015 - link

    Also of interest in those models would be performance scaling on more modest hardware; the x10 units only have Bay Trail-based processors.
