Multi-Client Performance - CIFS

We put the Synology RS10613xs+ through a set of IOMeter tests with a CIFS share accessed from up to 25 VMs simultaneously. The following four graphs show the total available bandwidth and the average response time while the NAS is subjected to different types of workloads through IOMeter. IOMeter also reports various other metrics of interest, such as maximum response time, read and write IOPS, and separate read and write bandwidth figures.
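For readers who want to do a similar roll-up from their own runs, here is a minimal sketch (not the review's actual tooling) of aggregating per-VM IOMeter exports into a total bandwidth figure and an IO-weighted average response time. The directory layout and column names are assumptions for illustration.

```python
import csv
import glob

# Sketch: aggregate per-client IOMeter summaries into the kind of totals plotted below.
# "results/vm*.csv" and the column names are hypothetical, not IOMeter's exact export schema.
total_mbps = 0.0
total_iops = 0.0
weighted_resp = 0.0

for path in glob.glob("results/vm*.csv"):           # one summary CSV per client VM (assumed layout)
    with open(path, newline="") as f:
        row = next(csv.DictReader(f))                # assume a single summary row per file
        mbps = float(row["MBps"])                    # assumed column: total bandwidth (MB/s)
        iops = float(row["IOps"])                    # assumed column: total IOPS
        resp_ms = float(row["AvgResponseTimeMs"])    # assumed column: average response time (ms)
        total_mbps += mbps
        total_iops += iops
        weighted_resp += resp_ms * iops              # weight each client's latency by its IO count

avg_resp_ms = weighted_resp / total_iops if total_iops else 0.0
print(f"Aggregate bandwidth: {total_mbps:.1f} MB/s, average response time: {avg_resp_ms:.2f} ms")
```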

We put the NAS through this evaluation in two modes. In the first, we teamed two of the 1 GbE ports and used the others as redundant links (with the 10 GbE ports disconnected). In the second, we teamed all the ports together to provide a link theoretically capable of up to 24 Gbps. The graphs below present the results.
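As a quick sanity check on that 24 Gbps figure, here is a small sketch assuming the unit's two 10 GbE ports and four 1 GbE ports are all teamed (the port mix is inferred from the numbers above, and the conversion ignores protocol overhead):

```python
# Theoretical aggregate capacity of the teamed ports (inferred mix: 2x 10 GbE + 4x 1 GbE).
port_speeds_gbps = [10, 10, 1, 1, 1, 1]

aggregate_gbps = sum(port_speeds_gbps)             # 24 Gbps theoretical link capacity
theoretical_mb_per_s = aggregate_gbps * 1000 / 8   # ~3000 MB/s ceiling, ignoring overhead

print(aggregate_gbps, theoretical_mb_per_s)        # 24 3000.0
```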

The four graphs, by workload:

  • Synology RS10613xs+ Multi-Client CIFS Performance - 100% Sequential Reads
  • Synology RS10613xs+ Multi-Client CIFS Performance - Max Throughput - 50% Reads
  • Synology RS10613xs+ Multi-Client CIFS Performance - Random 8K - 70% Reads
  • Synology RS10613xs+ Multi-Client CIFS Performance - Real Life - 65% Reads

Readers interested in the actual values can refer to our evaluation metrics tables available here (two of the 1 Gbps ports teamed, with the others left unconnected), here (two of the 10 Gbps ports teamed, with the others left unconnected) and here (a 24 Gbps uplink, with all available network ports teamed together).

The graphs for the QNAP TS-EC1279U-RP as well as the Synology DS1812+ are also presented for reference, but do remember that the QNAP unit had twelve drives in RAID-5 compared to ten here. The DS1812+ was also evaluated with hard drives in RAID-5 in its eight bays. In addition, none of the other units were equipped with 10 GbE links. With speeds reaching up to 800 MBps in RAID-5 for certain access patterns, the RS10613xs+ is, by far, the fastest NAS we have evaluated in our labs to date. Synology claims speeds of up to 2000 MBps, and this is definitely possible in other RAID configurations with specific access patterns.
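To put those numbers in the context of the network configurations tested, a rough unit conversion (decimal units, as network gear is rated, and ignoring protocol overhead) is shown below:

```python
# Relate measured and claimed throughput (MB/s) to link capacity (Gbps).
def mb_per_s_to_gbps(mb_per_s):
    return mb_per_s * 8 / 1000

print(mb_per_s_to_gbps(800))    # 6.4 Gbps - fits within a single 10 GbE link, let alone the 24 Gbps team
print(mb_per_s_to_gbps(2000))   # 16.0 Gbps - the claimed 2000 MBps still fits under two teamed 10 GbE ports
```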

Comments

  • iAPX - Thursday, December 26, 2013 - link

    2000+ MB/s ethernet interface (2x10Gb/s), 10 hard-drives able to deliver at least 500MB/s EACH (grand total of 5000MB/s), Xeon quad-core CPU, and tested with ONE client, it delivers less than 120MB/s?!?
    That's what I expect from a USB 3 2.5" external hard-drive, not a SAN of this price, it's totally deceptive!
  • Ammaross - Thursday, December 26, 2013 - link

    Actually, 120MB/s is remarkably exactly what I would expect from a fully-saturated 1Gbps link (120MB/s * 8 bits = 960Mbps). Odd how that works out.
  • xxsk8er101xx - Friday, December 27, 2013 - link

    That's because the PC only has a gigabit NIC. That's actually what you should expect.
  • BrentfromZulu - Thursday, December 26, 2013 - link

    For the few who know, I am the Brent that brought up Raid 5 on the Mike Tech Show (saying how it is not the way to go in any case)

    Raid 10 is the performance king, Raid 1 is great for cheap redundancy, and Raid 10, or OBR10, should be what everyone uses in big sets. If you need all the disk capacity, use Raid 6 instead of Raid 5, because with Raid 5, if a drive fails during a rebuild, you lose everything. Raid 6 is better because you can lose a drive during the rebuild. Rebuilding is a scary process with Raid 5, but with Raid 1 or 10 it is literally copying data from one disk to another.

    Raid 1 and Raid 10 FTW!
  • xdrol - Thursday, December 26, 2013 - link

    From the drives' perspective, rebuilding a RAID 5 array is exactly the same as rebuilding a RAID 1 or 10 array: read the whole disk(s) (or, to be more exact, the sectors with data) once, and write the whole target disk once. It is only different for the controller. I fail to see why one is scarier than the other.

    If your drive fails while rebuilding a RAID 1 array, you are exactly as screwed. The only reason R5 is worse here is that you have n-1 disks unprotected while rebuilding, not just one, giving you approximately (negligibly smaller than) n-1 times the data loss chance.
  • BrentfromZulu - Friday, December 27, 2013 - link

    Rebuilding a Raid 5 requires reading data from all of the other disks, whereas Raid 10 requires reading data from just one other drive. Raid 1 rebuilds are not complex, nor are Raid 10 rebuilds. Raid 5/6 rebuilding is complex, requires activity from the other disks, and because of that complexity has a higher chance of failure.
  • xxsk8er101xx - Friday, December 27, 2013 - link

    You take a big hit on performance with RAID 6.
  • Ajaxnz - Thursday, December 26, 2013 - link

    I've got one of these with 3 extra shelves of disks and 1TB of SSD cache.
    There's a limit of 3 shelves in a single volume, but 120TB (3 shelves of 12 4TB disks, RAID 5 on each shelf) with the SSD cache performs pretty well.
    For reference, NFS performance is substantially better than CIFS or iSCSI.

    It copes fine with the 150 virtual machines that support a 20 person development team.

    So much cheaper than a NetApp or similar - but I haven't had a chance to test the multi-NAS failover to see if you truly get enterprise-quality resilience.
  • jasonelmore - Friday, December 27, 2013 - link

    well at least half a dozen morons got schooled on the different types of RAID arrays. gg, always glad to see the experts put the "less informed" (okay i'm getting nicer) ppl in their place.
  • Marquis42 - Friday, December 27, 2013 - link

    I'd be interested in knowing greater detail on the link aggregation setup. There's no mention of the load balancing configuration in particular. The reason I ask is that it's probably *not* a good idea to bond 1Gbps links with 10Gbps links in the same bundle unless you have access to more advanced algorithms (and even then I wouldn't recommend it). The likelihood of limiting a single stream to ~1Gbps is fairly good, and may limit overall throughput depending on the number of clients. It's even possible (though admittedly statistically unlikely) that you could limit the entirety of the system's network performance to saturating a single 1GbE connection.
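To illustrate the load-balancing concern raised in the comment above: common LACP hash policies (layer2, layer3+4) map each flow to exactly one member link regardless of that member's speed, so a single CIFS session can end up pinned to a 1 Gbps member of a mixed bond. The sketch below is purely illustrative; the hash function, addresses, and port mix are stand-ins, not the NAS's or switch's actual algorithm.

```python
import hashlib

# Illustrative only: hash-based flow distribution across a bond of mixed-speed members.
member_speeds_gbps = [10, 10, 1, 1, 1, 1]   # hypothetical "team everything" bond from the review

def member_for_flow(src_ip, dst_ip, src_port, dst_port):
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)   # stand-in for the NIC/switch transmit hash
    return digest % len(member_speeds_gbps)          # each flow always lands on one member link

# A single CIFS session (one 5-tuple) is pinned to whichever member it hashes to:
idx = member_for_flow("192.168.1.50", "192.168.1.10", 51515, 445)
print(f"Flow pinned to member {idx}, capped at ~{member_speeds_gbps[idx]} Gbps")
```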
