Multi-Client Performance - CIFS

We put the Synology RS10613xs+ through a set of IOMeter tests with a CIFS share being accessed by up to 25 VMs simultaneously. The four graphs below show the total available bandwidth and the average response time under different types of IOMeter workloads. IOMeter also reports various other metrics of interest, such as maximum response time, read and write IOPS, and separate read and write bandwidth figures.
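
For readers who want a feel for what the multi-client sequential read workload looks like at the file level, below is a minimal Python sketch of many clients reading from a mounted share in parallel. The published numbers come from IOMeter itself; the mount point, file names, block size, and duration here are illustrative assumptions only.

```python
import os
import threading
import time

MOUNT = "/mnt/cifs_share"   # hypothetical mount point of the NAS share
BLOCK = 64 * 1024           # assumed 64KB sequential read block size
DURATION = 30               # seconds per run
CLIENTS = 25                # one thread standing in for each VM

totals = [0] * CLIENTS      # bytes read, one slot per client thread

def sequential_reader(idx: int) -> None:
    """Read one pre-created test file sequentially, looping until time runs out."""
    path = os.path.join(MOUNT, f"testfile_{idx}.bin")  # assumed to exist
    deadline = time.time() + DURATION
    with open(path, "rb") as f:
        while time.time() < deadline:
            data = f.read(BLOCK)
            if not data:      # wrap around at end of file
                f.seek(0)
                continue
            totals[idx] += len(data)

threads = [threading.Thread(target=sequential_reader, args=(i,))
           for i in range(CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"Aggregate: {sum(totals) / DURATION / 1e6:.1f} MBps across {CLIENTS} clients")
```

A real IOMeter run additionally varies the read/write mix and access pattern per workload (the four patterns graphed below), but the aggregate-bandwidth measurement is the same idea.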

We put the NAS through this evaluation in three modes. In the first, we teamed two of the 1 Gbps ports and used the others as redundant links (with the 10G ports disconnected). In the second, we teamed the two 10 Gbps ports together (with the 1 Gbps ports left unconnected). In the third, we teamed everything together to provide a link theoretically capable of up to 24 Gbps. The graphs below present the results.
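
As a quick sanity check on that ceiling, the arithmetic over the port inventory (two 10 GbE ports plus four 1 GbE ports, per the configurations described below) is simple; a minimal sketch:

```python
# Port inventory assumed from the teaming configurations described in the text:
# two 10 GbE ports plus four 1 GbE ports.
ports_gbps = [10, 10, 1, 1, 1, 1]

team_gbps = sum(ports_gbps)            # 24 Gbps aggregate line rate
team_mbps = team_gbps * 1000 / 8       # ~3000 MBps, ignoring protocol overhead

print(f"{team_gbps} Gbps aggregate ~= {team_mbps:.0f} MBps theoretical ceiling")
```

Keep in mind that link aggregation balances flows, not packets: any single client connection still rides one member link, so the 24 Gbps figure is only approachable with many simultaneous clients.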

[Graphs] Synology RS10613xs+ Multi-Client CIFS Performance, one per workload:

  • 100% Sequential Reads
  • Max Throughput - 50% Reads
  • Random 8K - 70% Reads
  • Real Life - 65% Reads

Readers interested in the actual values can refer to our evaluation metrics tables, available here (two of the 1 Gbps ports teamed together, the others left unconnected), here (two of the 10 Gbps ports teamed together, the others left unconnected) and here (a 24 Gbps uplink, with all available network ports teamed together).

The graphs for the QNAP TS-EC1279U-RP as well as the Synology DS1812+ are also presented as reference points, but do remember that the QNAP unit had twelve drives in RAID-5 compared to ten here, and the DS1812+ was evaluated with hard drives in RAID-5 across its eight bays. In addition, neither of the other units was equipped with 10 Gb links. With speeds reaching up to 800 MBps in RAID-5 for certain access patterns, the RS10613xs+ is, by far, the fastest NAS we have evaluated in our labs yet. Synology claims speeds of up to 2000 MBps, and this is definitely possible in other RAID configurations with specific access patterns.
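
To relate those throughput figures back to the network configurations, a quick conversion (ignoring SMB and TCP overhead) shows why the observed peak fits within a single 10 GbE link while the claimed maximum needs the 10 GbE team:

```python
def mbps_to_gbps(mbps: float) -> float:
    """Convert MBps (megabytes per second) to Gbps (gigabits per second)."""
    return mbps * 8 / 1000

for label, mbps in [("Observed RAID-5 peak", 800), ("Synology's claim", 2000)]:
    print(f"{label}: {mbps} MBps = {mbps_to_gbps(mbps):.1f} Gbps")

# Observed RAID-5 peak: 800 MBps  = 6.4 Gbps  -> fits in one 10 GbE link
# Synology's claim:     2000 MBps = 16.0 Gbps -> needs both 10 GbE links teamed
```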

Comments

  • Gigaplex - Saturday, December 28, 2013 - link

    No, you recover from backup. RAID is to increase availability in the enterprise; it is not a substitute for a backup.
  • P_Dub_S - Thursday, December 26, 2013 - link

    Please read that 3rd link and tell me if RAID 5 makes any sense with today's drive sizes and costs.
  • Gunbuster - Thursday, December 26, 2013 - link

    Re: that 3rd link. Who calls it resilvering? Sounds like what a crusty old Unix sysadmin with no current hardware knowledge would call it.
  • P_Dub_S - Thursday, December 26, 2013 - link

    Whatever the name, it doesn't really matter; it's the numbers that count, and at today's TB drive sizes RAID 5 makes zero sense.
  • Kheb - Saturday, December 28, 2013 - link

    No it doesn't. Not at all. First, you are taking into account only huge arrays used to store data and not to run applications (so basically only mechanical SATA). Second, you are completely ignoring costs (RAID 5 or RAID 6 vs RAID 10). Third, you are assuming the RAID 5 itself is not backed up or paired with some sort of software/hardware redundancy or tiering at lower levels (see SANs).

    So while I can agree that THEORETICALLY having RAID 10 everywhere would indeed be safer, the costs (HDDs + enclosures + controllers + backplanes) mean that, this time for real, it makes zero sense.
  • Ammaross - Thursday, December 26, 2013 - link

    "Resilvering" is the ZFS term for rebuilding data on a volume. It's very much a current term still, but it does give us an insight into the current bias of the author, who apparently favors ZFS for his storage until something he proposes as better is golden.
  • hydromike - Thursday, December 26, 2013 - link

    How many times have you had to rebuild a RAID5 in your lifetime? I have done it over 100 times, on hardware from over 10 major HARDWARE RAID vendors.

    "And when you go to rebuild that huge RAID 5 array and another disk fails your screwed."

    The other drive failing is a very small possibility in the enterprise environment I was talking about, because of enterprise-grade drives vs consumer. That is why most either take the RAID offline for a much faster rebuild or let it rebuild online. Besides, during an online rebuild the RAID is still functional, just degraded.

    Also, my point is that lots of us still have hardware that is 2-5 years old and still just working. The newest arrays that I have set up of late are 20 to 150 TB in size, and we went with FreeNAS with ZFS, which puts all others to shame. NetApp storage appliances' rebuild times are quite fast: 6-12 hours for 40TB LUNs. It all depends upon the redundancy that you need. Saying that RAID 5 needs to die is asinine. What if the data you are storing is all available in the public domain, but a local copy speeds up data access rates? Rebuilding a degraded LUN is faster than retrieving all of the data from the public domain again. There are many use cases for each RAID level; just because one level does not fit YOUR uses does not mean it needs to die!
  • P_Dub_S - Thursday, December 26, 2013 - link

    So if you were to buy this NAS for a new implementation, would you even consider throwing 10-12 disks in it and building a RAID 5 array? Just asking. Even in your own post you state how you use FreeNAS with ZFS for your new arrays. RAID 5 is the dodo here; let it go extinct.
  • Ammaross - Thursday, December 26, 2013 - link

    For all you know, he's running ZFS using raidz1 (essentially RAID5). Also, if you say RAID5 needs to die, must one then assume you also think RAID0 is beyond worthless, since it has NO redundancy? Obviously, you can (hopefully) cite the use-cases for RAID0. Your bias just prevents you from seeing the usefulness of RAID5.
  • xxsk8er101xx - Friday, December 27, 2013 - link

    It does happen, though. I've had to rebuild 2 servers this year alone because of multiple drive failures. One server had 3 drives fail, but that's because of neglect. We engineers only have so much time, especially with the introduction of lean manufacturing.

    RAID 5 + global spare, though, is usually a pretty safe bet if it's a critical app server. Otherwise RAID 5 is perfectly fine.
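
The drive-size argument the commenters are trading above can be made concrete with the unrecoverable-read-error (URE) math behind the "RAID 5 is dead" position. Below is a minimal sketch, assuming the commonly quoted 1-per-10^14-bits URE rating for consumer drives and independent read errors; the drive counts and capacities are illustrative.

```python
def rebuild_survival(surviving_drives: int, drive_tb: float,
                     ure_rate: float = 1e-14) -> float:
    """Probability of reading every surviving drive without hitting a URE
    during a RAID 5 rebuild (independent-error model)."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8  # TB -> bits
    return (1 - ure_rate) ** bits_read

for drives, tb in [(5, 1.0), (10, 4.0)]:
    p = rebuild_survival(drives, tb)
    print(f"{drives} surviving drives x {tb} TB: "
          f"{p:.1%} chance of a clean rebuild")
# 5 x 1 TB:  ~67% chance of a clean rebuild
# 10 x 4 TB: ~4% chance of a clean rebuild
```

Swapping in the 10^-15 rating typical of enterprise drives raises the 10 x 4 TB case to roughly a 73% chance of a clean rebuild, which is the substance of the enterprise-versus-consumer point made above; and none of this matters much if, as also noted, the array is merely a local copy of data that exists elsewhere.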
