Multi-Client Performance - CIFS on Windows

We put the Synology DS2015xs through some IOMeter tests with a CIFS share being accessed from up to 25 VMs simultaneously. The following four graphs show the total available bandwidth and the average response time while the unit is subjected to different types of workloads through IOMeter. The tool also reports other metrics of interest, such as maximum response time, read and write IOPS, and separate read and write bandwidth figures. Detailed listings of the IOMeter benchmark numbers (including IOPS and maximum response times) for each configuration are linked below; a short sketch after the list shows how the per-client numbers roll up into the aggregate figures:

Synology DS2015xs - 2x 10G Multi-Client CIFS Performance - 100% Sequential Reads

Synology DS2015xs - 2x 10G Multi-Client CIFS Performance - Max Throughput - 50% Reads

Synology DS2015xs - 2x 10G Multi-Client CIFS Performance - Random 8K - 70% Reads

Synology DS2015xs - 2x 10G Multi-Client CIFS Performance - Real Life - 65% Reads
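
The aggregation itself is straightforward: total bandwidth and IOPS are sums across clients, while the average response time is weighted by each client's IOPS. Here is a minimal Python sketch with hypothetical per-client values (IOMeter reports these figures in its result files; the numbers below are illustrative placeholders, not from our testing):

```python
# Roll up per-client IOMeter results into the aggregate figures plotted above.
# The per-client tuples are hypothetical placeholders.
clients = [
    # (bandwidth_MBps, avg_response_ms, iops) -- one entry per VM
    (28.1, 4.2, 3597),
    (27.6, 4.4, 3533),
    (26.9, 4.5, 3443),
]

total_bandwidth = sum(bw for bw, _, _ in clients)
total_iops = sum(iops for _, _, iops in clients)
# Weight each client's response time by its IOPS so that busier clients
# count proportionally more in the average.
avg_response_ms = sum(rt * iops for _, rt, iops in clients) / total_iops

print(f"Total bandwidth: {total_bandwidth:.1f} MBps")
print(f"Average response time: {avg_response_ms:.2f} ms")
```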

We see that sequential accesses saturate around 700 MBps, similar to what we found in our evaluation of the unit as a DAS. In the Random 8K 70% Reads case, there is a sudden drop once more than five clients come into the mix - we believe this is due to the smbd processes completely saturating the CPU cores. On the positive side, the bandwidth numbers and response times are excellent across the board - better than all the other NAS units that we are comparing against.
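
The CPU-saturation hypothesis is easy to check on any Samba server with shell access (the DS2015xs runs a Linux-based OS, though we did not run this during the review). A minimal sketch using Python's psutil package, assuming it is installed:

```python
# Sample aggregate CPU usage of all smbd processes over a 5-second window.
# If the total approaches (core count * 100%), smbd has saturated the CPU.
import time
import psutil

smbd = [p for p in psutil.process_iter(['name']) if p.info['name'] == 'smbd']

for p in smbd:
    p.cpu_percent(None)  # prime the per-process counters

time.sleep(5)            # measurement window

total = sum(p.cpu_percent(None) for p in smbd)
cores = psutil.cpu_count()
print(f"smbd CPU usage: {total:.0f}% of {cores * 100}% available")
```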

Comments

  • Essence_of_War - Friday, February 27, 2015 - link

    Ganesh,

    I understand why you used 120 GB SSDs for the performance tests, to try to capture the maximum throughput of the 10GbE links, but I was confused to see that you stuck with those for things like RAID expansion/rebuild, etc.

    Was it a time constraint, or is this a change to the review platform in general? Is 8x small capacity SSDs in a RAID-5 an effective benchmark of RAID-5 rebuild times?
  • DanNeely - Friday, February 27, 2015 - link

    When Ganesh reviewed the DS1815 using HDDs, it took almost 200 hours to do all the rebuild tests (probably longer, due to delays between when one finished and the next was started). That sort of test is prohibitively time-consuming.

    http://www.anandtech.com/show/8693/synology-ds1815...
  • Essence_of_War - Friday, February 27, 2015 - link

    Yikes, when you put it in that context, it makes a lot more sense. I think we can reasonably extrapolate to larger capacities from the best-case-scenario SSDs.
  • DanNeely - Friday, February 27, 2015 - link

    Repeating a request from a few months back: can you put together something on how long/how well COTS NAS vendors provide software/OS updates for their products?
  • DigitalFreak - Friday, February 27, 2015 - link

    Synology is still releasing new OS updates for the 1812+, which is 3 years old.
  • DanNeely - Saturday, February 28, 2015 - link

    I poked around on their site, since only 3 years surprised me; it looks like they're pushing full OS updates (at least by major version number - I can't tell about feature/app limits) as far back as the 2010 models, with occasional security updates landing a few years farther back.

    That's long enough to make it to the upslope of the HDD failure bathtub curve, although I'd appreciate a bit longer: with consumer turnkey devices, I know a lot of the market won't be interested in a replacement before the old one dies.
  • M4stakilla - Friday, February 27, 2015 - link

    Currently I have a desktop with an LSI MegaRAID and 8 desktop HDDs.
    This is nice and comfortably fast (500MB/sec+) for storing all my media.

    Soon I will move from my apartment to a house, and I will need to "split up" this desktop into a media center, a desktop PC, and preferably some external storage system (my desktop draws quite a bit of power being on 24/7).

    I'd like this data to remain available at a similar speed.

    I've been looking into a NAS, but either it is too expensive (like the $1400 NAS above) or it is horribly slow (1 Gbit).

    Does anyone know any alternatives that can handle at least 500MB/sec and (less important, but still...) a reasonable access time?
    A small i3/Celeron desktop connected to the main desktop with something other than Ethernet? USB 3.1 (max cable length?)? Some version of eSATA? Something else? It would be nice if I could re-use my LSI MegaRAID.

    Anyone have ideas?
  • SirGCal - Friday, February 27, 2015 - link

    Honestly, for playing media, you don't need speed. I have two 8-drive rigs myself, one with an LSI card and RAID 6 and one using ZFS RAIDZ2. Even hooked up to just a 1G network, either is still plenty fast to feed multiple computers live-streaming Blu-ray content. Use a 10G network if you have the money, or just team multiple 1G links together within the system to enhance performance if you really need to. I haven't needed to yet, and I host the entire house right now across multiple units. I can hit about 7 streams at full tilt before the network would become an issue.

    If you're doing something more direct that needs performance, you might consider something other than a standard network connection. But for most people, a 4-port 1G PCIe network card with teaming would be beyond overkill for the server.
  • M4stakilla - Sunday, March 1, 2015 - link

    I do not need this speed for playing media of course ;)
    I need it for working with my media...

    And yes, I do need it for that... it is definitely not a luxury...
  • SirGCal - Sunday, March 1, 2015 - link

    Ahh, I work on my media on my regular gaming rig and then just move the files over to the server when I'm finished with them. However, without using something specific like Thunderbolt, your cheapest option (though not really CHEAP) might still be two 4-port teamed 1G cards, one at each end. That should give you ~400 MB/s throughput, since I get about 110 MB/s with a single 1G line; teaming loses a tiny bit. Or get a small 10G network going. I got a switch from a company sale for < $200; the cards are still expensive though, so realistically that's your most expensive option. But a single connection gets you ~1100 MB/s throughput.

    I plan on getting the new Samsung SM951. Given ~2 GB/s reads and something like 1.5 GB/s writes, that might be your cheapest option, even if you need to get a capable M.2 PCIe riser card to use it. Then you just have transfer delays to/from the server - unless 512GB isn't enough cache space for your work (good lord).
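
As a quick sanity check, the teaming numbers in the thread above are internally consistent. Here is the back-of-the-envelope arithmetic in Python (the ~110 MB/s per-link figure is from the comment; the teaming efficiency factor is an assumption, not a measured value):

```python
# Back-of-the-envelope estimate of teamed 1GbE throughput.
single_link_MBps = 110    # real-world 1GbE throughput cited in the thread
links = 4                 # 4-port NIC with link aggregation
teaming_efficiency = 0.9  # assumed overhead for bonding/teaming

aggregate = single_link_MBps * links * teaming_efficiency
print(f"Estimated teamed throughput: {aggregate:.0f} MB/s")  # ~396 MB/s
```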
