Synology RS10613xs+: 10GbE 10-bay Rackmount NAS Review
by Ganesh T S on December 26, 2013 3:11 AM EST - Posted in
- NAS
- Synology
- Enterprise
Introduction and Setup Impressions
Our enterprise NAS reviews have so far focused on Atom-based desktop form factor systems. These units have enough performance for a moderately sized workgroup, but lack some essential features for the enterprise space, such as acceptable performance with encrypted volumes. A number of readers have written in asking for more coverage of the market straddling the high-end NAS and the NAS / SAN (storage area network) hybrid space. Models catering to this space come in the rackmount form factor and are based on more powerful processors such as Intel's Core or Xeon series.
Synology's flagship in this space over the last twelve months or so has been the RS10613xs+. Based on the Intel Xeon E3-1230 processor, this 2U rackmount system comes with twelve hot-swappable bays and 8 GB of ECC RAM (expandable to 32 GB). Both SATA and SAS disks in 3.5" as well as 2.5" form factors are supported. Ten of the bays are for primary storage; the remaining two, located behind the display module, accept 2.5" SSDs that can serve as a cache.
The specifications of the RS10613xs+ are as below:
Synology RS10613xs+ Specifications
Processor | Intel Xeon E3-1230 (4C/8T, 3.2 GHz)
RAM | 8 GB DDR3 ECC (upgradable to 32 GB)
Drive Bays | 10x 3.5"/2.5" SATA / SAS 6 Gbps HDD / SSD + 2x 2.5" SSD cache bays
Network Links | 4x 1 GbE + 2x 10 GbE (add-on PCIe card)
USB Ports | 4x USB 2.0
SAS Expansion Ports | 2x (compatible with RX1213sas)
Expansion Slots | 2x (the 10 GbE card occupies one)
VGA / Console | Reserved for maintenance
Full Specifications Link | Synology RS10613xs+ Hardware Specs
Synology is well regarded in the SMB space for the stability and the wealth of features of its units. The OS (DiskStation Manager, or DSM) is very user-friendly. We have been following the evolution of DSM over the last couple of years. The RS10613xs+ is the first unit we are reviewing with DSM 4.x, and we can say with conviction that DSM only keeps getting better.
Our only real complaint about DSM has been the lack of seamless storage pools with the ability to use a single disk across multiple RAID volumes (the type that Windows Storage Spaces provides). This is useful in, say, four-bay units, where the end user wants some data protected against a single disk failure and other data protected against the failure of two disks (the sketch below puts numbers on that trade-off). This is not a problem with the RS10613xs+, since it has plenty of bays to create two separate volumes in such a scenario. In any case, this situation is more common in the home consumer segment than in the enterprise segment the RS10613xs+ targets.
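To make the capacity / fault-tolerance trade-off concrete, here is a minimal sketch of the arithmetic involved in such volume planning. The disk counts and sizes are hypothetical, and the formulas ignore filesystem and metadata overhead:

```python
# Rough usable-capacity and fault-tolerance figures for common RAID levels.
# Illustrative only; real arrays reserve extra space for metadata and spares.

def raid_usable_tb(level: str, disks: int, disk_tb: float) -> tuple[float, int]:
    """Return (usable capacity in TB, disk failures tolerated in the worst case)."""
    if level == "RAID5":
        assert disks >= 3
        return (disks - 1) * disk_tb, 1
    if level == "RAID6":
        assert disks >= 4
        return (disks - 2) * disk_tb, 2
    if level == "RAID10":
        assert disks >= 4 and disks % 2 == 0
        return (disks // 2) * disk_tb, 1  # worst case: both halves of one mirror
    raise ValueError(f"unknown level: {level}")

# Example: splitting ten hypothetical 4 TB bays into two differently protected volumes.
for level, disks in (("RAID5", 5), ("RAID6", 5)):
    usable, tolerance = raid_usable_tb(level, disks, 4.0)
    print(f"{level} over {disks} disks: {usable:.0f} TB usable, survives {tolerance} failure(s)")
```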
The front panel has twelve bay positions arranged in three rows of four. Ten of these are 3.5" drive bays; the top two rows of the rightmost column are occupied by an LCM display panel with buttons to take care of administrative tasks. This panel can be pulled out to reveal the two caching SSD bays. On the rear side, we have redundant power supplies (integrated 1U PSUs of 400W each), a console and VGA port (not intended for use by the end consumer), 4x USB 2.0 ports, 4x 1 GbE ports (all native to the unit's motherboard) and two SAS-out expansion ports to connect up to eight RX1213sas expansion units. There is also space for a half-height PCIe card; our review unit was outfitted with a dual 10 GbE SFP+ card.
On the software side, not much has changed in the DSM 4.x UI compared to older versions, though there is definitely a more polished look and feel. For example, drag and drop is now supported when assigning disks to volumes. These minor improvements add up to a better user experience all around. The setup process is a breeze, with the unit's configuration page available on the network even in diskless mode. As the gallery below shows, the unit comes with a built-in OS that can be installed in case the unit or setup computer is not connected to the Internet / Synology's servers. A Quick Start Wizard then prompts the user to create a volume to start using the unit.
An interesting aspect of the Storage Manager is the SSD cache for boosting read performance. Automatic generation of file access statistics on a given volume helps in deciding how much cache might benefit the system; the sketch below illustrates the general idea. Volumes are part of RAID groups, and all volumes in a given RAID group are at the same RAID level. The Storage Manager also provides for configuration of iSCSI LUNs / targets and management of the disk drives (S.M.A.R.T. and other disk-specific aspects).
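How access statistics might translate into a cache-size estimate can be sketched roughly as follows. This is our own illustration of the general idea, not DSM's actual algorithm; the sample file sizes and access counts are hypothetical:

```python
# Estimate how much SSD cache captures a target fraction of reads, given
# per-file access counts (a hypothetical stand-in for DSM's access statistics).

def cache_size_for_hit_rate(files, target_hit_rate=0.9):
    """files: list of (size_gb, access_count). Returns GB of cache needed so
    that the hottest files account for target_hit_rate of all accesses."""
    total = sum(count for _, count in files)
    covered = 0
    size_gb = 0.0
    # Greedily cache the most frequently accessed files first.
    for size, count in sorted(files, key=lambda f: f[1], reverse=True):
        if covered / total >= target_hit_rate:
            break
        covered += count
        size_gb += size
    return size_gb

sample = [(50, 9000), (200, 500), (400, 300), (800, 150), (2000, 50)]
print(f"{cache_size_for_hit_rate(sample):.0f} GB of cache covers 90% of accesses")
```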
RAID expansions / migrations as well as rebuilds are handled in the Storage Manager too. The other interesting aspect is the Network section. In the gallery above, one can see that it is possible to bond all six network ports together in 802.3ad dynamic link aggregation mode. SSH access is available (as in older DSM versions). A CLI guide for working with RAID groups / volumes in an SSH session would be a welcome complement to the excellent web UI.
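One caveat with 802.3ad worth keeping in mind (and raised again in the comments below) is that the bond distributes traffic per flow, not per packet: a hash of the addresses picks one member link, so a single client stream never exceeds one link's bandwidth. Here is a minimal sketch of the idea using a simplified layer2+3 hash; the exact hash function and port layout are our assumptions, not Synology's implementation:

```python
# Simplified illustration of 802.3ad-style flow distribution: each flow is
# hashed onto exactly one member link, so one stream tops out at that link's speed.

import hashlib

LINKS = ["eth0 (1GbE)", "eth1 (1GbE)", "eth2 (1GbE)", "eth3 (1GbE)",
         "eth4 (10GbE)", "eth5 (10GbE)"]

def pick_link(src_mac: str, dst_mac: str, src_ip: str, dst_ip: str) -> str:
    """Toy layer2+3 hash policy: the same endpoints always map to the same link."""
    key = f"{src_mac}{dst_mac}{src_ip}{dst_ip}".encode()
    index = int.from_bytes(hashlib.md5(key).digest()[:4], "big") % len(LINKS)
    return LINKS[index]

# A single client's flow lands on one link -- possibly a 1GbE one:
print(pick_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:ff",
                "192.168.1.10", "192.168.1.2"))
```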
In the rest of this review, we will describe our testbed setup and present results from our evaluation of single-client performance with CIFS and NFS shares as well as iSCSI LUNs. Encryption support is also evaluated for CIFS shares. A section on performance with Linux clients follows. Multi-client performance is evaluated using IOMeter on CIFS shares. In the final section, we cover power consumption, RAID rebuild durations and other miscellaneous aspects.
51 Comments
iAPX - Thursday, December 26, 2013 - link
2000+ MB/s of Ethernet interface (2x 10Gb/s), 10 hard drives able to deliver at least 500MB/s EACH (grand total of 5000MB/s), a Xeon quad-core CPU, and tested with ONE client, it delivers less than 120MB/s?!? That's what I expect from a USB 3 2.5" external hard drive, not a SAN at this price. It's totally deceptive!
Ammaross - Thursday, December 26, 2013 - link
Actually, 120MB/s is remarkably exactly what I would expect from a fully-saturated 1Gbps link (120MB/s * 8 bits = 960Mbps). Odd how that works out.
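As a back-of-the-envelope check on that arithmetic: once Ethernet framing and IP/TCP headers are accounted for, the practical ceiling of a 1 GbE link works out to just under 120MB/s. A quick sketch, assuming standard frame sizes:

```python
# Back-of-the-envelope ceiling for TCP payload throughput on a 1 GbE link.
# Standard frame sizes; real-world numbers vary with offloads and stack tuning.

LINK_BPS = 1_000_000_000          # 1 Gb/s line rate
WIRE_BYTES = 1538                 # 1518B frame + 8B preamble + 12B inter-frame gap
PAYLOAD_BYTES = 1500 - 20 - 20    # MTU minus IPv4 and TCP headers

frames_per_sec = LINK_BPS / (WIRE_BYTES * 8)
payload_mbs = frames_per_sec * PAYLOAD_BYTES / 1e6

print(f"~{payload_mbs:.0f} MB/s max TCP payload on 1 GbE")  # ~119 MB/s
```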
xxsk8er101xx - Friday, December 27, 2013 - link
That's because the PC only has a gigabit NIC. That's actually what you should expect.
BrentfromZulu - Thursday, December 26, 2013 - link
For the few who know, I am the Brent that brought up RAID 5 on the Mike Tech Show (saying how it is not the way to go in any case).
RAID 10 is the performance king, RAID 1 is great for cheap redundancy, and RAID 10, or OBR10, should be what everyone uses in big sets. If you need all the disk capacity, use RAID 6 instead of RAID 5, because if a drive fails during a RAID 5 rebuild, you lose everything. RAID 6 is better because you can lose a drive. Rebuilding is a scary process with RAID 5, but with RAID 1 or 10 it is literally copying data from one disk to another.
RAID 1 and RAID 10 FTW!
xdrol - Thursday, December 26, 2013 - link
From the drives' perspective, rebuilding a RAID 5 array is exactly the same as rebuilding a RAID 1 or 10 array: read the whole disk(s) (or, to be more exact, the sectors with data) once, and write the whole target disk once. It is only different for the controller. I fail to see why one is scarier than the other.
If a drive fails while rebuilding a RAID 1 array, you are exactly as screwed. The only reason RAID 5 is worse here is that you have n-1 disks unprotected while rebuilding, not just one, giving you approximately (i.e., negligibly smaller than) n-1 times the data loss chance.
BrentfromZulu - Friday, December 27, 2013 - link
Rebuilding a RAID 5 array requires reading data from all of the other disks, whereas RAID 10 requires reading from just one other drive. RAID 1 rebuilds are not complex, nor are RAID 10 rebuilds. RAID 5/6 rebuilding is complex, requires activity from the other disks, and because of that complexity has a higher chance of failure.
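To put a number on the rebuild-risk argument in this thread: the more data a rebuild must read, the higher the chance of hitting an unrecoverable read error (URE) before it completes. A quick sketch, assuming the commonly quoted 1-in-10^14-bits URE spec for consumer drives; the drive counts and sizes are hypothetical:

```python
# Probability of hitting at least one unrecoverable read error (URE) during a
# rebuild, as a function of how much data must be read. Illustrative only.

URE_PER_BIT = 1e-14   # commonly quoted consumer-drive spec: 1 error per 10^14 bits

def rebuild_failure_prob(read_tb: float) -> float:
    bits = read_tb * 1e12 * 8
    return 1 - (1 - URE_PER_BIT) ** bits

# RAID 1/10 rebuild reads one 4 TB mirror; RAID 5 over 10 disks reads nine drives.
for label, read_tb in (("RAID 1/10 (read 4 TB)", 4), ("RAID 5 (read 36 TB)", 36)):
    print(f"{label}: {rebuild_failure_prob(read_tb):.1%} chance of a URE")
```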
xxsk8er101xx - Friday, December 27, 2013 - link
You take a big hit on performance with RAID 6.
Ajaxnz - Thursday, December 26, 2013 - link
I've got one of these with 3 extra shelves of disks and 1TB of SSD cache.
There's a limit of 3 shelves in a single volume, but 120TB (3 shelves of 12 4TB disks, RAID 5 on each shelf) with the SSD cache performs pretty well.
For reference, NFS performance is substantially better than CIFS or iSCSI.
It copes fine with the 150 virtual machines that support a 20-person development team.
So much cheaper than a NetApp or similar, but I haven't had a chance to test the multi-NAS failover to see whether you truly get enterprise-quality resilience.
jasonelmore - Friday, December 27, 2013 - link
Well, at least half a dozen morons got schooled on the different types of RAID arrays. gg, always glad to see the experts put the "less informed" (okay, I'm getting nicer) ppl in their place.
Marquis42 - Friday, December 27, 2013 - link
I'd be interested in knowing more about the link aggregation setup; there's no mention of the load-balancing configuration in particular. The reason I ask is that it's probably *not* a good idea to bond 1Gbps links with 10Gbps links in the same bundle unless you have access to more advanced hashing algorithms (and even then I wouldn't recommend it). The likelihood of limiting a single stream to ~1Gbps is fairly high, and that may limit overall throughput depending on the number of clients. It's even possible (though admittedly statistically unlikely) that the entirety of the system's network traffic could end up saturating a single 1GbE connection.