Synology RS10613xs+: 10GbE 10-bay Rackmount NAS Review
by Ganesh T S on December 26, 2013 3:11 AM EST
Posted in: NAS, Synology, Enterprise
Introduction and Setup Impressions
Our enterprise NAS reviews have focused on Atom-based desktop form factor systems until now. These units deliver enough performance for a moderately sized workgroup, but lack some features essential in the enterprise space, such as acceptable performance with encrypted volumes. A number of readers have written in asking for more coverage of the market straddling high-end NAS units and NAS / SAN (storage area network) hybrids. Models catering to this space come in the rackmount form factor and are based on more powerful processors such as Intel's Core or Xeon series.
Synology's flagship in this space over the last twelve months or so has been the RS10613xs+. Based on the Intel Xeon E3-1230 processor, this 2U rackmount system comes with twelve hot-swappable bays and 8 GB of ECC RAM (expandable to 32 GB). Both SATA and SAS disks in 3.5" as well as 2.5" form factors are supported. Ten of the bays accept regular drives; the remaining two 2.5" bays sit behind the display module and are dedicated to SSDs serving as a cache.
The specifications of the RS10613xs+ are as follows:
Synology RS10613xs+ Specifications

Processor | Intel Xeon E3-1230 (4C/8T, 3.2 GHz)
RAM | 8 GB DDR3 ECC (upgradable to 32 GB)
Drive Bays | 10x 3.5"/2.5" SATA / SAS 6 Gbps HDD / SSD + 2x 2.5" SSD cache bays
Network Links | 4x 1 GbE + 2x 10 GbE (add-on PCIe card)
USB Ports | 4x USB 2.0
SAS Expansion Ports | 2x (compatible with RX1213sas)
Expansion Slots | 2x (the 10 GbE card occupies one)
VGA / Console | Reserved for maintenance
Full Specifications Link | Synology RS10613xs+ Hardware Specs
Synology is well regarded in the SMB space for the stability as well as the wealth of features offered on its units. The OS (DiskStation Manager - DSM) is very user-friendly. We have been following the evolution of DSM over the last couple of years. The RS10613xs+ is the first unit we are reviewing with DSM 4.x, and we can say with conviction that DSM only keeps getting better.
Our only real complaint about DSM has been the lack of seamless storage pools with the ability to use a single disk across multiple RAID volumes (of the type Windows Storage Spaces provides). This is useful in scenarios with, say, four-bay units, where the end user wants some data protected against a single disk failure and other data protected against the failure of two disks. It is not a problem with the RS10613xs+, which has plenty of bays to create two separate volumes in such a scenario. In any case, this situation is more common in the home consumer segment than in the enterprise segment the RS10613xs+ targets.
The front panel presents a grid three rows high and four columns wide. Ten of the slots are 3.5" drive bays, while the rightmost column houses a two-row-high LCM display panel with buttons to take care of administrative tasks. This panel can be pulled out to reveal the two caching SSD bays. On the rear side, we have redundant power supplies (integrated 1U PSUs of 400W each), a console and VGA port (not suggested for use by the end consumer), 4x USB 2.0 ports, 4x 1 GbE ports (all natively on the unit's motherboard), and two SAS-out expansion ports to connect up to eight RX1213sas expansion units. There is also space for a half-height PCIe card; our review unit was outfitted with a dual 10 GbE SFP+ card.
On the software side, not much has changed in the DSM 4.x UI compared to older versions, though there is definitely a more polished look and feel. For example, disks can now be dragged and dropped when configuring volumes. Minor improvements of this sort contribute to a better user experience all around. The setup process is a breeze, with the unit's configuration page available on the network even in diskless mode. As the gallery below shows, the unit carries a built-in copy of the OS that can be installed in case the unit or the setup computer is not connected to the Internet / Synology's servers. A Quick Start Wizard then prompts the user to create a volume to start using the unit.
An interesting aspect of the Storage Manager is the SSD cache for boosting read performance. Automatic generation of file access statistics on a given volume helps in deciding how much cache might benefit the system. Volumes are part of RAID groups, and all volumes in a given RAID group are at the same RAID level. In addition, the Storage Manager also provides for configuration of iSCSI LUNs / targets and management of the disk drives (S.M.A.R.T. and other disk-specific aspects).
RAID expansions / migrations as well as rebuilds are handled in the Storage Manager too. The other interesting aspect is the Network section. In the gallery above, one can see that it is possible to bond all six network ports together in 802.3ad dynamic link aggregation mode. SSH access is available (as in older DSM versions); a CLI guide for working on RAID groups / volumes in an SSH session would be a welcome complement to the excellent web UI.
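In the absence of such a guide, here is a minimal sketch of what a scripted status check could look like. This is an illustration rather than a documented Synology workflow: it assumes SSH is enabled on the unit, that the paramiko library is installed on the client machine, and that the host address and credentials (placeholders here) are valid. Since DSM is Linux-based, the md arrays backing its RAID groups are visible in /proc/mdstat.

# Minimal sketch: query a Synology unit's RAID state over SSH.
# Assumes SSH is enabled in DSM and paramiko is installed (pip install paramiko).
# Host, username, and password below are placeholders, not real credentials.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.168.1.100", username="admin", password="password")

# /proc/mdstat lists the Linux md arrays backing DSM's RAID groups.
_, stdout, _ = client.exec_command("cat /proc/mdstat")
print(stdout.read().decode())

client.close()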
In the rest of this review, we will describe our testbed setup and present results from our evaluation of single-client performance with CIFS and NFS shares as well as iSCSI LUNs. Encryption support is also evaluated for CIFS shares, and a section on performance with Linux clients is included. Multi-client performance is evaluated using IOMeter on CIFS shares. In the final section, we discuss power consumption, RAID rebuild durations, and other miscellaneous aspects.
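As a rough illustration of what single-client sequential testing amounts to (this is not our actual methodology; the real numbers come from IOMeter and the tools described in the testbed section), a CIFS throughput measurement boils down to timing large reads against a file on a mounted share. The share path below is hypothetical.

import time

PATH = r"\\nas\share\testfile.bin"  # hypothetical file on a mounted CIFS share
CHUNK = 1 << 20  # read 1 MiB at a time

total = 0
start = time.time()
with open(PATH, "rb") as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        total += len(data)
elapsed = time.time() - start
print(f"Read {total / 1e9:.2f} GB at {total / elapsed / 1e6:.1f} MB/s")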
Comments
Gigaplex - Saturday, December 28, 2013
No, you recover from backup. RAID is to increase availability in the enterprise, it is not a substitute for a backup.

P_Dub_S - Thursday, December 26, 2013
Please read that 3rd link and tell me if RAID 5 makes any sense with today's drive sizes and costs.

Gunbuster - Thursday, December 26, 2013
Re: that 3rd link. Who calls it resilvering? Sounds like what a crusty old Unix sysadmin with no current hardware knowledge would call it.

P_Dub_S - Thursday, December 26, 2013
Whatever the name, it doesn't really matter; it's the numbers that count, and at today's multi-TB drive sizes RAID 5 makes zero sense.

Kheb - Saturday, December 28, 2013
No it doesn't. Not at all. First, you are taking into account only huge arrays used to store data and not to run applications (so basically only mechanical SATA). Second, you are completely ignoring costs (RAID 5 or RAID 6 vs RAID 10). Third, you are assuming the RAID 5 itself is not backed up or protected by some sort of software/hardware redundancy or tiering at lower levels (see SANs). So while I can agree that THEORETICALLY having RAID 10 everywhere would indeed be safer, the costs (HDDs + enclosures + controllers + backplanes) make this, and this time for real, the option that makes zero sense.
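The disagreement above ultimately comes down to unrecoverable read error (URE) rates during a rebuild, and the back-of-the-envelope math is easy to reproduce. The sketch below uses the commonly quoted specs of one URE per 10^14 bits read for consumer drives and one per 10^15 for enterprise drives, and assumes a RAID 5 rebuild must re-read every surviving drive in full; the drive count and capacity are illustrative, not tied to any setup mentioned in the thread.

def rebuild_ure_probability(drives, tb_per_drive, ure_per_bits):
    # A RAID 5 rebuild re-reads every surviving drive in full.
    bits_read = (drives - 1) * tb_per_drive * 1e12 * 8
    # Probability of hitting at least one URE across all bits read.
    return 1 - (1 - 1 / ure_per_bits) ** bits_read

for ure_per_bits, label in ((1e14, "consumer"), (1e15, "enterprise")):
    p = rebuild_ure_probability(drives=10, tb_per_drive=4, ure_per_bits=ure_per_bits)
    print(f"10x 4 TB, {label}-class drives: {p:.0%} chance of a URE during rebuild")

On these assumptions, a ten-drive array of 4 TB consumer drives is almost certain (roughly 94%) to hit a URE during a rebuild, while enterprise-class drives cut that to about 25%; both sides of the argument are visible in those two numbers.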
Ammaross - Thursday, December 26, 2013
"Resilvering" is the ZFS term for rebuilding data on a volume. It's very much a current term still, but it does give us an insight into the current bias of the author, who apparently favors ZFS for his storage until something he proposes as better is golden.hydromike - Thursday, December 26, 2013 - link
How many times have you had to rebuild a RAID 5 in your lifetime? I have, over 100 times, on over 10 major HARDWARE RAID vendors.

"And when you go to rebuild that huge RAID 5 array and another disk fails you're screwed."

Another drive failing during the rebuild is a very small possibility in the enterprise environment I was talking about, because of enterprise-grade drives vs consumer ones. That is why most take the RAID offline for a much faster rebuild. Besides, during the rebuild the RAID is still functional, just degraded.

Also, my point is that lots of us still have hardware that is 2-5 years old and still just working. The newest arrays that I have set up of late are 20 to 150 TB in size, and we went with FreeNAS with ZFS, which puts all others to shame. NetApp storage appliance rebuild times are quite fast: 6-12 hours for 40 TB LUNs. It all depends upon the redundancy that you need. Saying that RAID 5 needs to die is asinine. What if the data you are storing is all available in the public domain, but a local copy speeds up access? Rebuilding a degraded LUN is faster than retrieving all of the data from the public domain again. There are many use cases for each RAID level; just because one level does not fit YOUR uses does not mean it needs to die!
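The 6-12 hour figure quoted above is easy to sanity-check, since a sequential rebuild is bounded by how fast the surviving spindles can be re-read. In the sketch below, the aggregate throughput values are assumptions, not measured numbers.

def rebuild_hours(tb, aggregate_mb_per_s):
    # Time to re-read the data at a sustained sequential rate.
    return tb * 1e12 / (aggregate_mb_per_s * 1e6) / 3600

for rate in (1000, 2000):  # assumed aggregate MB/s across the array
    print(f"40 TB at {rate} MB/s: ~{rebuild_hours(40, rate):.1f} hours")

At an assumed 1-2 GB/s of aggregate throughput, a 40 TB LUN takes roughly 5.5 to 11 hours, consistent with the quoted range.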
P_Dub_S - Thursday, December 26, 2013
So if you were to buy this NAS for a new implementation, would you even consider throwing 10-12 disks in it and building a RAID 5 array? Just asking. Even in your own post you state that you use FreeNAS with ZFS for your new arrays. RAID 5 is the dodo here; let it go extinct.

Ammaross - Thursday, December 26, 2013
For all you know, he's running ZFS using raidz1 (essentially RAID 5). Also, if you say RAID 5 needs to die, one must assume you also think RAID 0 is beyond worthless, since it has NO redundancy. Obviously, you can (hopefully) cite the use cases for RAID 0. Your bias just prevents you from seeing the usefulness of RAID 5.

xxsk8er101xx - Friday, December 27, 2013
It does happen, though. I've had to rebuild 2 servers this year alone because of multiple drive failures. One server had 3 drives fail, but that's because of neglect; we engineers only have so much time, especially with the introduction of lean manufacturing.

RAID 5 + a global spare is usually a pretty safe bet if it's a critical app server. Otherwise RAID 5 is perfectly fine.