The Competition

We don't have a very large collection of enterprise SSDs, but we have a handful of other recent high-end datacenter drives to compare the PBlaze5 C916 against. Most of these drives were included in our recent roundup of enterprise SSDs. The PBlaze5 C900 is the immediate predecessor to the C916, and the D900 is the U.2 version. The Micron 9100 MAX is an older drive that uses the same Microsemi controller but planar MLC NAND, so it represents the high-end from two generations back.

From Intel we have the top-of-the-line Optane DC P4800X and the TLC-based P4510 8TB. The P4610 would be a closer match for the C916, as both are rated for 3 DWPD, while the P4510 is better suited for comparison against the PBlaze5 C910 in the 1 DWPD segment. However, the P4510 is still based on the same 64L IMFT TLC that the PBlaze5 C916 uses, so aside from steady-state write speeds, the performance differences should mostly come down to the controllers.

The two Samsung drives are both based around the 8-channel Phoenix controller that also powers Samsung's consumer NVMe product line. The 983 DCT occupies a decidedly lower market segment than the Memblaze drives, but the 983 ZET is a high-end product featuring Samsung's specialized low-latency Z-NAND flash memory. Samsung's PM1725b, with its PCIe x8 interface and 3 DWPD rating, is their closest current competitor to the PBlaze5 C916. However, there is no retail version of the PM1725b, so samples are harder to come by.

Test System

Intel provided our enterprise SSD test system, one of their 2U servers based on the Xeon Scalable platform (codenamed Purley). The system includes two Xeon Gold 6154 18-core Skylake-SP processors, and 16GB DDR4-2666 DIMMs on all twelve memory channels for a total of 192GB of DRAM. Each of the two processors provides 48 PCI Express lanes plus a four-lane DMI link. The allocation of these lanes is complicated. Most of the PCIe lanes from CPU1 are dedicated to specific purposes: the x4 DMI plus another x16 link go to the C624 chipset, and there's an x8 link to a connector for an optional SAS controller. This leaves CPU2 providing the PCIe lanes for most of the expansion slots, including most of the U.2 ports.

Enterprise SSD Test System
System Model: Intel Server R2208WFTZS
CPU: 2x Intel Xeon Gold 6154 (18C, 3.0GHz)
Motherboard: Intel S2600WFT
Chipset: Intel C624
Memory: 192GB total, Micron DDR4-2666 16GB modules
Software: Linux kernel 4.19.8, fio version 3.12
Thanks to StarTech for providing a RK2236BKF 22U rack cabinet.

The enterprise SSD test system and most of our consumer SSD test equipment are housed in a StarTech RK2236BKF 22U fully-enclosed rack cabinet. During testing for this review, the front door on this rack was generally left open to allow better airflow, since the rack doesn't include exhaust fans of its own. The rack is currently installed in an unheated attic with ambient temperatures that provide a reasonable approximation of a well-cooled datacenter.

The test system runs a Linux kernel from the most recent long-term support branch, which brings in about a year's worth of Meltdown/Spectre mitigation work, though strategies for dealing with Spectre-style attacks are still evolving. The benchmarks in this review are all synthetic, with most of the IO workloads generated using fio. Server workloads vary too widely for a comprehensive suite of application-level benchmarks to be practical, so we instead analyze performance across a broad variety of IO patterns.

Enterprise SSDs are specified for steady-state performance and don't include features like SLC caching, so the duration of a benchmark run has little effect on the score as long as the drive has been thoroughly preconditioned. Except where otherwise specified, drives were prepared for tests that include random writes with at least two full drive writes of 4kB random writes; for all other tests, they were prepared with at least two full sequential write passes.
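
As a rough sketch of that preconditioning and testing flow, the example below drives fio from Python. The device path, queue depth, and runtime are assumptions chosen for illustration; they are not the exact job definitions behind our test suite.

    import subprocess

    # Hypothetical device path; point this at the drive under test.
    DEVICE = "/dev/nvme0n1"

    def run_fio(args):
        """Invoke fio with the given options and fail loudly on error."""
        subprocess.run(["fio"] + args, check=True)

    # Preconditioning: roughly two full drive writes' worth of 4kB random writes.
    run_fio([
        "--name=precondition-randwrite",
        f"--filename={DEVICE}",
        "--direct=1",            # bypass the page cache
        "--ioengine=libaio",
        "--rw=randwrite",
        "--bs=4k",
        "--iodepth=32",
        "--loops=2",             # repeat the full-capacity pass twice
    ])

    # Example steady-state measurement: 4kB random reads at queue depth 1.
    run_fio([
        "--name=qd1-randread",
        f"--filename={DEVICE}",
        "--direct=1",
        "--ioengine=libaio",
        "--rw=randread",
        "--bs=4k",
        "--iodepth=1",
        "--time_based",
        "--runtime=60",          # seconds
    ])

Keeping the measurement workload separate from, and after, the preconditioning passes is what makes the results reflect steady-state behavior rather than fresh-out-of-box performance.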

Our drive power measurements are conducted with a Quarch XLC Programmable Power Module. This device supplies power to the drive while logging current and voltage simultaneously. With a 250kHz sample rate and precision down to a few mV and mA, it provides a very high-resolution view of drive power consumption. For most of our automated benchmarks we only care about averages over spans of a minute or more, so we configure the power module to average its measurements down to about eight samples per second. Internally it still samples at 4µs intervals, so short-term power spikes are reflected in those averages rather than missed.
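
To picture what that averaging does, here is a minimal, illustrative Python sketch (this is not Quarch's software interface, and the sample data are synthetic): it collapses 250,000 voltage/current samples per second into eight averaged power readings per second, so a brief spike still shows up in its window's average rather than being dropped.

    import numpy as np

    SAMPLE_RATE_HZ = 250_000        # internal sampling rate (one sample every 4 µs)
    OUTPUT_RATE_HZ = 8              # averaged records delivered per second
    WINDOW = SAMPLE_RATE_HZ // OUTPUT_RATE_HZ  # samples averaged per output record

    def downsample_power(voltage, current):
        """Average per-sample power into WINDOW-sized blocks (~8 values/second)."""
        power = voltage * current                   # instantaneous power, watts
        n = (len(power) // WINDOW) * WINDOW         # drop any trailing partial window
        blocks = power[:n].reshape(-1, WINDOW)
        return blocks.mean(axis=1)                  # one averaged reading per window

    # Example: one second of synthetic 12 V samples with a brief current spike.
    volts = np.full(SAMPLE_RATE_HZ, 12.0)
    amps = np.full(SAMPLE_RATE_HZ, 0.5)
    amps[1000:2000] = 2.0                           # a 4 ms spike
    print(downsample_power(volts, amps))            # the spike raises the first window's average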

Comments

  • Samus - Wednesday, March 13, 2019 - link

    That. Capacitor.
  • Billy Tallis - Wednesday, March 13, 2019 - link

    Yes, sometimes "power loss protection capacitor" doesn't need to be plural. 1800µF 35V Nichicon, BTW, since my photos didn't catch the label.
  • willis936 - Wednesday, March 13, 2019 - link

    That’s 3.78W for one minute if they’re running at the maximum voltage rating (which they shouldn’t and probably don’t), if anyone’s curious.
  • DominionSeraph - Wednesday, March 13, 2019 - link

    It's cute, isn't it?

    https://www.amazon.com/BOSS-Audio-CPBK2-2-Capacito...
  • takeshi7 - Wednesday, March 13, 2019 - link

    I wish companies made consumer PCIe x8 SSDs. It would be good since many motherboards can split the PCIe lanes x8/x8 and SLI is falling out of favor anyways.
  • surt - Wednesday, March 13, 2019 - link

    I bet 90% of motherboard buyers would prefer 2 x16 slots vs any other configuration so they can run 1 GPU and 1 very fast SSD. I really don't understand why the market hasn't moved in this direction.
  • MFinn3333 - Wednesday, March 13, 2019 - link

    Because SSD's have a hard time saturating 4x PCIe slots, 16x would just take up space for no real purpose.
  • Midwayman - Wednesday, March 13, 2019 - link

    Maybe, but it sucks that your GPU gets moved to 8x. 16/4 would be an easier split to live with.
  • bananaforscale - Thursday, March 14, 2019 - link

    Not really, GPUs are typically bottlenecked by local memory (VRAM), not PCIe.
  • Opencg - Wednesday, March 13, 2019 - link

    performance would not be very noticeable. and even in the few cases it would be, it would require more expensive cpus and mobos thus mitigating the attractiveness to very few consumers. and fewer consumers means even higher prices. we will get higher throughput but its much more likely with pci 4.0/5.0 than 2 16x
