HighPoint Updates NVMe RAID Cards For PCIe 4.0, Up To 8 M.2 SSDs
by Billy Tallis on November 12, 2020 5:00 PM EST

HighPoint Technologies has updated their NVMe RAID solutions with PCIe 4.0 support and adapter cards supporting up to eight NVMe drives. The new HighPoint SSD7500 series adapter cards are the PCIe 4.0 successors to the SSD7100 and SSD7200 series products. These cards are primarily aimed at the workstation market, as the server market has largely moved on from traditional RAID arrays, especially when using NVMe SSDs, for which traditional hardware RAID controllers do not exist. HighPoint's PCIe gen4 lineup currently consists of cards with four or eight M.2 slots, and one with eight SFF-8654 ports for connecting to U.2 SSDs. They also recently added an 8x M.2 card to their PCIe gen3 family, with the Mac Pro specifically in mind as a popular workstation platform that won't be getting PCIe gen4 support any time soon.
HighPoint's NVMe RAID is implemented as software RAID, bundled with adapter cards featuring Broadcom/PLX PCIe switches. HighPoint provides RAID drivers and management utilities for Windows, macOS, and Linux. Competing software NVMe RAID solutions like Intel RST or VROC achieve boot support by bundling a UEFI driver with the rest of the motherboard's firmware; HighPoint's recent 4-drive cards instead include their UEFI driver on an Option ROM to provide boot support for Windows and Linux systems, and all of their cards allow booting from an SSD that is not part of a RAID array. HighPoint's NVMe RAID supports RAID 0/1/10 modes, but does not implement any parity RAID options.
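HighPoint's driver itself is proprietary, but the RAID 0 mode it offers is ordinary striping. As a minimal, illustrative sketch (the 128 KiB stripe size is an assumption for illustration, not HighPoint's documented default), the core block-mapping logic looks something like this in Python:

```python
# Minimal sketch of RAID-0 striping: logical blocks are spread
# round-robin across member drives, so sequential transfers hit
# every drive in parallel. Stripe size here is a hypothetical 128 KiB.
STRIPE_SIZE = 128 * 1024

def locate(logical_offset: int, num_drives: int):
    """Map a logical byte offset to (drive index, byte offset on that drive)."""
    stripe, within = divmod(logical_offset, STRIPE_SIZE)
    drive = stripe % num_drives
    drive_stripe = stripe // num_drives
    return drive, drive_stripe * STRIPE_SIZE + within

# A 1 MiB sequential read on a 4-drive array touches all four members
# twice -- which is why striped throughput scales with drive count.
for offset in range(0, 1024 * 1024, STRIPE_SIZE):
    print(f"offset {offset:>8}: (drive, drive_offset) = {locate(offset, 4)}")
```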
HighPoint has also improved the cooling on their RAID cards. Putting several high-performance M.2 SSDs and a power-hungry PCIe switch on one card generally requires active cooling, and HighPoint's early NVMe RAID cards could be pretty noisy. Their newer heatsink design lets the cards benefit from airflow provided by case fans instead of relying solely on the card's own fan (two fans, for the 8x M.2 cards), and the fans they are now using are a bit larger and quieter.
In the PCIe 2.0 era, PLX PCIe switches were common on high-end consumer motherboards to provide multi-GPU connectivity. In the PCIe 3.0 era, the switches were priced for the server market and almost completely disappeared from consumer/enthusiast products. In the PCIe 4.0 era, it looks like prices have gone up again. Even though these cards are the best way to attach several M.2 PCIe SSDs to mainstream consumer platforms that don't support the PCIe port bifurcation required by passive quad M.2 riser boards, the pricing makes it very unlikely that they'll see much use in systems less high-end than a Threadripper or Xeon workstation. That said, HighPoint has tested on the AMD X570 platform and achieved 20 GB/s throughput using Phison E16 SSDs, and almost 28 GB/s on an AMD EPYC platform (out of a theoretical limit of 31.5 GB/s). These numbers should improve a bit as faster, lower-latency PCIe 4.0 SSDs become available.
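For context, the 31.5 GB/s figure is just the raw PCIe 4.0 x16 link ceiling. A quick back-of-the-envelope calculation (using 128b/130b encoding efficiency and ignoring packet/protocol overhead, which shaves off a few more percent in practice):

```python
# Theoretical PCIe data-rate ceilings: per-lane transfer rate (GT/s)
# times 128b/130b encoding efficiency, times lane count, divided by
# 8 bits per byte. Protocol overhead is ignored here.
def pcie_gb_per_s(gt_per_s: float, lanes: int) -> float:
    return gt_per_s * (128 / 130) * lanes / 8

print(f"PCIe 4.0 x16: {pcie_gb_per_s(16, 16):.1f} GB/s")  # ~31.5
print(f"PCIe 3.0 x16: {pcie_gb_per_s( 8, 16):.1f} GB/s")  # ~15.8
print(f"PCIe 4.0 x8:  {pcie_gb_per_s(16,  8):.1f} GB/s")  # ~15.8

# HighPoint's 28 GB/s EPYC result lands at roughly 89% of the ceiling.
print(f"EPYC result: {28 / pcie_gb_per_s(16, 16):.0%} of theoretical")
```

Note in passing that a PCIe 4.0 x8 link matches a PCIe 3.0 x16 link, which is why gen4 platforms can split lanes between a GPU and storage with little practical loss.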
HighPoint NVMe RAID Adapters

| Model | SSD7505 | SSD7540 | SSD7580 | SSD7140 |
|---|---|---|---|---|
| Host Interface | PCIe 4.0 x16 | PCIe 4.0 x16 | PCIe 4.0 x16 | PCIe 3.0 x16 |
| Downstream Ports | 4x M.2 | 8x M.2 | 8x U.2 | 8x M.2 |
| MSRP | $599 | $999 | $999 | $699 |
Now that consumer M.2 NVMe SSDs are available in 4TB and 8TB capacities, these RAID products can accommodate up to 64TB of storage at a much lower price per TB than using enterprise SSDs, and without requiring a system with U.2 drive bays. For tasks like audio and video editing workstations, that's an impressive amount of local storage capacity and throughput. The lower write endurance of consumer SSDs (even QLC drives) is generally less of a concern for workstations than for servers that are busy around the clock, and for many use cases having a capacity of tens of TB means the array as a whole has plenty of write endurance even if the individual drives have low DWPD ratings. Using consumer SSDs also means that peak performance is higher than for many enterprise SSDs, and a large RAID-0 array of consumer SSDs will have a total SLC cache size in the TB range.
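To put the endurance argument in numbers, here is a worked example with hypothetical figures (eight 8 TB drives at an assumed 0.3 DWPD rating over a 5-year warranty; actual ratings vary by model):

```python
# Hypothetical endurance math for a striped array of consumer SSDs.
# All figures are illustrative, not taken from any drive's datasheet.
drives = 8
capacity_tb = 8        # per drive
dwpd = 0.3             # assumed drive-writes-per-day rating
warranty_years = 5

array_tb = drives * capacity_tb              # 64 TB of raw capacity
daily_budget_tb = array_tb * dwpd            # striping spreads writes evenly
lifetime_tbw = daily_budget_tb * 365 * warranty_years

print(f"Array capacity:     {array_tb} TB")
print(f"Daily write budget: {daily_budget_tb:.1f} TB/day")
print(f"Lifetime writes:    {lifetime_tbw:,.0f} TB over {warranty_years} years")
# -> 64 TB, 19.2 TB/day, ~35,040 TB written in total: far more than
#    most workstation workloads will ever generate.
```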
The SSD7140 (8x M.2, PCIe gen3) and the SSD7505 (4x M.2, PCIe gen4) have already hit the market and the SSD7540 (8x M.2, PCIe gen4) is shipping this month. The SSD7580 (8x U.2, PCIe gen4) is planned to be available next month.
31 Comments
msroadkill612 - Sunday, November 15, 2020 - link
There is a big difference between cooling an nvme lying flat on the mobo m.2 port and those on a vertical card. I would be confident I could improvise something.

msroadkill612 - Sunday, November 15, 2020 - link
Ah yes but....The Asus quad nvme adapter card also works in an 8 lane slot hosting 2x nvme.
Many former TR workstation users are finding they can just squeeze into a far cheaper 12 or 16 core AM4 - at times even prefer it.
W/ PCIE 4 & a suitable bifurcated mobo, an 8 lane PCIE 4 gpu retains the bandwidth of a 16 lane pcie 3 GPU, yet frees 8 lanes for use by the 2nd 8 lane slot.
This could be pushed to 3x nvme raid if including the native am4 nvme by ~booting on a sata ssd.
A triple nvme raid, 64GB & 16 cores is quite a beast for a relative pittance.
quorm - Monday, November 16, 2020 - link
Heck, the asrock trx40 taichi mobo comes with an expansion card that handles four pcie4 m.2 drives.

TootsieNootan - Friday, November 13, 2020 - link
I have a Highpoint 7505 with 4 Samsung 980 Pros in it. Read is 24K, write is 18.8K. It's my boot drive, so my system and all my apps load super fast when I'm not waiting for the CPU to catch up.

TootsieNootan - Friday, November 13, 2020 - link
Forgot to mention: I did run into one problem with the Highpoint 7505 if you are using it as a boot drive, which I assume will affect all models. VMWare complains about EFI drives and says in order to run a virtual machine I have to give it 96 gigs of ram.

TomWomack - Saturday, November 14, 2020 - link
Does anyone know whether these can be used as JBOD (insert eight drives, get /dev/nvme0n1 through /dev/nvme0n8)? It feels like there ought to be scope for a Chinese manufacturer to provide much, much cheaper PCIe v4 switch chips; yes, the interface electronics are hard to design, but they're exactly the sort of thing that the Chinese government seems willing to subsidise in order to get indigenous provision.

Billy Tallis - Monday, November 16, 2020 - link
Yes, you can use these in JBOD mode. It's software RAID, so without their RAID drivers you just have a PCIe switch routing lanes to eight NVMe drives. (Though it would be /dev/nvme0n1 through /dev/nvme7n1, since each drive is a different NVMe controller rather than a different namespace on the same controller.)

ASMedia does make PCIe gen3 switches, but only up to 24 lanes (so an x8 uplink and x4+x4+x4+x4 downstream ports). Microchip/Microsemi and Broadcom/PLX are the only two options for gen4 switches, or large gen3 switches.
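To make that naming convention concrete, here is a small illustrative sketch that lists NVMe controllers and their namespaces through Linux sysfs (paths as found on typical modern kernels; treat it as a sketch rather than a supported tool):

```python
# List NVMe controllers and their namespaces via sysfs. Eight drives
# behind a PCIe switch show up as eight controllers (nvme0..nvme7),
# each normally exposing a single namespace (nvme0n1, nvme1n1, ...).
import glob
import os

for ctrl_path in sorted(glob.glob("/sys/class/nvme/nvme*")):
    ctrl = os.path.basename(ctrl_path)            # e.g. "nvme0"
    namespaces = sorted(
        os.path.basename(ns)
        for ns in glob.glob(os.path.join(ctrl_path, f"{ctrl}n*"))
    )
    print(ctrl, "->", ", ".join(namespaces) or "(no namespaces)")
```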
carcakes - Monday, November 16, 2020 - link
Wonderful! 8X! They ditched Store MI for something new... haven't seen any new tool since that time.

carcakes - Monday, November 16, 2020 - link
Snaps: https://highpoint-tech.com/USA_new/series-ssd7500-...

abufrejoval - Wednesday, November 18, 2020 - link
The move by Avago (now Broadcom) to grab all the PCIe switch IP and raise prices through the roof has evidently paid off for them. I'm not even sure they sell all that many of them in servers, as opposed to storage appliances.
But since the IOD in Zen is pretty much a PCIe switch yet comes at a far lower price, I wonder if one couldn't simply use X570 chips instead.
AMD and Intel have the IP, they could both certainly crash the Broadcom party, right?