HighPoint Updates NVMe RAID Cards For PCIe 4.0, Up To 8 M.2 SSDs
by Billy Tallis on November 12, 2020 5:00 PM EST

HighPoint Technologies has updated their NVMe RAID solutions with PCIe 4.0 support and adapter cards supporting up to eight NVMe drives. The new HighPoint SSD7500 series adapter cards are the PCIe 4.0 successors to the SSD7100 and SSD7200 series products. These cards are primarily aimed at the workstation market, as the server market has largely moved on from traditional RAID arrays, especially for NVMe SSDs, for which traditional hardware RAID controllers do not exist. HighPoint's PCIe gen4 lineup currently consists of cards with four or eight M.2 slots, and one with eight SFF-8654 ports for connecting to U.2 SSDs. They also recently added an 8x M.2 card to their PCIe gen3 family, with the Mac Pro specifically in mind as a popular workstation platform that won't be getting PCIe gen4 support anytime soon.
HighPoint's NVMe RAID is implemented as software RAID bundled with adapter cards featuring Broadcom/PLX PCIe switches. HighPoint provides RAID drivers and management utilities for Windows, macOS and Linux. Competing software NVMe RAID solutions like Intel RST or VROC achieve boot support by bundling a UEFI driver in with the rest of the motherboard's firmware. HighPoint's recent 4-drive cards instead include their UEFI driver on an Option ROM to provide boot support for Windows and Linux systems, and all of their cards allow booting from an SSD that is not part of a RAID array. HighPoint's NVMe RAID supports RAID 0/1/10 modes, but does not implement any parity RAID options.
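Since the RAID logic runs in the host's driver rather than on dedicated hardware, the striping itself is just address arithmetic. Here is a minimal Python sketch of how RAID 0 maps a logical block to a member drive; this is a toy illustration of the general technique, not HighPoint's actual driver code:

```python
def raid0_map(lba, num_drives, stripe_blocks):
    """Map a logical block address (LBA) onto a RAID 0 array.

    Returns (drive index, block address on that drive). Toy model:
    real drivers work on byte ranges and handle partial-stripe I/O,
    but the underlying address arithmetic looks like this.
    """
    stripe = lba // stripe_blocks        # which stripe unit the LBA falls in
    offset = lba % stripe_blocks         # position within that stripe unit
    drive = stripe % num_drives          # stripe units rotate across drives
    block = (stripe // num_drives) * stripe_blocks + offset
    return drive, block

# Example: 4 drives, 128-block stripe units (64 KiB with 512-byte sectors)
print(raid0_map(lba=1000, num_drives=4, stripe_blocks=128))  # -> (3, 232)
```

RAID 1 and RAID 10 follow from the same arithmetic: mirroring duplicates each write to a second drive, and RAID 10 applies the striping above across mirrored pairs.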
HighPoint has also improved the cooling on their RAID cards. Putting several high-performance M.2 SSDs and a power-hungry PCIe switch on one card generally requires active cooling, and HighPoint's early NVMe RAID cards could be pretty noisy. Their newer heatsink design lets the cards benefit from airflow provided by case fans instead of relying solely on the card's own fan (or two fans, on the 8x M.2 cards), and the fans they are now using are somewhat larger and quieter.
In the PCIe 2.0 era, PLX PCIe switches were common on high-end consumer motherboards to provide multi-GPU connectivity. In the PCIe 3.0 era, the switches were priced for the server market and almost completely disappeared from consumer/enthusiast products. In the PCIe 4.0 era, it looks like prices have gone up again. Even though these cards are the best way to attach many M.2 PCIe SSDs to mainstream consumer platforms that don't support the PCIe port bifurcation required by passive quad M.2 riser boards, the pricing makes it very unlikely that they'll see much use in systems less high-end than a Threadripper or Xeon workstation. However, HighPoint has tested on the AMD X570 platform and achieved 20 GB/s throughput using Phison E16 SSDs, and almost 28 GB/s on an AMD EPYC platform (out of a theoretical limit of 31.5 GB/s). These numbers should improve a bit as faster, lower-latency PCIe 4.0 SSDs become available.
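That 31.5 GB/s ceiling follows directly from the PCIe 4.0 link parameters: 16 GT/s per lane across 16 lanes, less 128b/130b encoding overhead. A quick sanity check in Python:

```python
# PCIe 4.0 x16 theoretical bandwidth: 16 GT/s per lane, 128b/130b encoding.
lanes = 16
transfers_per_sec = 16e9        # 16 GT/s, one bit per transfer per lane
encoding = 128 / 130            # line-coding efficiency

bytes_per_sec = transfers_per_sec * lanes * encoding / 8
print(f"{bytes_per_sec / 1e9:.1f} GB/s")     # -> 31.5 GB/s

# HighPoint's ~28 GB/s result on EPYC is roughly 89% of that ceiling;
# packet (TLP) overhead accounts for much of the remaining gap.
print(f"{28e9 / bytes_per_sec:.0%}")         # -> 89%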
HighPoint NVMe RAID Adapters

| Model            | SSD7505      | SSD7540      | SSD7580      | SSD7140      |
| Host Interface   | PCIe 4.0 x16 | PCIe 4.0 x16 | PCIe 4.0 x16 | PCIe 3.0 x16 |
| Downstream Ports | 4x M.2       | 8x M.2       | 8x U.2       | 8x M.2       |
| MSRP             | $599         | $999         | $999         | $699         |
Now that consumer M.2 NVMe SSDs are available in 4TB and 8TB capacities, these RAID products can accommodate up to 64TB of storage at a much lower price per TB than using enterprise SSDs, and without requiring a system with U.2 drive bays. For tasks like audio and video editing workstations, that's an impressive amount of local storage capacity and throughput. The lower write endurance of consumer SSDs (even QLC drives) is generally less of a concern for workstations than for servers that are busy around the clock, and for many use cases having a capacity of tens of TB means the array as a whole has plenty of write endurance even if the individual drives have low DWPD ratings. Using consumer SSDs also means that peak performance is higher than for many enterprise SSDs, and a large RAID-0 array of consumer SSDs will have a total SLC cache size in the TB range.
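To put that endurance argument in concrete terms, here is a rough sketch of the arithmetic. The 0.3 DWPD figure is an assumption for illustration, typical of consumer drives but not taken from any specific product's spec sheet:

```python
# Rough array-endurance arithmetic for a RAID 0 of consumer SSDs.
# 0.3 DWPD is an assumed rating for illustration, not a specific product's spec.
drives = 8
capacity_tb = 8                  # per-drive capacity in TB
dwpd = 0.3                       # rated drive writes per day
warranty_years = 5

array_tb = drives * capacity_tb               # 64 TB total
daily_tb = array_tb * dwpd                    # writes the array tolerates per day
lifetime_pb = daily_tb * 365 * warranty_years / 1000

print(f"{array_tb} TB array tolerates {daily_tb:.1f} TB of writes per day")
print(f"~{lifetime_pb:.1f} PB written over {warranty_years} years")
# -> 64 TB array tolerates 19.2 TB of writes per day
# -> ~35.0 PB written over 5 years
```

Even at a modest per-drive rating, sheer capacity means the array can absorb tens of petabytes of writes, which is far beyond what most workstation workloads generate.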
The SSD7140 (8x M.2, PCIe gen3) and the SSD7505 (4x M.2, PCIe gen4) have already hit the market, and the SSD7540 (8x M.2, PCIe gen4) is shipping this month. The SSD7580 (8x U.2, PCIe gen4) is planned to be available next month.
Comments
rpg1966 - Thursday, November 12, 2020
Extraordinary price for such a simple board. Yes yes, I know.

Billy Tallis - Thursday, November 12, 2020
Simple board, crazy expensive switch chip, and bundled software that really ought to be standard OS functionality.

lightningz71 - Friday, November 13, 2020
Excluding boot capabilities, I know for a fact that Windows 10 and Linux have internal mechanisms for building RAID arrays of many different types. MS Storage Spaces, whose full features are accessible from PowerShell in Windows 10, can do many things very well. Linux has ZFS and several other tool sets to do the same things.

Gigaplex - Saturday, November 14, 2020
The Windows RAID support is junk and not worth using in mission-critical systems. And the performance of Storage Spaces is terrible; there's no way you'd put hardware of this calibre under it.

pinoyians - Monday, November 16, 2020
Totally agree. Nuts to be implementing it in mission-critical settings.

charlesg - Monday, November 16, 2020
Agreed on Storage Spaces. I used it for ~40TB of data (parity configuration) that worked pretty well for a few years. Then the May "spring update" caused corruption to parity storage spaces. Their solution? Make it read-only. I think it was finally fixed in August, which is WAY too long for something like that.

Tomatotech - Friday, November 13, 2020
At around $100+/TB, 32TB is $3,200 and 64TB is $6,400. The price of the card is around 15%-20% of that, which is affordable. In that kind of system, running at 20-28 GB/s, the card has to be ultra-reliable with fast support for any bugs. That's what costs.
This might be the first time I've seen a single consumer card (including GPUs!) coming close to maxing out the PCIe 4.0 interface. Just in time for PCIe 5.0 in 2021 or 2022.
rpg1966 - Friday, November 13, 2020
Yes, but apart from the switch chip, the board must cost pennies to make. I understand how pricing works, but it still seems weird.

Spunjji - Monday, November 16, 2020
Only if you exclude the costs of design, prototyping, readying for production, writing software, testing and validation, packaging, and marketing. They probably won't be selling hundreds of thousands of these, so those are going to be a bigger proportion of the cost than for, say, a mid-range GPU.
alpha754293 - Thursday, November 12, 2020
So... it's a race to see how fast you can burn through the write endurance limit of the M.2 NVMe SSDs?

(U.2 is a bit better, but it's also at least DOUBLE the cost, if not more. Even with a U.2 NVMe SSD that has a write endurance of 10 DWPD, you can still burn through that fairly quickly if you're constantly working with multi-GB video files.)
(Depending on the resolution, colour depth, frame rate, and duration, you can get a pretty good estimate of how many video streams the drive can handle within its write endurance limit; otherwise, you're looking at "premature" drive failures.)
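For what it's worth, the estimate described in that comment is straightforward to sketch. Every parameter below (uncompressed 4K RGB video and a 1800 TBW endurance rating) is an assumption for illustration, not a figure from any specific drive:

```python
# Worst-case (uncompressed) video write-rate estimate. All parameters here
# are assumptions, including the 1800 TBW endurance rating.
width, height = 3840, 2160       # 4K UHD
bytes_per_pixel = 3              # 8-bit RGB; 10-bit or RAW formats write more
fps = 30
tbw = 1800                       # assumed drive endurance in TB written

gb_per_hour = width * height * bytes_per_pixel * fps * 3600 / 1e9
hours = tbw * 1000 / gb_per_hour
print(f"{gb_per_hour:,.0f} GB/hour -> ~{hours:,.0f} hours of recording")
# -> 2,687 GB/hour -> ~670 hours before the rated endurance is consumed
```

Compressed intermediate codecs write far less per hour, so for most editing workflows the rated endurance stretches accordingly.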