One of the things that makes a motherboard immediately stand out is the number of memory slots it has. On mainstream platforms, two or four memory slots is normal: dual-channel memory at either one DIMM per channel (1 DPC) or two modules per channel (2 DPC). If we saw a motherboard with three, it would be a little odd.

We’ve seen high-end desktop platforms with three (Nehalem) or four (almost everything else) memory channels, giving either three/four slots at 1 DPC or six/eight at 2 DPC. Moving into server hardware, Intel’s Xeons have six memory channels, while AMD’s EPYC has eight, so six/eight slots at 1 DPC and 12/16 at 2 DPC are the obvious configurations.

So what happens when a motherboard has a different number of memory slots than expected? That is the case with the new ASRock Rack ROME6U-2L2T. It supports AMD EPYC processors, both Naples and Rome, which have eight-channel memory. Even at one module per channel, we would expect a minimum of eight memory slots, but this motherboard has only six.

This means that the board, at best, runs in six-channel mode. With a central IO die, as in Rome, overall memory bandwidth is reduced by a quarter. For Naples this gets even weirder: there is no central IO die, so one of the four chiplets has no direct access to memory, similar to how Naples-based Threadripper systems had two chiplets unable to directly access memory.
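
To put a rough number on that quarter, here is a minimal back-of-the-envelope sketch (a hypothetical calculation assuming DDR4-3200 modules, which Rome supports, and counting theoretical peak bandwidth only):

```python
# Theoretical peak memory bandwidth: channels x transfer rate x 8 bytes per
# 64-bit channel. Assumes DDR4-3200; real-world throughput will be lower.
def peak_bandwidth_gbps(channels, mt_per_s=3200, bytes_per_transfer=8):
    """Peak bandwidth in GB/s for a given number of populated channels."""
    return channels * mt_per_s * bytes_per_transfer / 1000

eight_ch = peak_bandwidth_gbps(8)  # 204.8 GB/s with all eight channels
six_ch = peak_bandwidth_gbps(6)    # 153.6 GB/s with this board's six slots
print(f"{eight_ch:.1f} GB/s vs {six_ch:.1f} GB/s ({1 - six_ch / eight_ch:.0%} less)")
```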

But why in the world would a motherboard vendor do something like this? It seems counterintuitive, needlessly hamstringing the processor's performance, right?

There is a range of reasons why this can be a good idea. For a start, reducing the overall bandwidth (at least on a Rome CPU) only affects environments and software that are peak memory-bandwidth bound. It also reduces the cost of fully populating the system with memory, and it cuts power consumption enough that alternative cooling can perhaps be considered.

In the case of the ROME6U-2L2T, the other factor is size. ASRock is putting this big EPYC socket on a micro-ATX board, and in order to make the most of the CPU's 128 PCIe 4.0 lanes, it wanted a full x16/x16/x16/x16 layout on the PCIe side. The only way to give the PCIe slots enough space was to reduce the number of memory slots available to the CPU. It’s an interesting tradeoff, likely made at the request of one of ASRock Rack’s customers.

Another angle to consider is a customer with an upgrade path. If a customer is moving from a single-socket Xeon platform with six memory channels filled at 1 DPC, then upgrading to AMD with a motherboard that has only six memory slots means the memory can transfer over without any additional cost. This is an outside possibility, but one nonetheless.

As for the motherboard, it actually comes very well equipped. Those four PCIe slots are indeed all PCIe 4.0 x16, and the board has support for 14 SATA ports (12 via mini-SAS), three U.2 ports apparently capable of PCIe 4.0 x8 support, dual 10 gigabit Ethernet ports (10GBase-T via Intel X710-AT2), and two regular gigabit Ethernet ports from Intel i210-AT controllers. Onboard BMC is provided by an ASPEED AST2500, which also has a D-sub 2D video output and a dedicated network interface.

There are also six four-pin fan headers, which is a lot for a board this size (although they’re all in the top right corner), as well as a TPM header, a COM header, a USB 3.0 header, a USB 2.0 header, and USB 3.2 Gen1 ports on the rear.

The ROME6U-2L2T isn’t the first motherboard with an abnormal memory configuration. Here’s a look back at a few that I remember.

 

The GIGABYTE EX58-UD3R was a Nehalem motherboard with a very odd 3+1 slot configuration for tri-channel Nehalem processors. This meant one channel could hold substantially more memory than the others, leading to a memory bandwidth imbalance. This was, incidentally, the first HEDT motherboard I ever owned.

ASRock has form in this game with the X99E-ITX, a high-end desktop platform board that should support quad-channel memory. However, because of the mini-ITX form factor and the narrow ILM socket used for the high-end desktop processors, ASRock only put two DDR4 memory slots on the board.

It also came bundled with its own CPU cooler, because narrow ILM coolers weren’t that common at the time. Why ASRock went for full-sized memory slots was unclear, because the later X299E-ITX/ac used the same concept but with four channels of SO-DIMMs. DDR4 SO-DIMMs were notoriously hard to source at the time, which limited the board’s appeal.

You can read our review of the X99E-ITX/ac here!

Finally, another ASRock play – the Xeon Scalable platform has six memory channels, and similar to the previous mini-ITX motherboard, ASRock Rack produced a motherboard with four SO-DIMM slots.

There is literally no space to put six slots on this board. Or so you would think: ASRock also released a six-module version, putting two of the memory slots on the rear of the mini-ITX motherboard.

 

As for the ROME6U-2L2T, pricing and an exact release date are unknown, as this is officially a product from the ASRock Rack side of the company. Interested parties are instructed to contact their local ASRock server suppliers.



27 Comments


  • DanNeely - Monday, June 8, 2020 - link

    Boards like this show one of the reasons why we're never going to get HEDT level connectivity in mainstream platforms: The jumbo sockets needed to cram that many IO pins in are too big to fit on anything smaller than full ATX without various weird to grotesque compromises.
  • romrunning - Monday, June 8, 2020 - link

    If the PCIe slot nearest the CPU was dropped, is it possible they could have added two DIMM slots for the full 8 channels supported by EPYC?
  • Kevin G - Monday, June 8, 2020 - link

    I'm not sure that PCIe slot is the first limiting factor (it'd still likely need to be removed or reduced down to, say, x4). Look at the mounting holes for the motherboard: there *might* barely be enough room to fit two more slots in between them. It looks like some VRM components would have to move too.
  • bernstein - Monday, June 8, 2020 - link

    just a quick look at asrockrack's rome8-2t shows that 8 ddr4-dimm-sockets are possible on epyc with a full pcie x16 slot 7 (the first if counting from the cpu). https://www.asrockrack.com/general/productdetail.a...
  • ipkh - Monday, June 8, 2020 - link

    HEDT by definition will never be mainstream. Any mainstream user doesn't need such high IO requirements, and HEDT motherboards are a mixed bag with regard to PCIe slot layout.
    Not to mention that AMD is just as stingy on PCIe bandwidth from the CPU as Intel. PCIe 4.0 doesn't mean much if you're still restricted to 16 lanes and you can't trade them for 32 PCIe 3.0 lanes.
  • Lord of the Bored - Monday, June 8, 2020 - link

    Why would you only have 16 lanes, or even 32? This is an EPYC board: 128 lanes at the socket (which I believe is twice what Intel's best offer is).
    Even if you halve that because of PCIe 4 (skeptical), that's still 64 lanes.
  • Santoval - Wednesday, June 10, 2020 - link

    There was no halving of PCIe lanes due to the switch to PCIe 4.0. Rome has 128 PCIe 4.0 lanes, and the Zen 2 based Threadripper has 64 PCIe 4.0 lanes: the same number as the previous generation, but with double the bandwidth per lane.
  • Santoval - Wednesday, June 10, 2020 - link

    The article is about an EPYC motherboard, which is a server motherboard with 128 PCIe 4.0 lanes. Threadripper also provides 64 PCIe 4.0 (or 3.0) lanes, so it is not "stingy" either. Ryzen could well be considered stingy, with 24 PCIe 4.0/3.0 lanes in total from the CPU, of which only 20 are usable: 4 intended for an M.2 SSD and 16 for a graphics card. What does Ryzen have to do with EPYC, though?
  • Samus - Monday, June 8, 2020 - link

    It isn't just the sockets. A lot of the legacy connections of modern PCs have got to go. I'm looking at you, 24+8 ATX power connectors!
  • DanNeely - Monday, June 8, 2020 - link

    ATX12VO (12 volt only) is taking aim at the 24-pin. It removes all the legacy power from the 24-pin connector (the mobo has to make 3.3 V for PCIe, 5 V for USB, and both for SATA), replacing it with a 10-pin connector with 50% more 12 V capacity than the ancient 24-pin, and removing all legacy voltages from the PSU. It's both less ambitious than it could have been (higher PSU voltages would mean fewer total wires needed) and somewhat sloppy in the short term (new power headers on the mobo mean it doesn't free up much, if any, PCB area), and it's not forward/back compatible with conventional ATX (vs a 12-pin connector that would've had a 4th ground, one each of 3.3 V and 5 V, and kept the SATA connectors on the PSU). This is because Intel just created a standardized version of what Dell, HP, Lenovo, etc. were already doing for their desktop systems, to simplify supply chains and allow smaller OEMs to join in on the savings.

    The issues related to SATA power being a clunky hack should go away over the next 3-8 (guess) years, as SATA transitions from being a checklist feature on almost all boards to being dropped from chipsets, with anyone wanting to build a storage server needing to install a SATA card (presumably a next-generation model designed for the 12VO era that also provides SATA power hookups).

    The whole thing is a bit of a mess in the short term; but the blame should go to the people who were asleep at the switch and didn't create a reduced 12/14-pin main ATX connector 10-15 years ago, after both Intel and AMD moved CPU power to the 12 V rail and the amount of 3.3/5 V available became far in excess of system needs.
