Micron Announces 32GB DDR4 NVDIMM-N Modules
by Billy Tallis on November 13, 2017 9:00 AM EST

Micron is announcing today their next generation of NVDIMM-N modules, combining DDR4 DRAM with NAND flash memory to support persistent memory usage models. The new 32GB modules double the capacity of Micron's previous NVDIMMs and boost the speed rating to DDR4-2933 CL21, faster than what current server platforms support.
Micron is not new to the Non-Volatile DIMM market: their first DDR3 NVDIMMs predated JEDEC standardization. The new 32GB modules were preceded by 8GB and 16GB DDR4 NVDIMMs. Micron's NVDIMMs are type N, meaning they function as ordinary ECC DRAM DIMMs but include NAND flash to back up data to in the event of a power loss. This is in contrast to the NVDIMM-F type, which offers pure flash storage. During normal system operation, Micron's NVDIMMs use only the DRAM. When the system experiences a power failure or signals that one is imminent, the module's onboard FPGA-based controller takes over and saves the contents of the DRAM to the module's 64GB of SLC NAND flash. During a power failure, the module can be powered either through a cable to an external AGIGA PowerGEM capacitor module, or by battery backup supplied through the DIMM slot's 12V pins.
Micron says the most common use cases for their NVDIMMs are high-performance journaling and log storage for databases and filesystems. In these applications, a 2S server will typically be equipped with a total of about 64GB of NVDIMMs, so the new Micron 32GB modules allow these systems to use just a single NVDIMM per CPU, leaving more slots free for traditional RDIMMs. Both operating systems and applications need special support for the persistent memory provided by NVDIMMs: the OS to handle restoring saved state after a power failure, and applications to manage which portions of their memory should be allocated from the persistent portion of the overall memory pool. Applications can access the NVDIMM's memory either through block storage APIs or through direct memory mapping.
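To illustrate the direct mapping route, here is a minimal sketch of a C program journaling through persistent memory on Linux, assuming the NVDIMM is exposed through a DAX-capable filesystem mounted at /mnt/pmem (the mount point, file name, and sizes are hypothetical):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define LOG_SIZE (4 * 1024 * 1024)   /* 4 MiB journal region */

int main(void)
{
    /* Hypothetical file on a DAX-mounted filesystem backed by the NVDIMM */
    int fd = open("/mnt/pmem/journal.log", O_RDWR | O_CREAT, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, LOG_SIZE) != 0) { perror("ftruncate"); return 1; }

    /* With DAX, loads and stores through this mapping bypass the page
       cache and go directly to the NVDIMM's DRAM, which the module
       itself preserves across a power failure. */
    char *log = mmap(NULL, LOG_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (log == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(log, "txn 42: committed");   /* an ordinary store, no syscall */

    /* Flush CPU caches so the write has actually reached the DIMM before
       the transaction is considered durable; libraries like PMDK's
       libpmem offer finer-grained cache-line flushes. */
    if (msync(log, LOG_SIZE, MS_SYNC) != 0) { perror("msync"); return 1; }

    munmap(log, LOG_SIZE);
    close(fd);
    return 0;
}

The block storage route is simpler still: the OS presents the NVDIMM as a conventional (if small and very fast) disk, so unmodified applications can use it without code changes.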
Micron is currently sampling the new 32GB NVDIMMs but did not state when they will be available in volume.
Conspicuously absent from Micron's announcement today is any mention of the third kind of memory they make: 3D XPoint non-volatile memory. Micron will eventually be putting 3D XPoint memory onto DIMMs and into SSDs under their QuantX brand, but so far they have been lagging far behind Intel in announcing and shipping specific products. NVDIMMs based on 3D XPoint memory may not match the performance of DRAM modules or these NVDIMM-N modules, but they will offer higher storage density at a much lower cost and without the hassle of external batteries or capacitor banks. Until those are ready, Micron is smart to nurture the NVDIMM ecosystem with their DRAM+flash solutions.
Source: Micron
48 Comments
ddriver - Monday, November 13, 2017 - link
That's such a great product. NAND wears out and you throw away an otherwise perfectly good memory stick. Because I suppose it would take too much engineering genius to make the NAND chip socketed.
And even more challenging to use the PCIe NVMe storage to save and restore paged memory dumps.
My only criticism is they didn't put RGB LEDs on it.
Billy Tallis - Monday, November 13, 2017 - link
It uses SLC NAND, and only writes to it when there's an unexpected power failure. How many servers are expected to survive 10,000 power failures?

ddriver - Monday, November 13, 2017 - link
True, but it is still a dumb idea for a number of reasons. There is no point in increasing complexity. More chips, more things to fail. Pointless cost increase. A separate controller for each DIMM. Low bandwidth with a single NAND channel. Requires additional hardware and software support, so more room for bugs. What's the point when it is trivial to accomplish the same with a few lines of code and a general purpose NVMe drive?
The question was entirely rhetorical of course. Because this way generates more profit on waste.
There is an even better solution to begin with: you don't really need a complete memory dump, as there is a lot of memory that will not be holding any important information. What you have to store and then restore is only the usable program and data state. Much less data written or read compared to a dumb memory dump, a much shorter time to save and avoid running out of power in the middle of the operation, and a much shorter time before your servers are back to operational when powered back on.
extide - Monday, November 13, 2017 - link
Again, you clearly didn't read the article...

ddriver - Monday, November 13, 2017 - link
What a convincing argument you make. The maker of a product says it is good and useful - unprecedented. Must be genuine. It might be a mystery for you, but it is actually possible to read something and disagree with it based on understanding. Much like your silent agreement is because, as simple as all this is, it goes completely over your head, thus leading to your laughable conclusion: you are smart because you agree with something you don't understand, and I am dumb because I disagree, which is the opposite of agreeing, and the opposite of smart is dumb, therefore I am dumb.
Or maybe you have some problem with people who have higher standards than you do, and therefore go for a good application of peer pressure to force a dissident to conform. News flash - that only works when you have idiots on both sides. The less of an idiot one gets, the less one craves the approval of idiots.
ddrіver - Monday, November 13, 2017 - link
There's plenty of ways to make this product great. They should make the RAM chips socketed. The controller could just count how many errors have to be corrected in RAM and identify the offending chip, so you could easily replace it instead of throwing away a perfectly good RAM stick. Also, they could have used some off-the-shelf components (a Pentium/Celeron CPU, for example) to make this instead of a custom FPGA. Clearly an attempt to milk stupid multi-billion dollar corporations or people like me who actually do something special with computers. They could also implement a software solution to allow you to choose the important data that should be protected.
And I know I complained a few articles ago that saving 10s on every bootup is worthless, but now I tend to think that saving 3-5s every time you recover from a power loss (when the generator, UPS, battery, and local supercapacitor all fail) could be kind of a big deal. This easily happens maybe every few years, so it adds up.
edzieba - Monday, November 13, 2017 - link
"They should make the RAM chips socketed."Pfffhahaha! Tell another one!
StevoLincolnite - Monday, November 13, 2017 - link
RAM chips haven't been socketed in several decades...

extide - Tuesday, November 14, 2017 - link
You are completely missing the point of these things. They are not for saving bootup time. They are to ensure you don't lose certain pieces of data if/when a server loses power, like filesystem or database journals (mentioned in the article). You need a controller on each DIMM; otherwise you would need an entirely new DIMM interface/standard, and that would be more complex.
The performance of the NAND (with a single package as you mentioned) doesn't matter because the NAND isn't used during normal operation. The NAND is only used when the power cuts off and then the data is copied from DRAM to NAND. For this same reason the write/erase cycles don't matter because the NAND isn't used very much.
You mention that this could be accomplished with a few lines of code and a general purpose NVMe drive. No, it can't. If the server suddenly loses power, those lines of code won't execute.
You mention that you don't need a complete memory dump, which is correct. Again, this is mentioned in the article, which you clearly didn't read. It says you would generally only use 1-2 of these DIMMs per CPU in a server, and then your OS/hypervisor would need to be aware of which memory ranges exist on the NVDIMMs vs regular DIMMs.
Then you go on about a bunch of gibberish trying to insult me, in a post that frankly belongs on r/IamVerySmart.
RAM chips socketed? Why would you do that when the DIMM ITSELF is socketed? Ridiculous.
As far as using a general purpose CPU vs an FPGA, an FPGA is a lot simpler here. An FPGA can do this sort of mass data transfer much more quickly, plus it looks like they are using an Altera MAX series FPGA, which has built-in flash and thus doesn't need an external EEPROM to store the bitstream. A general purpose CPU would indeed need an external flash device, so again more complicated (which you are trying to advocate against). You also mention them trying to 'milk' people, but those MAX series FPGAs are cheaper than generic Pentium/Celeron chips, and use a lot less power as well.
Again, you mention that they should implement a software solution allowing you to choose the data that is backed up... which is exactly how it works. Again, you clearly didn't read the article. This is accomplished by where you store your data in memory: on one of the NVDIMMs or on a regular DIMM.
PeachNCream - Tuesday, November 14, 2017 - link
"Clearly an attempt to milk stupid multi-billion dollar corporations..."Multi-billion dollar corporations don't become multi-billion dollar corporations by repeatedly doing stupid things or taking unprofitable actions.