Late last month the AnandTech Forums community team held a two-day AMA session with Intel's Optane technology team about their Optane products and 3D XPoint technology in general. The community team ended up receiving a number of excellent questions on the subject, so we wanted to post a recap of the AMA in a more accessible location. Thanks again to both the Intel team for taking the time to answer reader questions, and to the AT Forums community team for organizing this.

Intel Optane Technology Team

  • Bill Leszinske, Intel Corporate Vice President, Strategic Planning, Marketing, and Business Development
  • Chris Tobias, Director, Intel Optane Technology Acceleration Team
  • James Myers, Director, Data Center Storage Solutions Architecture
  • Avinash Shetty, Senior SSD Strategic Planner and Product Line Manager
  • Roger Corell, Marketing Manager

Q: My questions: being byte-addressable, is power loss protection required on Optane drives? If so, why? Does the controller buffer user data? Any chance of smaller M.2 drives with power loss protection? If not required, why do the enterprise Optane drives have supercap technology?

A: Our Optane drives do have power loss protection to protect customer data. We have released a 100GB M.2.


Q: Optane is built on a process unlike any of Intel's other products. Are there any plans to fab non-storage ICs on the chalcogenide process?

A: We can build 3D NAND and Optane technology in the same fab.


Q: Now that you are no longer collaborating with Micron on developing 3D XPoint/Optane after Gen 2, do you plan to fab it at the Dalian fab in China as well as your 3D NAND?

A: We can build Optane technology in multiple factories, but we're still building it at IMFT, which is still jointly owned with Micron.


Q: Do the M.2 version and the Optane 905P provide full power loss protection, like the DC P4800X?

A: The power loss protection capability is built into our Enterprise SSDs. This includes the AIC, U.2 and M.2 form factors.


Q: Are you working with any database providers (Microsoft, PostgreSQL, etc.) to help them take full advantage of Optane's distinctive characteristics (latency/mixed workload perf/low queue depth perf)? Very broadly speaking, how do you currently expect Optane DIMMs' bandwidth and latency to compare to DDR4? Do you anticipate it being possible to create a RAID of Optane DIMMs? Is there anything else that you want to say about Optane and database performance?

A: Databases are a great fit for Optane technology. Yes, we are working with all the major database companies. Here is an example of our work with the developers of MySQL. You can try it yourself here.

We are building a developer community for optimizing software for Optane SSDs with our partners at Packet. You can join this community on Slack. Details are here.

Similarly, we are growing a developer community for optimizing software applications to take advantage of persistence, aligned to the Intel Optane DC Persistent Memory DIMMs. More info here.


Q: How well do Optane SSDs perform for long-term file storage? Is the performance better than NAND's for this application? Is there an expected failure time for long-term storage of dead files? If I save a file and don't access it for three to five years, will it still be there when I do need it?

A: Optane SSDs deliver the most benefit when accelerating applications with dynamic data. Optane data persistence is the same as any other enterprise SSD's, with a data retention specification of a minimum of 3 months in a powered-off condition at end-of-endurance life. Optane endurance is significantly higher than NAND SSDs', with a capability of up to 60 drive writes per day (DWPD), compared to 3 DWPD for Intel's highest-endurance production NAND SSD today. NAND or HDDs may be better suited to storing static, cold data.
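
To put those endurance ratings in perspective, a DWPD figure implies a total-bytes-written (TBW) budget over the drive's warranty period. A quick back-of-the-envelope calculation; the 750GB capacity and five-year warranty below are illustrative assumptions, not quoted specs:

```python
def tbw(capacity_gb: float, dwpd: float, warranty_years: float) -> float:
    """Terabytes written implied by a drive-writes-per-day rating."""
    return capacity_gb * dwpd * warranty_years * 365 / 1000

capacity_gb, years = 750, 5  # illustrative values only
print(tbw(capacity_gb, 60, years))  # 82125.0 TB at 60 DWPD (Optane rating)
print(tbw(capacity_gb, 3, years))   # 4106.25 TB at 3 DWPD (NAND rating)
```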


Q: Are there any plans to create Optane cache RAID controllers? Eliminating the need for a BBU jumps out as a major benefit right off the bat.

A: I recommend you take a look at Intel Virtual Raid on CPU (VROC). VROC both reduces TCO and accelerates performance. It works well with Optane SSDs today. Take a look here.


Q: I use Optane quite a bit, but in configurations that are not officially supported. For example, a 2TB SATA SSD paired with the 800P has both better performance and a better price than high-end 2TB NVMe SSDs. Is there a reason this is not officially supported? Will configurations like this ever be officially supported?

A: It is officially supported. You can find out more here.


Q: The real-world performance benefits of Optane in consumer workloads are said to be less than they could be because Windows and other consumer apps don't know how to use Optane. Is there a timeline on when there will be a generational SOFTWARE performance boost for Optane users? Will the software performance boost be bigger than the boost from new generations of hardware?

A: We're continuing to work with all major OS providers and software vendors to enhance software performance and remove storage I/O latency. The ecosystem will continue to improve software to take advantage of hardware. Software development is a continuous evolution. Some public examples are Star Citizen and DaVinci Resolve. Check out our keynote from Computex for more info.


Q: Do you see Optane replacing NAND-based SSDs anytime soon? Do you see it getting cheaper through economies of scale?

A: Intel has announced QLC NAND SSDs. We see those aggressively growing the NAND storage market as an alternative to HDDs. We believe a tiered data strategy, with Optane for cache/journaling/metadata combined with QLC, is a great approach to reducing the total cost of ownership of data storage. It delivers the most value for customers through performance and cost-effectiveness.
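
As a rough sketch of the tiering idea in that answer, the toy model below (purely illustrative; the class and its LRU eviction policy are our own invention, not Intel's caching software) absorbs writes in a small fast tier and spills cold blocks to a large capacity tier:

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier store: a small fast tier (think Optane) over a large,
    cheap capacity tier (think QLC NAND). Illustrative only."""

    def __init__(self, fast_blocks: int):
        self.fast = OrderedDict()  # hot data, kept in LRU order
        self.capacity = {}         # cold data
        self.fast_blocks = fast_blocks

    def write(self, lba: int, data: bytes) -> None:
        self.fast[lba] = data      # the fast tier absorbs all writes
        self.fast.move_to_end(lba)
        if len(self.fast) > self.fast_blocks:
            victim, vdata = self.fast.popitem(last=False)
            self.capacity[victim] = vdata  # evict the coldest block downward

    def read(self, lba: int) -> bytes:
        if lba in self.fast:       # hit: fast-tier latency
            self.fast.move_to_end(lba)
            return self.fast[lba]
        return self.capacity[lba]  # miss: capacity-tier latency

store = TieredStore(fast_blocks=2)
for i in range(4):
    store.write(i, f"block{i}".encode())
print(sorted(store.fast))      # [2, 3]: hot blocks stay in the fast tier
print(sorted(store.capacity))  # [0, 1]: cold blocks spilled to capacity
```

A real caching layer adds write-back flushing, hit-rate tracking, and crash consistency, but the economics are the same: a small amount of fast media serves the hot working set while cheap, dense QLC holds the bulk of the data.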


Q: The Optane 900P and 905P PCIe cards both have a PCIe x4 connection and are somewhat bandwidth-limited by it. Will there be Optane PCIe cards with x8 or even x16 connections, to have all the benefits of Optane along with improved bandwidth for large file transfers?

A: There are third-party solutions enabling multiple SSDs to be aggregated into a high-bandwidth slot. In addition, the PCIe 4.0 spec has been announced, which will bring higher bandwidth to future platforms and SSDs.
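
For context on the ceiling in question: a PCIe 3.0 x4 link tops out just under 4 GB/s of payload bandwidth per direction, and either doubling the lane count or moving to PCIe 4.0 roughly doubles that. A quick sanity check using the standard 128b/130b encoding of PCIe Gen 3 and 4:

```python
def pcie_gb_per_s(transfer_rate_gt: float, lanes: int) -> float:
    """Theoretical one-direction PCIe payload bandwidth in GB/s.
    PCIe 3.0/4.0 use 128b/130b encoding (about 1.5% line overhead)."""
    return transfer_rate_gt * lanes * (128 / 130) / 8

print(f"PCIe 3.0 x4: {pcie_gb_per_s(8, 4):.2f} GB/s")   # ~3.94 GB/s
print(f"PCIe 3.0 x8: {pcie_gb_per_s(8, 8):.2f} GB/s")   # ~7.88 GB/s
print(f"PCIe 4.0 x4: {pcie_gb_per_s(16, 4):.2f} GB/s")  # ~7.88 GB/s
```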


Q: Specifically, I would be interested in the ability to use a consumer Optane drive as both a cache and a regular storage drive (perhaps via partitioning): for example, a hypothetical 256GB Optane drive reserving 64GB to act as a cache while the rest acts as a regular drive.

A: We don't support partitioning the drive to be used as both a cache and storage, and we currently don't have plans to support this in the future. As to your second question, we are exploring this possibility for a future release.


Q: Are there plans to expand the cache size? Are there any plans for consumer-targeted products using the U.2 form factor?

A: Yes, and in time densities will continue to increase. And yes, we have U.2 consumer-level products. You can find information here.


Q: Are there any figures for latency and bandwidth of the persistent memory (PM) product? Also, is there any update on the developer challenge?

A: We are growing a developer community for optimizing software applications to take advantage of persistence, aligned to the Intel Optane DC Persistent Memory DIMMs. More info here. We previously shared some details on PM capabilities; you can find them here.


Q: I am asking about the future of hardware RAID controllers. With Optane DIMMs on the way it would seem that RAID controllers that support both DDR4 and Optane DIMMs could offer more flexibility than is currently available. Are there any plans to create AIO SSDs with both NAND and Optane? I am picturing a large and cheap bank of NAND paired with 64GB of Optane cache all in one device.

A: For your RAID question: in our testing with all-SSD configurations behind hardware RAID controllers, most applications see the best performance when the RAID controller cache is bypassed.

Intel Virtual RAID on CPU (Intel VROC) is an enterprise RAID solution designed specifically for NVMe-based SSDs. Its biggest advantage is the ability to connect NVMe SSDs directly to the Intel Xeon Scalable processor's PCIe lanes and then build RAID arrays from those SSDs without a RAID host bus adapter (HBA). As a result, Intel VROC unleashes NVMe SSD performance potential without the complexity and power consumption of a traditional hardware RAID HBA. We believe hardware RAID controllers don't provide the best value with NVMe SSDs.


Q: If Optane Memory is used as a stand-alone data drive, can it be used with any system? Say I put it in an NVMe-to-USB external enclosure. Does Optane Memory, as a data drive, work on older or otherwise unsupported systems?

A: Yes, any system that has been shown to support a standard NVMe SSD is also likely to work with an Intel Optane Memory device used as an NVMe SSD.


Q: Are there any plans for LightNVM / Open-Channel based products?

A: The media used in an Optane SSD is quite different from NAND. NAND media is erased in large blocks and written in smaller pages, and a block must be erased before it can be written again. Because a large area must be erased before a small area can be written, NAND needs significant spare area, plus garbage collection to clean up and recover that space. The main benefit of Open-Channel NVMe is letting the host control when garbage collection happens, which in theory allows more of the spare capacity to be used and minimizes the performance disturbance garbage collection causes on NAND.

Optane media, by contrast, is a write-in-place media: there is no need to erase data before writing, so Optane SSDs have no comparable concept of garbage collection. This is one reason Optane SSD performance is nearly the same for reads, writes, and mixed workloads, and it is also why Optane SSD latency variability is so much better than a NAND SSD's. While it's not clear there's a media benefit, we'll continue to evaluate whether Open-Channel support makes sense.
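
To make that distinction concrete, here is a toy model (purely illustrative; neither controller actually works this way). The NAND-style class must copy out still-valid pages and erase the whole block before any page in it can be rewritten, which is exactly the garbage-collection write amplification described above; the write-in-place class simply overwrites the target address:

```python
class WriteInPlaceMedia:
    """Optane-style toy model: any address may be overwritten directly."""
    def __init__(self, size: int):
        self.cells = [None] * size

    def write(self, addr: int, val: str) -> None:
        self.cells[addr] = val  # no erase step, no garbage collection


class EraseBlockMedia:
    """NAND-style toy model: a page is writable once per block erase."""
    PAGES_PER_BLOCK = 4

    def __init__(self, blocks: int):
        self.pages = [[None] * self.PAGES_PER_BLOCK for _ in range(blocks)]
        self.writable = [[True] * self.PAGES_PER_BLOCK for _ in range(blocks)]

    def write(self, block: int, page: int, val: str) -> None:
        if not self.writable[block][page]:
            # Rewriting a used page forces garbage collection: copy out the
            # still-valid pages, erase the whole block, then write them back.
            survivors = [(p, v) for p, v in enumerate(self.pages[block])
                         if v is not None and p != page]
            self.pages[block] = [None] * self.PAGES_PER_BLOCK
            self.writable[block] = [True] * self.PAGES_PER_BLOCK
            for p, v in survivors:  # amplified writes
                self.pages[block][p] = v
                self.writable[block][p] = False
        self.pages[block][page] = val
        self.writable[block][page] = False


nand = EraseBlockMedia(blocks=1)
nand.write(0, 0, "a")
nand.write(0, 1, "b")
nand.write(0, 0, "a2")  # triggers a full block erase plus a rewrite of "b"

optane = WriteInPlaceMedia(size=4)
optane.write(0, "a")
optane.write(0, "a2")   # a plain overwrite; no extra work
```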


Q: How much does the current PCIe topology of Intel consumer platforms affect the performance, latency, and scalability of multiple Optane units?

A: Direct CPU attach for a single device reduces latency by removing the PCH hop. We can measure that at the hardware level, but it may not show an end-user application benefit due to software overhead. The real benefit of CPU-attached storage comes with multiple devices in RAID: as you highlighted, direct CPU attach can provide higher sequential performance when multiple devices are connected in a RAID array.


Q: Besides the Optane SSD units themselves, how important do you consider the entire ecosystem of support adapters and accessories? How much do you estimate they indirectly affect Optane (or other NVMe SSD) prices and adoption?

A: We recently announced the M.2 905P with capacities up to 380GB, and we continue to work with the ecosystem to create M.2 adapters (up to 4x M.2) that take advantage of the form factor, similar to the ASRock card you mention.

Source: AnandTech Forums

Comments

  • surt - Sunday, October 14, 2018 - link

    Intel is at the cutting edge of AI research. There's actually a pretty solid chance these answers _were_ provided by robots.
  • abufrejoval - Thursday, October 11, 2018 - link

    What I read between the lines of Intel answers:
    Hardware RAID no longer makes any sense. XOR offload and other logic are better dealt with by the CPU. What remained was draining a BBU-backed DRAM cache after power recovery, because that either involves heavy OS rework or a smart controller.

    With the writeback cache fully committing to cache-line phase change NV-RAM instead of BBU-backed DRAM, any capacitors are really just about finishing the current block and perhaps another n-copy block of initial metadata for the NV-RAM journal: that is OS-transparent enough, and we're no longer dealing with IDE interfaces even at boot.
  • abufrejoval - Thursday, October 11, 2018 - link

    missing "granularity": 'cache line granularity phase change', want edit. Just want to high-light, it may be byte addressable but AFAIK all memory controller operations are actually at cache line level, including some write amplification for a theoretical byte update.
  • woggs - Thursday, October 11, 2018 - link

    That all seems about right.

    Byte-addressable does not mean an individual byte is directly alterable, because there is a minimum write granularity for ECC parity and encryption. Altering a byte means reading that whole unit, changing the byte, and reapplying ECC and encryption.

    And since the write granularity is large, possibly requiring a read, some amount of power loss protection is required, no matter how fast or good the media: nothing can be instantaneous. And I would bet there is a pipeline of data flowing through the controller, so it's not just one thing at a time to be concerned with. That can be seen in the reviews on this site showing that queue depth still affects bandwidth on the Optane SSDs... it matters a lot less than on a NAND SSD, but it still matters.
  • hpvd - Thursday, October 11, 2018 - link

    nice summary!
    Some links to Intel are broken, like in the answers to
    Q: Are there any plans to create Optane cache RAID controllers?
    Q: I use Optane quite a bit but in configurations that are not officially supported. ...
  • Ryan Smith - Thursday, October 11, 2018 - link

    Thanks!
  • abufrejoval - Friday, October 12, 2018 - link

    3 month unpowered retention time: I guess the major takeaway is that surprises tend to be least appreciated in storage. And storage which forgets when it’s not used is far more ‘biological’ than most of us like in a computer, which is mostly about complementing our talents.

    While I have abstractly known that consumer and enterprise SSDs have rather distinct standardized retention times, I have always wondered how you would have to compensate for that: should you actively re-write logical data blocks at a minimum of said retention interval? Because in non-cache usage there would be blocks that might never get rewritten during normal use, e.g. the master boot block or similar read-only master data, even in an otherwise very active database.

    Should you do patrol reads or at least regular full disk reads to catch failing blocks and have them rewritten or is that something that an SSD would do on its own?

    Would it have to be powered on to do that sort of background maintenance or does it need to be told that six months have passed since the last power-on so it can gauge its housekeeping accordingly?

    Sounds like a really nice topic for an in-depth coverage of the type that sets Anandtech apart :-)
  • sendai - Friday, October 12, 2018 - link

    "Our Optane drives do have power loss protection to protect customer data. We have released a 100GB M.2."

    Are they talking about the 800P here, or another product? As far as I've heard, the smallest 905P will be 380GB.
  • emvonline - Monday, October 15, 2018 - link

    question: is optane pm read latency close to dram? can i write to pm directly using rdma?

    answer: we do think the redsox will win the world series. and we like the cloud

    seriously... it's like watching a political debate
