Want to read our news about this event? Go here!

https://www.anandtech.com/show/12828/intel-launches-optane-dimms-up-to-512gb-apache-pass-is-here

12:34PM EDT - It's a Live Blog! More Optane inbound, looks like Apache Pass.

12:35PM EDT - Time for an unexpected Live Blog. I'm in San Francisco at Intel's building discussing Persistent Memory

12:35PM EDT - Almost three hours of talks this morning; first up is Lisa Spelman, covering the Data Center market

12:36PM EDT - Drivers revolve around latency - the ability to compute and respond as quickly as possible

12:36PM EDT - Intel is more than a CPU company

12:37PM EDT - A more holistic value of performance across compute, storage, acceleration and connectivity

12:37PM EDT - Acceleration includes new instructions for DL training and FPGAs

12:37PM EDT - Connectivity about removing bottlenecks around networking: Omni-Path, Silicon Photonics

12:39PM EDT - $54B TAM in 2017, $73B TAM in 2022, MSS (market segment share) estimate at 35%

12:39PM EDT - >Sorry, Camera trouble. No network!

12:40PM EDT - Map services have evolved as the hardware has evolved

12:40PM EDT - Bandwidth, latency, and detail requirements have all grown

12:41PM EDT - Also user generated data

12:41PM EDT - Use cases will be unlocked as people start to look at data as an asset, rather than a burden

12:42PM EDT - Data generates data, compounding the benefit

12:42PM EDT - Intel has made progress in the industry, but huge amount of potential still there

12:43PM EDT - As data exists today, three tiers: Cold Tier (HDD/Tape), Warm (SSD), Hot (DRAM)

12:45PM EDT - Fundamentally rethinking how applications are architected is difficult

12:45PM EDT - Announcements today: Optane DC Persistent Memory

12:45PM EDT - Sampling today, Revenue in 2018

12:45PM EDT - 'Crystal Ridge' ?

12:45PM EDT - Larger capacity of memory at lower cost

12:46PM EDT - Applications developed with persistent memory in mind

12:46PM EDT - Capability for lots of applications

12:46PM EDT - Working with a bunch of companies for a long time

12:46PM EDT - Microsoft, Oracle, Red Hat, Ubuntu, SUSE, VMware

12:47PM EDT - Announcing for developers: remote access to systems with Optane DC Persistent Memory through the Intel Builders Construction Zone

12:48PM EDT - Available shortly

12:48PM EDT - Weeks rather than months, but no exact date

12:48PM EDT - Complimentary, but invite only. Have to submit proposal

12:49PM EDT - Capacity sizes for developers to be covered later

12:49PM EDT - Currently, Intel offers a Caching and Fast Storage tier and a Mainstream tier

12:50PM EDT - Larger capacities at more affordable price points

12:50PM EDT - Bridging the gap between the three tiers

12:51PM EDT - Optane DC Persistent Memory slots in below DRAM; Optane SSDs sit above NAND SSDs

12:51PM EDT - Intel QLC incoming

12:52PM EDT - Alper Ilkbahar, VP/GM of DC Memory and Storage, on the stage now

12:52PM EDT - Discussing more technical details

12:53PM EDT - 'We want you to leave today with the same level of excitement as us'

12:53PM EDT - >As long as we leave with a sample... :)

12:54PM EDT - Reiterating the Mind The Gaps hierarchy

12:54PM EDT - Cold/Warm/Hot tiers are not continuous, they have their own specific reasons for being the way they are

12:55PM EDT - DRAM is a good fit for low latency and small access granularity

12:55PM EDT - Speed comes at the cost of implementation, and limitations on capacity

12:56PM EDT - The storage tier is accessed through different mechanisms than memory

12:56PM EDT - A storage access costs around 10,000 cycles, and the context switch means cached data is lost

12:57PM EDT - Data returns in blocks, not bytes

12:57PM EDT - That's a latency gap of 3 orders of magnitude

12:58PM EDT - Persistent memory bridges the gap

12:59PM EDT - Two paths: go through existing storage drivers, or let applications access the hardware directly and bypass the storage stack

12:59PM EDT - Intel is working on the latter

12:59PM EDT - 'It acts like a true memory'

12:59PM EDT - Simplifies compute significantly

12:59PM EDT - DRAM-like speeds on the DDR bus. Hardware accessible, byte addressable, large capacity, lower cost

01:00PM EDT - >(sorry, pictures are SUPER slow to upload)

01:00PM EDT - 3 SKUs at launch: 128 GB, 256 GB, 512 GB

01:01PM EDT - Essentially a hardware memory accessed with load/store instructions
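
As a rough sketch of what load/store access means in practice: once the module is mapped into an application's address space, a plain store instruction writes to it over the DDR bus, and flushing the cache line makes that store durable. The snippet below is a hypothetical illustration using x86 intrinsics - the pointer p is assumed to already point into a persistent-memory mapping set up elsewhere - and Intel's libraries (covered later in the talk) wrap this kind of pattern.

```c
#include <immintrin.h>   /* _mm_clflushopt, _mm_sfence (build with -mclflushopt) */
#include <stdint.h>

/* Minimal sketch, assuming p already points into a persistent-memory
 * mapping (e.g. an mmap'd file on a DAX filesystem, set up elsewhere). */
static inline void store_and_persist(uint64_t *p, uint64_t value)
{
    *p = value;                 /* plain store instruction over the DDR bus */
    _mm_clflushopt((void *)p);  /* flush the cache line holding the store   */
    _mm_sfence();               /* order the flush before subsequent stores */
}
```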

01:01PM EDT - DDR4 pin compatible

01:01PM EDT - Compatible with next generation Xeon platforms

01:01PM EDT - Does not rely on external power or capacitors

01:01PM EDT - Uses NVM technology for persistence

01:02PM EDT - Hardware encryption on chip and advanced ECC

01:02PM EDT - Boot time encryption key

01:03PM EDT - Demo time: database restart when the database is held in persistent memory.

01:05PM EDT - Scales further with big, affordable memory. More capacity = more containers

01:07PM EDT - Meeting the same availability/uptime targets

01:08PM EDT - Helps systems where memory is a bottleneck

01:10PM EDT - Cassandra database example

01:10PM EDT - Optane DC Persistent Memory gives 9x more read transactions (OP/Sec), 11x more users per system

01:12PM EDT - >The image looks like it's using less DRAM but a big Optane drive

01:12PM EDT - > e.g. instead of 1 TB DRAM, use 256 GB DRAM + 1 TB Optane DC PM

01:13PM EDT - Optane DC PM allows for immediate consistency compared to traditional DRAM+NVMe

01:14PM EDT - Now a key-value test comparison

01:15PM EDT - Every now and then, DRAM has to push data out to storage, which limits the perf

01:16PM EDT - 99% perf loss on DRAM+Storage when relying on storage write-backs

01:17PM EDT - 287k ops/sec on Optane DC PM, vs 3k-279k on the DRAM+Storage model

01:18PM EDT - Now Persistent Memory over Fabric (PMoF) for data replication with direct load/store access

01:19PM EDT - Pulling data from PM on another system over the fabric

01:20PM EDT - Current Replication from NAND to NAND across systems has high latency. Over PMoF using Optane DC PM, latency is much lower

01:21PM EDT - On the Database Restart test, Optane DC PM took 16.9s. DRAM+Storage system took 2100 seconds
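
(For scale, that works out to a restart roughly 124x faster: 2100 s / 16.9 s ≈ 124.)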

01:21PM EDT - Moving reliability from three 9s to five 9s

01:22PM EDT - start time goes from minutes to seconds

01:22PM EDT - >Servers can take 10 minutes to start up

01:22PM EDT - Andy Rudoff to the stage. Head PM software architect

01:23PM EDT - 'Joined the project many years ago'

01:23PM EDT - Making sure ISVs can use the technology

01:23PM EDT - How far up the stack should the PM be visible

01:24PM EDT - Like a fast disk, or a fast cache, or a programmable file system

01:24PM EDT - The higher you go up the stack, the more software is needed, but the more value can be drawn from it

01:24PM EDT - Balancing act between the barrier to adoption and increasing application value

01:25PM EDT - SNIA NVM Programming Model. 30 members, co-chaired by Intel and HPE

01:26PM EDT - Making sure ISVs do not get locked into a proprietary API

01:27PM EDT - Three paths - management, storage, and application

01:28PM EDT - Use cases from generic NVDIMM driver (looks like a fast drive) up to a fully configured system

01:29PM EDT - Memory mapping a file typically involves context switching. PM allows for direct access

01:29PM EDT - Persistent Memory Aware File System required for direct load/store

01:29PM EDT - Intel calls this 'DAX' - Direct Access
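
To give a feel for what the DAX path looks like to software, here is a minimal sketch using PMDK's low-level libpmem library: it maps a file from a DAX-mounted filesystem straight into the address space, writes it with ordinary stores, and makes the data durable with a user-space flush instead of a write() syscall. The /mnt/pmem path is an assumption for illustration.

```c
#include <libpmem.h>   /* PMDK low-level library; link with -lpmem */
#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map (and create if necessary) a file on a DAX-mounted filesystem.
     * The path is hypothetical - adjust to your own pmem mount point. */
    char *addr = pmem_map_file("/mnt/pmem/example", 4096,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Ordinary store instructions write the data directly... */
    strcpy(addr, "hello, persistent memory");

    /* ...and a user-space flush makes it durable - no kernel I/O path. */
    if (is_pmem)
        pmem_persist(addr, mapped_len);
    else
        pmem_msync(addr, mapped_len);   /* fallback if the mapping isn't real pmem */

    pmem_unmap(addr, mapped_len);
    return 0;
}
```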

01:31PM EDT - Data structures remain persistent in memory across restarts, saving time

01:31PM EDT - Most developers want to use DC PM with little hardware knowledge

01:34PM EDT - Persistent Memory Development Kit - PMDK

01:34PM EDT - Collection of Libraries

01:35PM EDT - Low level programming support, transaction APIs, performance tuned, open source and product neutral
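
As an illustration of the transaction APIs just mentioned, here is a minimal sketch using PMDK's libpmemobj: a counter stored in a pool's root object is incremented inside a transaction, so after a crash the value is either the old one or the new one, never torn. The pool path is an assumption for illustration.

```c
#include <libpmemobj.h>   /* PMDK transactional library; link with -lpmemobj */
#include <stdint.h>
#include <stdio.h>

/* Declare a pool layout with a single root object holding a counter. */
POBJ_LAYOUT_BEGIN(counter_demo);
POBJ_LAYOUT_ROOT(counter_demo, struct my_root);
POBJ_LAYOUT_END(counter_demo);

struct my_root {
    uint64_t counter;
};

int main(void)
{
    /* Hypothetical pool file on a DAX-mounted filesystem. */
    const char *path = "/mnt/pmem/counter.pool";

    PMEMobjpool *pop = pmemobj_create(path, POBJ_LAYOUT_NAME(counter_demo),
                                      PMEMOBJ_MIN_POOL, 0666);
    if (pop == NULL)
        pop = pmemobj_open(path, POBJ_LAYOUT_NAME(counter_demo));
    if (pop == NULL) {
        perror("pmemobj pool");
        return 1;
    }

    TOID(struct my_root) root = POBJ_ROOT(pop, struct my_root);

    /* Transactional update: libpmemobj snapshots the old value, so a crash
     * mid-update rolls back instead of leaving a partially written result. */
    TX_BEGIN(pop) {
        TX_ADD(root);               /* add the root object to the undo log */
        D_RW(root)->counter += 1;   /* persistent increment */
    } TX_END

    printf("counter = %lu\n", (unsigned long)D_RO(root)->counter);
    pmemobj_close(pop);
    return 0;
}
```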

01:35PM EDT - Not trying to make money on PMDK, trying to enable the ecosystem

01:35PM EDT - PMoF is covered in PMDK

01:36PM EDT - Sync replication available now, Async is a work in progress

01:38PM EDT - Windows and Linux

01:38PM EDT - VTune has been updated to understand Persistent Memory

01:40PM EDT - Free downloads of PMDK available

01:40PM EDT - Cassandra full stack example

01:42PM EDT - Changed Cassandra to use persistent Java containers

01:43PM EDT - The things that Cassandra provides don't change - the app on top using Cassandra doesn't need to know about the PM model

01:45PM EDT - SAP HANA has already done an Optane DC PM demo

01:45PM EDT - VMware has made changes to Hyper-V

01:46PM EDT - Intel has made changes for Xen and posted them for upstream review

01:48PM EDT - Remote Developer Access is now in the early look stage

01:48PM EDT - Submit proposals to Intel. Ready for functional testing and use case exploration

01:49PM EDT - OK small 10-min break

02:08PM EDT - We got some photos of the modules

02:09PM EDT - 256 GB and 512 GB modules

02:09PM EDT - OK here we go again

02:09PM EDT - Bill Lizinske to the stage. Head of the NVM group

02:10PM EDT - Two directions = Hyperconverged and Hyperscale / Disaggregated

02:13PM EDT - Discussing 3D NAND vs 3D XPoint

02:13PM EDT - Optane is great for storage, and caching in particular

02:14PM EDT - 'Optane has impressive endurance'

02:14PM EDT - Drives moving up to 60 drive writes per day

02:16PM EDT - Swapping NAND for Optane might lower the cache size, but gives more performance

02:17PM EDT - Intel will introduce new drives at 60 DWPD, rather than what the slide says about the P4800X. That stays at 30 DWPD

02:17PM EDT - Still using first gen 3DXP

02:18PM EDT - The Intel team worked for a year with VMware to get a significant perf increase using Optane

02:18PM EDT - Wanting to work with all the hyperconverged players

02:20PM EDT - Lower read latencies when taking out software-based latency

02:21PM EDT - Taking away noisy neighbor problems

02:22PM EDT - Intel Optane SSD deployments still increasing, increase in OEMs offering VSAN Ready Nodes

02:24PM EDT - Optane storage in MySQL mission critical cloud server database

02:24PM EDT - tiering the data between Optane and 3D NAND

02:25PM EDT - Always important to balance Optane/NAND ratio

02:26PM EDT - Disaggregating storage from compute allows them to scale independently as required

02:27PM EDT - Use Optane for random access, NAND for sequential access

02:28PM EDT - Making sure the local data set is in the right location for the type of access

02:28PM EDT - 'If you design the system properly'

02:31PM EDT - Disaggregation allows for tuning performance and resilience

02:31PM EDT - Announcements about new Optane storage software stacks in the near future

02:32PM EDT - Now for more on 3D NAND

02:32PM EDT - NAND is all about cost

02:32PM EDT - Replace more HDDs with SSDs

02:32PM EDT - Moving to QLC

02:33PM EDT - Have been using floating gate technology throughout

02:33PM EDT - Currently shipping 64L TLC. Just gone to production of 64L QLC

02:34PM EDT - A bit of Optane and a whole lot of 3D NAND

02:34PM EDT - Now sampling QLC, production availability in H2 2018

02:34PM EDT - Moving QLC NAND into the Warm storage tier

02:34PM EDT - Pushing HDDs further into cold storage

02:35PM EDT - Using the Ruler form factor to reinvent the data center

02:35PM EDT - $10B+ TAM in 2021

02:35PM EDT - 16 TB Ruler

02:36PM EDT - Replace 256 x 4 TB HDDs with 32 ruler drives for a single 1U

02:36PM EDT - (Also, less likely to break)

02:37PM EDT - Ruler = EDSFF

02:37PM EDT - Investment in Fab 2 and Fab 68

02:37PM EDT - Fab 2 in Utah, dedicated for 3D XP. 100% converted

02:38PM EDT - Fab 68 is doing all of Intel's 3D NAND

02:38PM EDT - No public numbers on wafers per month from Fab 68, but much bigger than Fab 2

02:38PM EDT - New fab space at Fab 68 coming online, which is 1.75x bigger than previous

02:40PM EDT - QLC will be in client drives in second half of this year

02:41PM EDT - Q&A time for a bit

02:56PM EDT - OK, Q&A over - here are the questions and answers

02:56PM EDT - Q) Do the DIMMs require next gen CPUs? A) Yes

02:57PM EDT - Q) You mentioned Intel will be revenue shipping for 2018. Will ecosystem partners be shipping systems in 2018? A) We expect systems to be shipping in 2018, but the details of specific customers will be up to them

02:57PM EDT - Q) What is Crystal Ridge? A) Crystal Ridge is the top level family of persistent memory parts, of which Apache Pass is a member

02:58PM EDT - Q) It was mentioned about DDR4 pin compatibility with the DIMMs. Can users mix and match the DIMMs on individual channels, or will there be dedicated channels? A) Not disclosed at this time.

02:59PM EDT - Q) One of the slides had a Xeon Platinum logo. Will the DIMMs only work on Xeon Platinum? A) Not disclosed at this time. We just showed pairing the best with the best

02:59PM EDT - Q) Will there be a DIMM writes-per-day rating? A) Not disclosed at this time, but the DIMMs will remain operational for their full rated lifetime

03:00PM EDT - Q) What is the power draw of the Optane DIMM compared to regular DIMMs? A) Not disclosed at this time.

03:00PM EDT - Q) Does this mean the memory limits of future Xeons are being increased? A) Not disclosed at this time.

03:00PM EDT - Q) Will all future Xeons support the Optane DIMMs? A) Not disclosed at this time.

03:00PM EDT - Q) Clock speeds of the DIMMs? A) Standard DDR4 speeds. Other details not disclosed.

03:01PM EDT - Q) Will the QLC NAND adhere to JEDEC data retention? A) Not disclosed at this time.

03:01PM EDT - That's a wrap. Some discussions and customer presentations for the rest of the day.

  • CajunArson - Wednesday, May 30, 2018 - link

    Persistent memory event?

    This live blog will self-destruct in 5 seconds.
  • A5 - Wednesday, May 30, 2018 - link

    So this is the product form of the Optane DIMMs they showed all those years ago? Neat, it'll be fun when I can get one at home in like 5 years :P
  • HStewart - Wednesday, May 30, 2018 - link

    I think this is different than today's Optane stuff. It sounds like the 512 GB modules are added to the system's RAM and, under special control, allowed to be persistent - which means a huge multi-GB database could be stored on them and have instant loading at RAM-like speeds.
  • haukionkannel - Wednesday, May 30, 2018 - link

    Yep, these replace the RAM. Very different from the normal SSD variants!
  • nonoverclock - Thursday, May 31, 2018 - link

    One clarification: they supplement the memory. This is a new layer of the memory hierarchy.
  • HStewart - Thursday, May 31, 2018 - link

    But they are different from normal RAM - they're persistent - which would be good for larger databases needing faster loading and access.
  • rocky12345 - Wednesday, May 30, 2018 - link

    Would this be why we are getting the likes of Dell now starting to advertise a system with 24GB of memory, which is actually 8GB of DDR4 and 16GB of Optane storage, while trying to make it look like the system has 24GB of memory installed? There was a write-up about it on PCPer, and he said he had mixed feelings about the way they were advertising the system as having 24GB of memory installed when in fact it was only 8GB, and that it was a bit misleading.
  • DanNeely - Wednesday, May 30, 2018 - link

    This is completely different in how it appears to the OS. The Optane in the Dell is an SSD cache.

    It's possible it inspired the scumbag marketeer to start listing it that way, but it's every bit as much lying scumbag marketing as if they were to add the 8GB of DDR4 and the 11GB of GDDR5X in a 1080 Ti and claim 19GB of memory.
  • rocky12345 - Thursday, May 31, 2018 - link

    Yes, you are right - after reading more into it from other sites, as well as AnandTech's write-up here today, I see that it would be impossible for Dell to have this setup in their system. I am thinking some marketing person probably got their facts messed up and just wrote out bad specs, or maybe it was on purpose to confuse customers.
  • wumpus - Saturday, June 2, 2018 - link

    The sad thing is that it would probably work if you just set the drive as a permanent swap file. On the other hand, the type of person who would fall for such a trick would probably see a tiny speed-up.

    Don't tell me: the thing also has no SSD. So even 16GB isn't enough to cache the hard drive, all the while having to page virtual RAM through the Optane cache. So it's a disaster as well as false advertising.
