Intel's Optane DIMMs for servers so far a disappointment


Monday, April 8, 2019

Intel's long-awaited Optane DIMMs so far provide a modest boost to select apps and are not expected to bring big revenues or even profits.

Intel and Micron launched 3D XPoint with great fanfare in July 2015 as the first new memory architecture in decades. Nearly four years later, the chips are finally shipping in both solid-state drives (SSDs) and dual-inline memory modules (DIMMs), but their sparkle has dimmed.

Intel provided the first design and performance details of its Optane DIMMs and its proprietary DDR-T protocol as part of a broad release of new data center chips. The disclosures provided a glimpse into the difficulty of creating a new memory chip, a cautionary tale for rivals such as Micron and Samsung likely to follow a similar path in the future.

A laundry list of promising magnetic, resistive, and phase-change memories like Optane has been sitting in labs for years. But carving out a new tier in a computer’s memory hierarchy for them can keep many companies behind the jackhammer for many years.

Intel had to make changes to its CPUs, the DDR memory protocol, and existing DIMM designs. The biggest lift is still ongoing: getting partners to modify application software to take advantage of persistent main memory. Changes in Linux and Microsoft server OSes are now generally in place.

Intel gets kudos for doing all of these jobs in tandem while defining and ramping production on a new memory architecture that it still keeps somewhat secret. Unfortunately, it’s all coming together in a year when DRAM and NAND prices are plummeting, dimming the outlook for revenues and profits.

“My 2015 forecast was Intel could get up to $2 billion in revenue in the first two years of introduction, but it is likely to be significantly smaller now,” said Jim Handy, a memory market watcher with Objective Analysis. “A more realistic figure is probably 10% of that or $200 million, in part because we are going through a collapse of memory prices.”

To attract users, Intel hopes to sell the Optane DIMMs at about half the price of comparable DRAM modules, he said. But with the current costs of ramping Optane, Intel is the only memory-chip vendor currently making a loss, he added.

“The Optane DIMMs were supposed to come out in 2017 with Skylake-SP processors but were delayed,” said Linley Gwennap of the Linley Group. “Now, the maximum capacity of DRAM DIMMs has gone up and their prices are coming down significantly — the problem is that DRAM is always getting better.”

Performance benefits vary and are not dramatic

Adding to the troubles, calculating performance benefits for Optane DIMMs is a complex business because the gains vary both by application and across the modules’ programmable 12- to 18-W power range. For example, read latencies range from 180 to 340 nanoseconds.

Intel reported on a handful of specific use cases that it has tested, each one significant but none both broad and dramatic. For example, the DIMMs support 30% more virtual machines per server running Microsoft Hyper-V or can cut the cost of such servers by a third.

Similarly, users of SAP Hana can restart systems 12.5× faster or save 39% on the costs of a large in-memory database. Intel did not detail gains on some of its other target apps such as Memcached, real-time analytics, and Apache Spark.

Intel engineers said that they expect to spend a long time exploring possible uses for the new technology. Card vendors will set prices for the DIMMs, but Intel says it aims for 1.2× the performance per dollar of DRAM on target workloads.
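
For rough context (an illustration based on the figures above, not Intel’s own math): if the DIMMs really do sell for about half the price of DRAM, a 1.2× performance-per-dollar target means an Optane-based configuration only has to deliver about 60% of an equivalent DRAM configuration’s performance on that workload, since 0.5 × 1.2 = 0.6.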

The overall picture left analysts lukewarm.

“The performance doesn’t seem very compelling … and more performance with more memory is not a fair comparison,” Gwennap said. “There are some apps that benefit … but not at the 2× to 4× gains you typically expect from a new technology.”

It’s not clear if the story gets better over time. Intel would not comment on a roadmap for today’s 20-nm Optane chip. Handy said that Intel could move it to a 15-nm NAND process next before migrating to a more DRAM-like process.

“I’m most impressed with the corporate effort Intel made to put all the pieces in place,” Handy said. “Startups in this area forget they need to change OSes and apps.”

Intel’s experience may make rivals think twice about memory niches they believe they can fill. If they plow ahead, the future market may look very different.

Talk today of hot, warm, and cold storage will have to expand to include tepid, lukewarm, and chilly, Handy quipped. One Optane executive foresees a complex memory market ahead with a wide variety of different products for different use cases.

A look at the modules and a glimpse of the DDR-T protocol

Intel provided some interesting insights into its designs. Like solid-state drives, Optane DIMMs sport their own controller with SRAM as well as a power management IC, capacitors, and DRAM for an address lookup table.
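
Intel has not published how the controller uses that on-module DRAM, but media controllers of this kind typically keep a logical-to-physical address-indirection table there for wear leveling and for remapping weak blocks. The sketch below is purely illustrative of that idea; every name, size, and figure in it is an assumption, not a disclosed detail of the Optane DIMM.

    /*
     * Illustrative sketch only: a logical-to-physical address-indirection
     * table of the kind a persistent-memory media controller might keep in
     * its on-module DRAM. All names and sizes here are hypothetical; Intel
     * has not published the Optane DIMM's actual mapping scheme.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MEDIA_BLOCK 256u   /* hypothetical media access granularity, bytes */

    typedef struct {
        uint64_t *l2p;         /* logical-to-physical block map, held in DRAM */
        uint64_t  nblocks;     /* logical blocks exposed to the host */
    } indirection_table;

    /* Translate a host address into a media address through the table. */
    static uint64_t translate(const indirection_table *t, uint64_t host_addr)
    {
        uint64_t lblock = host_addr / MEDIA_BLOCK;
        return t->l2p[lblock] * MEDIA_BLOCK + host_addr % MEDIA_BLOCK;
    }

    int main(void)
    {
        indirection_table t = { .nblocks = 1024 };
        t.l2p = malloc(t.nblocks * sizeof *t.l2p);
        for (uint64_t i = 0; i < t.nblocks; i++)
            t.l2p[i] = i;                 /* start with an identity mapping */

        t.l2p[3] = 700;                   /* remap one worn block elsewhere */
        printf("host 0x%x -> media 0x%llx\n", 3 * MEDIA_BLOCK + 16,
               (unsigned long long)translate(&t, 3 * MEDIA_BLOCK + 16));
        free(t.l2p);
        return 0;
    }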

Up to six Optane modules can be used with a single CPU, which also needs some vanilla DRAM to boot. The Optane modules need to sit nearer to the CPU because of signal-integrity concerns.

The boards can pack up to 11 Optane media chips, some used for error correction and redundancy. They keep security keys in both volatile and non-volatile memory stores.

Intel said that the media chips will withstand five years of read/write cycling. However, data is retained for only about three months when the modules are not in use.

Most new Xeon Platinum and Gold versions support the DIMMs through changes to their memory controllers, which can act like cache controllers, using the faster DRAM chips as a cache in front of the Optane modules.

Intel said that its DDR-T protocol supports asynchronous commands and timing that give the Optane DIMM more control than traditional CPU-managed modules. Similar to vanilla DDR, it supports access to 64-byte cache lines.

The DIMMs can be used through three programming models. Without any software changes, they can simply look like more DRAM. They can be recognized as a tier with unique characteristics short of nonvolatile memory, or, with OS and application changes, they can look like a new tier of block storage faster than SSDs and hard drives.
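
For the models that do require OS and application changes, the basic idea on Linux is that a region of the persistent memory is mapped straight into a process’s address space and flushed explicitly, with no block-device I/O in the path. Below is a minimal sketch of that idea using only standard POSIX calls; the mount point and file name are made up for illustration, and production code would more likely use a purpose-built library such as Intel’s PMDK.

    /*
     * Minimal sketch of application-managed persistent memory, assuming a
     * DAX-capable filesystem mounted at /mnt/pmem (the path is hypothetical).
     * Only standard POSIX calls are used here to show the idea.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define REGION_SIZE (4u * 1024 * 1024)   /* 4 MiB persistent region */

    int main(void)
    {
        int fd = open("/mnt/pmem/counter", O_CREAT | O_RDWR, 0600);
        if (fd < 0 || ftruncate(fd, REGION_SIZE) != 0) {
            perror("open/ftruncate");
            return 1;
        }

        /* Map the persistent region directly into the address space. */
        char *pmem = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        if (pmem == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Update a value in place; it survives restarts once flushed. */
        long *counter = (long *)pmem;
        (*counter)++;

        /* Force the store out to the persistent media before continuing. */
        if (msync(pmem, sizeof *counter, MS_SYNC) != 0)
            perror("msync");

        printf("counter is now %ld\n", *counter);
        munmap(pmem, REGION_SIZE);
        close(fd);
        return 0;
    }

The point of this model is that the single flush after the in-place update is all that stands between an ordinary store and a durable one; nothing is serialized through a storage stack.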

“It’s not like 100% of systems will use this,” said Ian Steiner, lead architect for Cascade Lake CPUs. “Some apps don’t like it, and that’s OK. Some things don’t run better like streaming data, but a lot of good apps customers care about running better.”

“There’s no real off-the-shelf applications supporting persistence, and there won’t be anytime soon, so the near-term benefit is you can get a larger size main memory than you can with DRAM,” said Handy.

Intel will continue to work with app developers like SAP and others to change that situation. Meanwhile, companies like Google or Goldman Sachs with deep software prowess and their own internal applications could make use of the DIMMs — if they are willing to change their code and accept a proprietary interface.

By: DocMemory