Is 3D flash storage going to take over at data centers?


Friday, September 22, 2017

Over the last few years, the enterprise data center has been embracing the flash storage medium as a core component of modern infrastructure. With nearly 60% of organizations using flash storage already, it's clear this trend is here to stay.

As much as enterprise IT has embraced flash storage, we have only scratched the surface of the ever-changing storage world. An emerging market for persistent memory has been forming and generating a lot of hype, but how does that market look today?

To understand the market, we need to understand how we got here. The last few decades have seen near linear growth in CPU performance and almost flat growth in storage performance. The advent of NAND flash helped close the gap, but the CPU remains woefully underutilized due to legacy storage protocols and software. Persistent memory aims to bridge that performance gap in the enterprise data center, but as in any immature market, there is a lot of hype and confusion to be wary of.

Before we can understand the state of persistent memory, we need to know a little bit about the technology. Namely, we need to understand that it isn't a product or even a specific implementation of a technology. Simply put, persistent memory is persistent storage with latency low enough that it can be used as memory. It's fast like memory and persistent like storage. Lots of new and emerging technologies fall under this umbrella, but to qualify as persistent memory, a technology must:

Appear as a byte-addressable medium from a programming point of view. This means it looks like memory with a virtual address range and not a hard drive with logical block addressing.

Use load/store rather than read/write for data access. It should appear as a memory device that just happens to be persistent (the sketch after this list shows the difference in practice).

Have extremely predictable latency.
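
To make the load/store distinction concrete, here is a minimal C sketch contrasting the two access models. The device and file paths are hypothetical, and the second half assumes the persistent region is exposed as a memory-mappable (DAX-style) file; it is an illustration of the idea, not a reference implementation.

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Block-storage model: data moves through read()/write() system
           calls, a block at a time, via kernel buffers. */
        char buf[512];
        int blk = open("/dev/sdb", O_RDONLY);      /* hypothetical block device */
        if (blk >= 0) {
            pread(blk, buf, sizeof(buf), 0);       /* whole-block transfer */
            close(blk);
        }

        /* Persistent-memory model: map the region once, then use plain
           CPU loads and stores at byte granularity. */
        int pm = open("/mnt/pmem/data", O_RDWR);   /* hypothetical DAX-backed file */
        if (pm < 0) { perror("open"); return 1; }
        uint64_t *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_SHARED, pm, 0);
        if (p == MAP_FAILED) { perror("mmap"); close(pm); return 1; }

        p[0] = 42;              /* a store, not a write() */
        uint64_t v = p[0];      /* a load, not a read() */
        printf("read back %lu\n", (unsigned long)v);

        munmap(p, 4096);
        close(pm);
        return 0;
    }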

The concept is simple enough, but it gets complicated when a single device can be both persistent memory and block storage. The most common commercially available device that can be persistent memory is a nonvolatile dual in-line memory module (NVDIMM). Currently there are three types of NVDIMMs, and each has different characteristics:

NVDIMM-N is just memory-mapped dynamic RAM (DRAM) with onboard flash as a persistence layer. Think of this as DRAM backed up by the onboard flash. When most people say NVDIMM, they are talking about this type of device.

NVDIMM-F is memory-mapped flash. It's similar to a solid-state drive in that it offers block access, but because it sits in the memory channel, it avoids the controller and bus latency of a conventional drive.

NVDIMM-P is a combination of memory-mapped DRAM and memory-mapped flash, giving us the best of both worlds.

And if the taxonomy weren't convoluted enough, we also have nonvolatile memory express (NVMe), which, despite the name, has nothing to do with persistent memory at all; it is a protocol for block storage attached over PCIe.

Moving data as close to the CPU as possible gives us the best performance and maximizes CPU usage on a system. Systems like SAP HANA, high-performance computing and big data applications are driving demand for this kind of innovation. The problem is that applications are designed to use block storage for data persistence. To get around that, we have to either refactor the applications or insert a file system between the persistent memory and the application. To really get all the benefits of persistent memory, the software has to be designed to take advantage of it.
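
As one illustration of the refactoring route, here is a minimal sketch using libpmem from the Persistent Memory Development Kit (PMDK). The library choice and the file path are assumptions for the example, not something the article prescribes.

    #include <libpmem.h>   /* PMDK low-level persistence library (assumed installed) */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        size_t mapped_len;
        int is_pmem;

        /* Create or open a 4 KiB file on a DAX-mounted file system and map
           it into the address space. "/mnt/pmem/log" is a hypothetical path. */
        char *addr = pmem_map_file("/mnt/pmem/log", 4096, PMEM_FILE_CREATE,
                                   0666, &mapped_len, &is_pmem);
        if (addr == NULL) { perror("pmem_map_file"); return 1; }

        /* Update the data in place with ordinary stores... */
        strcpy(addr, "hello, persistent world");

        /* ...then make it durable. On true persistent memory this issues
           CPU cache-flush instructions; on an ordinary file it falls back
           to msync(). */
        if (is_pmem)
            pmem_persist(addr, mapped_len);
        else
            pmem_msync(addr, mapped_len);

        pmem_unmap(addr, mapped_len);
        return 0;
    }

The point of the sketch is the last step: durability becomes a cache flush over a mapped range rather than a write() call into a block storage stack.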

Let's talk about the elephant in the room: Intel 3D XPoint. When Intel and Micron first promoted 3D XPoint -- now called Optane -- it was touted as being 1,000 times faster and more durable than NAND, with 10 times the density of DRAM. The first Optane products on the market are, essentially, PCIe-based caching devices that accelerate existing hard drives, not persistent memory in the sense described above, so they haven't lived up to the massive initial marketing claims. Expect this to change, however, as Intel will be shipping Optane DIMMs in the future. Intel's 3D XPoint isn't the only game in town, either; Samsung has its custom-designed Z-NAND, and Hewlett Packard Enterprise (HPE) has its memristor technology, a non-silicon, RAM-like memory.

For any persistent memory technology to really take off, it's going to need strong support from server manufacturers and operating system makers. With Dell, HPE and Super Micro either already supporting persistent memory or planning to, we can presume the technology is going to stick around. Linux has had strong support for persistent memory since kernel version 4.2, and Windows Server 2016 supports it as well. Both offer native persistent memory file systems along with libraries for developing applications with direct, persistent memory access.
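
To ground the operating-system claim, here is a sketch of the Linux direct-access (DAX) path. The device name is hypothetical, and the MAP_SYNC flag arrived in kernels newer than 4.2, so treat the exact flags as an assumption about later releases.

    /* One-time setup on a DAX-capable file system (hypothetical device):
     *   mkfs.ext4 /dev/pmem0
     *   mount -o dax /dev/pmem0 /mnt/pmem
     * With the dax mount option, file pages bypass the page cache, so
     * mmap() hands the application a window directly onto the media. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/mnt/pmem/state", O_CREAT | O_RDWR, 0666);
        if (fd < 0) { perror("open"); return 1; }
        if (ftruncate(fd, 4096) != 0) { perror("ftruncate"); return 1; }

        /* MAP_SYNC (Linux 4.15+): once a store is flushed from the CPU
           caches, it is durable; no msync() round trip is needed. */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(p, "durable without a write() system call");

        munmap(p, 4096);
        close(fd);
        return 0;
    }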

As with any new technology, cost is a huge factor. NVDIMMs, whose manufacturing overlaps with the NAND flash process, may initially be cheaper than Optane DIMMs. For enterprises to adopt either technology, however, it's going to need to hit the $1.50 to $2 per gigabyte price point. Persistent memory is in its infancy, so don't expect mainstream adoption until 2019 at the earliest.

By: DocMemory
Copyright © 2023 CST, Inc. All Rights Reserved
