Samsung Proposes NVMe for Data Centers


Friday, May 26, 2017

We are in the midst of a remarkable electronics era, in which consumers expect immediate delivery of services, including data access. Companies that can rapidly distill important information from the vast ocean of data and turn it into a useful consumer service are thriving. As a testament to the value of information over hard assets, the world's most valuable taxi company owns no vehicles (Uber); the world's largest bookseller owns no bookstores (Amazon); and the world's largest resource for research owns no libraries (Google).

Memory hierarchy

Services such as these have been made possible by major advances in computing power. Data centers are filled with computing systems that rapidly analyze large volumes of data. There is, however, a tradeoff between how much data you process and how quickly you can process it. This tradeoff is illustrated by the memory hierarchy shown in Figure 1.

DRAM is the quickest form of memory, but it loses its contents when power is removed, which is why it is called volatile memory. Hard disk drives (HDDs) provide lots of capacity, but are much slower. The real improvements have come from solid state drives (SSDs), which are based on NAND flash memory. The industry has made tremendous gains in performance, capacity, and cost-effectiveness in recent years by moving to V-NAND[1] and by shifting to the NVMe (non-volatile memory express) protocol, which runs over the PCIe interface.
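As a rough illustration of the tradeoff the hierarchy represents, the sketch below compares order-of-magnitude access latencies for each tier. The figures are commonly cited ballpark values for illustration only; they do not come from this article.

```python
# Order-of-magnitude access latencies for the memory/storage hierarchy.
# These are rough, commonly cited ballpark figures, not measured values.
hierarchy = {
    "DRAM":     100e-9,  # ~100 ns
    "NVMe SSD": 100e-6,  # ~100 us (NAND flash over PCIe)
    "SATA SSD": 200e-6,  # ~200 us (NAND flash behind SATA)
    "HDD":      10e-3,   # ~10 ms (seek time + rotational delay)
}

for tier, seconds in hierarchy.items():
    print(f"{tier:>9}: {seconds * 1e6:>10.1f} us")
```

The roughly five-orders-of-magnitude gap between DRAM and HDD is what flash-based tiers, and NVMe in particular, help bridge.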

Why NVMe

When it comes to performance, it is important to distinguish between bandwidth and latency. Bandwidth is the aggregate throughput available to multiple tasks accessing storage. Latency, on the other hand, is the storage response time for a single task. The industry is addressing the challenge of achieving high bandwidth and low latency with a highly scalable interface, Peripheral Component Interconnect Express (PCIe), coupled with the NVMe storage protocol. NVMe was designed from the ground up to unleash the speed of NAND flash memory. An SSD using a PCIe x4 interface (running NVMe) can connect to the host computer at speeds up to 32Gbps, about 5x faster than the 6Gbps SATA interface commonly used for storage drives. The NVMe protocol, being optimized for flash memory, also reduces latency by more than 3x compared to the SATA protocol.
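The "about 5x" figure follows directly from the link speeds; a quick sanity check (SATA III's 6Gbps line rate is a well-known figure, not one stated in the article):

```python
# Back-of-the-envelope check of the interface speedup cited above.
PCIE_X4_GBPS = 32  # PCIe x4 link speed cited in the article
SATA_GBPS = 6      # SATA III line rate (well-known figure)

speedup = PCIE_X4_GBPS / SATA_GBPS
print(f"PCIe x4 is ~{speedup:.1f}x faster than SATA")  # ~5.3x
```

Note this compares raw link rates only; real-world throughput also depends on protocol overhead and the drive itself.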

To put a dollar figure on the value of low latency: Amazon has determined that every 100ms of latency costs it one percent in sales, and Google found that an extra 500ms in search page generation time dropped traffic by 20 percent. Another study showed that a broker could lose $4M per millisecond if his or her electronic trading platform is 5ms behind the competition.[2] To gain a competitive edge, most leading data center operators are now transitioning their storage to NVMe SSDs.
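As a worked example of those figures (the revenue and added-latency numbers below are hypothetical, chosen only to illustrate the arithmetic):

```python
# Hypothetical illustration of the latency-cost figures quoted above.
ANNUAL_REVENUE = 10_000_000_000  # hypothetical $10B/year online business
LOSS_PER_100MS = 0.01            # 1% of sales per 100ms of latency (Amazon figure)

extra_latency_ms = 300           # hypothetical added latency
revenue_at_risk = ANNUAL_REVENUE * LOSS_PER_100MS * (extra_latency_ms / 100)
print(f"Revenue at risk: ${revenue_at_risk:,.0f}")

# Trading example: $4M lost per millisecond behind the competition [2]
trading_loss = 4_000_000 * 5     # platform 5ms behind
print(f"Trading loss: ${trading_loss:,}")
```

Even at these modest assumptions, shaving milliseconds off storage response times translates into material revenue.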

PCs and hyperscale data centers

The PC market has seen the first wave of NVMe SSD adoption: many Apple[3] and Microsoft[4] notebooks currently on the market reportedly ship with NVMe SSDs, and the rest of the PC market is quickly transitioning as well. The millions of units shipping in the volume PC market bode well for the NVMe protocol, which is now being leveraged in the enterprise space. The second wave of NVMe adoption is happening in servers. Figure 2 below illustrates the quick transition to NVMe in servers, particularly those deployed in hyperscale data centers (shown in light blue). Hyperscale data centers can scale compute, optimize memory, and store data seamlessly as demand on a system increases; public cloud infrastructure providers tend to use this architecture. A characteristic of hyperscale data centers is that they adopt the latest technology and evolve very quickly, as evidenced by the dramatic transition to NVMe now under way.

Next NVMe Wave

The next wave of NVMe adoption will take place in the external storage arrays used in traditional data centers, where legacy considerations as well as dual-port redundancy on the individual drives are required. External arrays today primarily use SAS drives, which are dual-ported and run at speeds up to 12Gbps. There are industry plans to introduce 24Gbps SAS, which may extend the life of the interface; even then, however, it would be difficult to match the scalability and latency of NVMe drives. Figure 3 below summarizes the flash transformation occurring in the enterprise: businesses are switching from systems designed with HDDs in mind to flash-based architectures that provide the best performance for the real-time, data-intensive workloads now becoming the norm in many industries.

NVMe, bandwidth and latency

Leading companies today are pushing the envelope in how quickly they can leverage ever-larger pools of data for their customers. The computing platforms they rely on need to become faster, particularly the memory and storage that handle this data. DRAM is the fastest form of memory outside the CPU, but it is expensive and limited in scalability. Many CIOs, CTOs, and other data center decision-makers have taken note that NVMe SSDs address the bandwidth and latency challenges very nicely for a host of applications, including real-time analytics, high-frequency trading, and artificial intelligence.

By: DocMemory
Copyright © 2023 CST, Inc. All Rights Reserved
