AI and AR Driving Memory Updates


Friday, December 15, 2023

Samsung Electronics is tackling the bandwidth demand in three ways with the release of new HBM, GDDR and DRAM offerings as part of a slew of announcements the company recently issued at its annual Memory Tech Day.

Samsung is maintaining its “bolt” naming convention first introduced back in 2016, giving its HBM3E DRAM the designation “Shinebolt,” which is aimed at powering next-generation AI applications, improving total cost of ownership (TCO) and speeding up AI-model training and inference in the data center.

In a briefing with EE Times, Jim Elliott, who heads up Samsung's U.S. memory business, said the company is embracing a paradigm shift, moving beyond working only with adjacent hardware platforms such as CPU and BIOS ecosystem partners and customers. "As we continue to broaden and expand our portfolio into different areas, including in the AI space, we really want to take a kind of holistic approach."

He said this means keeping in mind operating systems, software platforms, converged services platforms and network providers so that Samsung can optimize its products all the way down to the end-use application level.

Elliott said Samsung's HBM3E, GDDR7 and 32-Gb monolithic DDR5 reflect the company's efforts to pivot and put "additional muscle" into a portfolio supporting the current AI boom.

Indong Kim, Samsung VP of product planning and business enabling, based in San Jose, California, said the latest HBM3E memory increases capacity while improving power efficiency. Shinebolt's per-pin speed of 9.8 Gbps means it can achieve transfer rates of more than 1.2 terabytes per second (TBps), he said, which the company believes leads the industry and exceeds the JEDEC standard.
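That headline figure follows from HBM's wide interface: each stack presents a 1,024-bit data bus (a characteristic of the JEDEC HBM standard, not a number from Samsung's announcement), so per-stack bandwidth is simply the per-pin rate times the bus width. A quick sanity check in Python:

    # Sanity check of the Shinebolt per-stack bandwidth figure.
    # Assumes the standard 1,024-bit HBM interface per stack.
    PIN_RATE_GBPS = 9.8          # per-pin speed quoted by Samsung, in Gbps
    INTERFACE_WIDTH_BITS = 1024  # data bits per HBM stack (JEDEC HBM)

    bandwidth_GBps = PIN_RATE_GBPS * INTERFACE_WIDTH_BITS / 8  # bits -> bytes
    print(f"{bandwidth_GBps / 1000:.2f} TB/s per stack")       # ~1.25 TB/s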

To get higher layer stacks and improve thermal characteristics in Shinebolt, Samsung has optimized its non-conductive film (NCF) technology to eliminate gaps between chip layers and maximize thermal conductivity.

Kim said the evolution of HBM is accelerating with the proliferation of AI, delivering bandwidth increases while keeping power consumption down. Samsung is scaling capacity linearly in response to customer needs, he said, as customers look to handle exponentially growing model sizes in AI applications, while high-performance computing is adding large numbers of GPUs to be paired with HBM.

Although GDDR memory was developed long before the mainstreaming of AI to meet the needs of high-end gamers and graphics-intensive applications, its ability to process large amounts of data in parallel quickly makes it well-suited for AI applications. Samsung’s 32-Gbps GDDR7 provides 1.5 TBps of memory bandwidth that can be utilized in all sorts of applications, Kim said. “It is just no longer about the gaming consoles, but it is expanding its use cases for AI.” Those use cases are increasingly diverse, he added.
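The 1.5-TBps figure is a system-level number rather than a per-device one. Assuming a 384-bit aggregate memory bus, as found on high-end graphics cards (an assumption; the announcement doesn't spell out the configuration), the arithmetic works out:

    # Rough check of the quoted 1.5-TBps GDDR7 bandwidth, assuming a
    # 384-bit aggregate bus (e.g., twelve x32 devices) -- an assumption,
    # since the system configuration isn't given in the announcement.
    PIN_RATE_GBPS = 32    # per-pin data rate of Samsung's GDDR7, in Gbps
    BUS_WIDTH_BITS = 384  # assumed aggregate memory bus width

    bandwidth_GBps = PIN_RATE_GBPS * BUS_WIDTH_BITS / 8  # bits -> bytes
    print(f"{bandwidth_GBps / 1000:.2f} TB/s")           # ~1.54 TB/s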

As with Shinebolt, reducing power consumption was a goal for GDDR7 as well, Kim said. A notable choice by the company was to stick with PAM3 signaling, which he said was the most efficient option for enabling throughput in a GDDR I/O scheme.
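PAM3 sits between the two-level NRZ signaling of earlier GDDR generations and the four-level PAM4 used in some other high-speed links: three voltage levels can carry log2(3) ≈ 1.58 bits per symbol (in practice GDDR7 encodes 3 bits across every 2 symbols), raising per-pin throughput without PAM4's tighter signal-to-noise demands. A minimal sketch of that trade-off, using an illustrative symbol rate rather than a Samsung spec:

    # Bits per symbol for two-, three- and four-level signaling.
    # The symbol rate below is illustrative only, not a GDDR7 spec.
    import math

    SYMBOL_RATE_GBAUD = 12  # hypothetical symbol rate, in gigabaud

    for name, levels in [("NRZ/PAM2", 2), ("PAM3", 3), ("PAM4", 4)]:
        bits_per_symbol = math.log2(levels)
        print(f"{name}: {bits_per_symbol:.2f} bits/symbol -> "
              f"{SYMBOL_RATE_GBAUD * bits_per_symbol:.1f} Gbps per pin")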

The 32-Gb DDR5 DRAM announced by Samsung also reduces power consumption by 10% compared with its predecessor, which is valuable to the company’s server customers who are looking for power savings.

Jim Handy, principal analyst with Objective Analysis, told EE Times in an interview that all three offerings from Samsung are good products, albeit not "head turning." However, Shinebolt does continue an exciting theme begun with its predecessor, Aquabolt, in that it's an HBM with an AI processor built into it, he said.

Samsung is going after a market that SK Hynix has been serving almost exclusively, Handy added. "They're trying to figure out what's a way that they can get in that will give them an edge." The company's latest HBM3E offering will help if Samsung can be the first to get qualified, he said, as being first is easier than being the second or third vendor to be qualified.

HBM isn’t used for just anything, as it’s a premium memory. More significantly, Handy said, it’s not easy to change suppliers because it’s a difficult high-speed interface, so much so that companies like Rambus have made a business out of helping people design their HBM systems so that the memory works well with their chosen processors. “This is really hard stuff,” he said.

Rambus was already prepping for HBM3 deployments two years ago—before the standard was even finalized—and recently introduced a new HBM3 memory controller IP specifically aimed at boosting AI performance.

Handy added that it's the processor vendors who choose HBM suppliers. "Nvidia chooses whether they're going to use Samsung or SK Hynix or Micron." He said this is why getting qualified first matters: qualification is an expensive undertaking, and different HBM chips have their own subtle differences.

“Once they’ve designed in something with somebody, they’re not going to be eager to go through that effort again to design in a second source,” Handy said.

AI has made the HBM market bigger, he added, noting that the Nvidia division responsible for AI is ballooning, which implies the company is selling a lot of hardware. At the same time, prices have gone up. “There’s a lot of growth, but it’s not a huge market.”

As with HBM, Handy said, the bulk of GDDR output is going to be used for AI. Even DDR5 market growth is going to be influenced by AI applications, he said, with vendors hoping it will make up for some of the recent slowness in the overall server market.

DDR5 has been gaining traction overall—Handy said it’s been in PCs for well over a year, as well as being rolled out in servers. Adoption is “gated” by processors, as those that take DDR5 don’t take DDR4 and vice versa.

Not all of Samsung’s announcements were AI-oriented. It is also looking to get more performance out of the PC with the introduction of what the company said is the industry’s first Low Power Compression Attached Memory Module (LPCAMM) form factor. The immediate market is in PCs and laptops and potentially even data centers, Samsung said.

An LPCAMM is meant to replace either LPDDR DRAM, which is permanently attached to the motherboard, or the DDR-based SO-DIMMs conventionally used in PCs and laptops, which can be attached or detached easily but have limitations in performance and other physical characteristics, including size.

Because it’s detachable, an LPCAMM offers flexibility for computer makers during the production process, Samsung said, while also occupying 60% less space on the motherboard, which allows efficient use of devices’ internal space while also improving performance by up to 50% and power efficiency by up to 70%. These features make it a viable alternative to LPDDR, which does have power-saving features but creates operational difficulties, including the need to replace the entire motherboard when upgrading DRAM.

Handy said PC maker Dell has been championing this concept for some time. "What it's doing basically is it's getting rid of the connectors that DIMMs go into." These connectors slow down the DRAM signals, he said. "The goal of CAMM is to accelerate the connectors so that they don't detract from the speed that the DRAM can provide."

Handy said he’s not sure if the CAM will catch on but does view it as a good idea for getting more speed out of memory modules.

Micron Technology recently made some memory-related announcements aimed at data centers and PCs with the introduction of its high-speed 7,200-MT/s DDR5 memory built on its 1β (1-beta) process node technology. The new DDR5 memory features advanced high-k CMOS device technology, four-phase clocking and clock sync, as well as a 50% improvement in performance and a 33% improvement in performance per watt over the company's previous generation.
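At that transfer rate, the peak per-module bandwidth follows directly from DDR5's 64 data bits per DIMM (organized as two 32-bit subchannels):

    # Peak bandwidth of one 7,200-MT/s DDR5 module: 64 data bits means
    # 8 bytes move per transfer (ECC bits excluded).
    TRANSFER_RATE_MTPS = 7200  # megatransfers per second
    DATA_WIDTH_BYTES = 8       # 64 data bits per DIMM

    peak_GBps = TRANSFER_RATE_MTPS * DATA_WIDTH_BYTES / 1000
    print(f"{peak_GBps:.1f} GB/s per module")  # 57.6 GB/s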

Micron said in a news release that the increase in CPU counts to meet the demands of data center workloads requires higher memory bandwidth and capacities to overcome the "memory wall" challenge. Its 1β DDR5 DRAM, the company said, enables computational capabilities to scale with higher performance, supporting applications like AI training and inference, generative AI, data analytics and in-memory databases (IMDBs) across data center and client platforms.

Augmented reality is also getting some attention from Micron. Its low-power double data rate 5X (LPDDR5X) DRAM and Universal Flash Storage (UFS) 3.1 embedded solutions, aimed at metaverse applications, have been qualified on Qualcomm Technologies' latest extended-reality platform, the Snapdragon XR2 Gen 2, developed in collaboration with Meta. The company said the memory, built on Micron's 1α (1-alpha) process node technology with JEDEC power advancements, provides the speed, performance and low power consumption in the small form factors required for untethered mixed-reality and virtual-reality devices.

By: DocMemory
Copyright © 2023 CST, Inc. All Rights Reserved
