SK hynix Rolls Out High-Bandwidth Memory Roadmap


Thursday, August 1, 2024

SK hynix has rolled out a roadmap indicating that it intends to stay ahead in production of high-bandwidth memory (HBM), a component indispensable for AI. But the lead the company has gained over rival memory makers Samsung and Micron will face stiffer competition, industry experts told EE Times.

SK hynix said at an industry event in May that it may be the first to introduce next-generation HBM4, in 2025. At that event, the company showed a presentation slide of two HBM3E modules packaged with Nvidia's Grace Hopper GH200 superchip (see illustrations). SK hynix SVP Ilsup Jin, who leads the company's DRAM and NAND technology development, said at ITF World in Antwerp that the company's next-generation HBM4 may be available ahead of expectations.

“HBM4 is coming pretty quick,” Jin said. “It’s coming next year.”

SK hynix has the leading HBM market share, with over 85% in HBM3 and above 70% in overall HBM, SemiAnalysis Chief Analyst Dylan Patel told EE Times earlier this year. The competition is expected to get stronger, according to Sri Samavedam, SVP of CMOS technologies at global R&D organization imec.

“SK hynix was an early adopter, and they got out ahead,” Samavedam told EE Times. “Micron is not far behind. They came out with some really competitive HBM offerings last year and an HBM3E offering this year, as well.”

In February, Micron announced commercial production of its HBM3E, which will be part of Nvidia’s H200 Tensor Core GPUs set to ship in the second quarter of 2024.

Advanced packaging is critical to the wider adoption of HBM. Samsung will offer three-dimensional (3D) packaging services for HBM this year, followed by its own HBM4 in 2025, according to a report in the Korean Economic Daily.

Samsung declined to comment on the report.

Instead, the company told EE Times it plans to introduce HBM3E 12-layer products by Q2 2024. Samsung pledged to strengthen its HBM supply capabilities and technological competitiveness.

The supply of HBM is a potential roadblock to the expansion of AI models and services.

“It’s become a problem,” Samavedam said. “There are only three DRAM manufacturers left in the world with SK hynix, Samsung and Micron. HBM needs advanced packaging, as well. You need interposers, and there are not many companies that can do that. Essentially, Taiwan Semiconductor Manufacturing Co. [TSMC] dominates the CoWoS packaging of the HBM and the interposer. Down the road, we hope there is a little bit more competition with maybe Intel Foundry stepping up in advanced packaging.”

In April, SK hynix signed an agreement with TSMC to develop and produce next-generation HBM and enhance integration of logic with HBM. At the time, SK hynix said it would proceed with the development of HBM4 for production starting in 2026—a year later than Jin’s new 2025 target for HBM4.

[Image: SK hynix HBM diagram]

SK hynix plans to adopt TSMC’s advanced logic process for HBM4’s base die so additional functionality can be packed into limited space, helping SK hynix customize HBM for a wider range of performance and power efficiency requirements.

Intel said it has been in production of HBM products for some time.

“We collaborate with all the major HBM vendors, and advanced packaging is a key pillar of Intel Foundry’s system foundry approach,” Intel told EE Times. “We’re a little different than TSMC in that we’ll integrate tiles from any foundry, not just ours.”

Alternative memories

Alternatives that improve on HBM would cut energy consumption by putting memory even closer to processors.

The energy cost of data communication is high and rises steeply with distance. The ideal is to put as much memory as close to the processor as possible, according to Samavedam.

“If you are in the processor, there are these register files which are very local,” he said. “If you think of that as 1× the energy, and you go to an SRAM cache [on top of the processor] like an L2 or L3 cache, that’s about 100× in energy. If you have to go to HBM to fetch the data, that’s 500× the energy. If you can move memory very close to the processor, that’s always good from an energy perspective.”
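Samavedam's hierarchy can be sketched as a simple relative-cost model. Only the 1×/100×/500× ratios come from his figures; the picojoule baseline is a hypothetical placeholder for illustration:

```python
# Relative energy cost of a data access at each level of the memory
# hierarchy, per Samavedam's multipliers: register file = 1x,
# on-chip SRAM cache ~100x, HBM ~500x. The 1 pJ baseline is an
# assumed placeholder, not a figure from the article.
REGISTER_ACCESS_PJ = 1.0  # hypothetical baseline energy per register access

RELATIVE_COST = {
    "register_file": 1,    # very local, inside the processor
    "sram_cache": 100,     # L2/L3 cache on top of the processor
    "hbm": 500,            # stacked DRAM off the die
}

def access_energy_pj(level: str, accesses: int) -> float:
    """Total energy in picojoules for a number of reads at a given level."""
    return REGISTER_ACCESS_PJ * RELATIVE_COST[level] * accesses

# Fetching the same 1,000 operands from HBM costs 500x the
# register-file energy, regardless of the assumed baseline:
hbm_over_register = access_energy_pj("hbm", 1000) / access_energy_pj("register_file", 1000)
```

The ratios are what matter: moving an access from HBM down to on-chip SRAM saves roughly 5× in energy under this model, which is why "memory closer to the processor" recurs throughout the discussion.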

The energy consumption of data centers that run AI has become an issue.

“There’s a further requirement to process more data, faster and with efficiency,” Jin said in his presentation at the event. “This is a key topic for the memory industry.”

He showed a slide highlighting the environmental impact of increasing data usage, which estimated the energy consumption of data centers worldwide at one trillion kWh per year, or four times the annual power consumption of South Korea.
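As a quick arithmetic check of the slide's figures (both inputs are from the slide; the derived number is just their ratio):

```python
# Sanity check of the figures on Jin's slide: one trillion kWh/year for
# data centers worldwide, stated as four times South Korea's annual
# power consumption, implies roughly 250 TWh/year for South Korea.
DATACENTER_KWH_PER_YEAR = 1e12  # "one trillion kWh per year", per the slide
RATIO_VS_SOUTH_KOREA = 4        # "four times", per the slide

# 1 TWh = 1e9 kWh
implied_south_korea_twh = DATACENTER_KWH_PER_YEAR / RATIO_VS_SOUTH_KOREA / 1e9
# implied_south_korea_twh = 250.0 TWh/year
```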

AI applications require a lot of energy-consuming data movement back and forth, according to Samavedam.

“The models are getting much more complex,” he said. “You have parameters that are on the order of billions and over a trillion these days. That’s a lot. That needs to be stored in the memory area and accessed very frequently. Data access, data bandwidth, data capacity. You’ll hear this throughout the next decade or so with AI training and inference taking off.”
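To see why parameter counts on that scale strain memory capacity, a back-of-the-envelope sketch (the 2-bytes-per-parameter assumption, as with 16-bit weights, is ours, not from the article):

```python
# Rough storage footprint of model weights alone at the parameter
# counts Samavedam mentions, assuming 16-bit (2-byte) weights.
# Actual formats, activations, and optimizer state add more on top.
BYTES_PER_PARAM = 2  # assumed FP16/BF16 weights

def weights_gib(params: int) -> float:
    """GiB of storage needed just for the model weights."""
    return params * BYTES_PER_PARAM / 2**30

one_billion = weights_gib(10**9)    # about 1.9 GiB
one_trillion = weights_gib(10**12)  # about 1863 GiB, i.e. ~1.8 TiB
```

A trillion-parameter model thus needs on the order of terabytes just to hold its weights, which is why capacity and bandwidth per HBM stack dominate the conversation.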

Jin looked forward to innovations like processing-in-memory (PIM).

“That will take some time because the ecosystem is not ready to adopt this new solution, but I bet this is going to be a very strong candidate for future AI or big data computation,” Jin said.

He mentioned 3D DRAM as another solution.

“3D DRAM is NAND-like stacked DRAM cells. Bit growth can be achieved by stacking more and more DRAM cells.”

More investment needed

Still, wafer bonding is necessary for stacking, and semiconductor tool makers will need to increase investment to make better equipment, Jin said.

“When you talk about really advanced packaging like [wafer] hybrid bonding, it requires extreme planarity before you bond,” Samavedam said. “It needs CMP [chemical mechanical planarization], cleaning. You have to do it at the wafer level in a clean room. The foundries are much better suited to do that. I expect Intel will be good, strong competition down the road for TSMC. I don’t see many OSATs [offshore assembly and test companies] picking this up.”

DRAM will transition to 3D DRAM eventually, according to Samavedam.

“There are different ways of doing 3D DRAM,” he said. “People are looking at deposited semiconductors like indium gallium zinc oxide. IGZO has a wide bandgap that’s very attractive for DRAM applications. Because it has a wide bandgap, it doesn’t leak as much. You don’t need to refresh the data as frequently.”

The alternative memories will take years to develop. HBM will, for the foreseeable future, be the main road to more bandwidth for high-performance compute, according to Samavedam.

By: DocMemory
Copyright © 2023 CST, Inc. All Rights Reserved
