Wednesday, March 18, 2026

Samsung's HBM4E memory offers 16 Gbps I/O



Samsung's HBM4E memory has been showcased at GTC 2026, delivering 16 Gbps I/O speeds, up to 4 TB/s of bandwidth per stack, and 16-Hi stacks with a per-stack memory density of 48 GB. The next-gen solution is designed for NVIDIA's Rubin Ultra platform, which takes the current-gen Rubin chip and essentially doubles it, with four GPU chiplets and 16 HBM memory sites.
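As a quick sanity check, the 4 TB/s per-stack figure follows from the pin speed and the interface width, assuming HBM4E retains the 2048-bit-per-stack interface defined for JEDEC HBM4:

```python
# Per-stack HBM4E bandwidth from pin speed and bus width.
# Assumption: HBM4E keeps HBM4's 2048-bit-per-stack interface (JEDEC HBM4).
pin_speed_gbps = 16      # Gbps per I/O pin
bus_width_bits = 2048    # bits per stack

bw_gb_per_s = pin_speed_gbps * bus_width_bits / 8   # GB/s per stack
bw_tb_per_s = bw_gb_per_s / 1000                    # decimal TB/s

print(f"{bw_tb_per_s:.3f} TB/s per stack")  # 4.096 TB/s, i.e. ~4 TB/s
```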

From those per-stack figures, we can already expect formidable memory capabilities for Rubin Ultra: up to 768 GB of HBM4E capacity if 16-Hi stacks are used across all 16 sites, and up to 64 TB/s of bandwidth if the speeds are set to 16 Gbps. Those are striking numbers compared to Rubin, which offers 288 GB of HBM4 memory and up to 22 TB/s of bandwidth.
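The platform-level numbers are simply the per-stack specs multiplied out; a quick arithmetic sketch, assuming all 16 HBM sites carry identical 16-Hi HBM4E stacks:

```python
# Aggregate Rubin Ultra memory figures from Samsung's per-stack HBM4E specs.
# Assumption: all 16 HBM sites are populated with identical 16-Hi stacks.
hbm_sites = 16
capacity_per_stack_gb = 48   # 16-Hi HBM4E stack
bw_per_stack_tbps = 4.0      # at 16 Gbps per pin

total_capacity_gb = hbm_sites * capacity_per_stack_gb
total_bw_tbps = hbm_sites * bw_per_stack_tbps

print(f"{total_capacity_gb} GB, {total_bw_tbps} TB/s")  # 768 GB, 64.0 TB/s
```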

The centerpiece of Samsung’s showcase at NVIDIA GTC 2026 is its sixth-generation HBM, HBM4, which is now in mass production and is designed for the NVIDIA Vera Rubin platform. Samsung’s HBM4 is expected to help accelerate the development of future AI applications, delivering consistent processing speeds of 11.7 gigabits per second (Gbps), which exceeds the industry standard of 8 Gbps and can be enhanced to 13 Gbps.

By leveraging the most advanced sixth-generation 10-nanometer (nm)-class DRAM process (1c), Samsung has achieved stable yields and industry-leading performance. The company’s next-generation HBM4E, delivering 16 Gbps per pin and 4.0 terabytes per second (TB/s) of bandwidth, will also be on display for the first time at GTC 2026.

Visitors will also be able to catch a glimpse of Samsung’s hybrid copper bonding (HCB) technology, a new method that will enable next-generation HBM to reach 16 or more layers while reducing thermal resistance by more than 20 percent compared to thermal compression bonding (TCB).

An Alliance Taking the AI Era to the Next Level

The strong collaboration between Samsung and NVIDIA will be highlighted in the booth’s separate ‘NVIDIA Gallery,’ specifically featuring a broad lineup of Samsung’s cutting-edge technologies, such as HBM4, SOCAMM2, and PM1763 SSD, that are designed for NVIDIA AI infrastructure.

Addressing the need for maximum efficiency and scalability in AI systems, Samsung’s SOCAMM2, based on low-power DRAM, is a server memory module that offers high bandwidth and flexible system integration for next-generation AI infrastructure. SOCAMM2 is currently in mass production, an industry first.

Designed for next-generation AI storage solutions, Samsung’s PM1763 SSD is based on the latest PCIe 6.0 interface, offering fast data transfers and high capacities. The PM1763’s industry-leading performance will be demonstrated on servers using the NVIDIA SCADA programming model.

As part of the new NVIDIA BlueField-4 STX reference architecture for accelerated storage infrastructure on NVIDIA’s Vera Rubin platform, Samsung’s PM1753 SSD will demonstrate how it enhances energy efficiency and system performance for inference workloads.

Efficient Memory for Local Intelligence

Samsung’s memory solutions also offer maximized efficiency for local AI workloads on personal devices. During GTC 2026, Samsung will showcase tailored and efficient solutions for personal AI supercomputers, including the Samsung PM9E3 and PM9E1 NAND for NVIDIA DGX Spark.

Additionally, Samsung will display DRAM solutions, LPDDR5X and LPDDR6, that are designed for seamless integration into premium smartphones, tablets, and wearable devices, offering faster data throughput and lower latency. LPDDR5X delivers speeds of up to 25 Gbps per pin while cutting power consumption by up to 15 percent, enabling ultra-responsive mobile experiences, high-resolution gaming, and AI-enhanced applications without sacrificing battery life.

Building on that foundation, LPDDR6 pushes bandwidth further to a scalable 30-35 Gbps per pin and introduces advanced power-management features such as adaptive voltage scaling and dynamic refresh control, which together provide the performance needed for next-generation edge-AI workloads.

By: DocMemory
Copyright © 2023 CST, Inc. All Rights Reserved
