Monday, February 16, 2026
LPDDR6X is an enhanced version of LPDDR6 memory and will further expand the capabilities of the DRAM standard. Although JEDEC has not yet finalized the LPDDR6X specification, more information is expected this year.
Coming back to the story, Samsung has sent LPDDR6X samples to Qualcomm, which is likely to use them in its AI accelerator chip, the "AI250". This will be the follow-up to this year's AI200, which also leverages the LPDDR memory standard. Both chips target AI inferencing workloads and are similar to Intel's Crescent Island GPUs, based on the Xe3P architecture, in that all of them rely on LPDDR rather than HBM.
While NVIDIA, AMD, Huawei, and other major data center chipmakers leverage HBM memory, that standard is costly, draws more power, and is caught in an ongoing DRAM shortage. HBM is far harder to produce than DDR because of its packaging, validation, and testing requirements, whereas LPDDR DRAM avoids that complexity and comes at a lower cost. HBM offers much higher bandwidth, but LPDDR's lower cost and power make it a sensible fit for cost-effective, capacity-focused AI inference. As such, Qualcomm is expected to equip its AI200 with up to 768 GB of LPDDR memory, and the AI250 is expected to carry LPDDR6X capacities beyond 1 TB.
For now, though, LPDDR6X memory is still a few years away; the finalized standard can realistically be expected around late 2027 or early 2028.
By: DocMemory