Tuesday, June 25, 2024
TSMC and contract chip design partner Global Unichip have won the bulk orders for base dies used with SK hynix's next-generation HBM4 memory chips.
In a new report from UDN, we're learning that after being the exclusive foundry for AI chips from technology giants like NVIDIA and AMD, TSMC is seizing another AI business opportunity. UDN reports that TSMC has teamed with its application-specific IC (ASIC) design services partner Global Unichip, which has successfully developed the key HBM peripheral components required for AI servers, with the pair jointly landing a "large order" for the base dies required for next-gen HBM4 memory.
The report says that TSMC and Global Unichip have "never commented on order developments," but sources cited by the site said that AI demand is strong: not just demand for HPC-related chips, but HBM demand is "growing rapidly" and is "becoming a new business opportunity in the market," with the three major memory chip companies -- SK hynix, Samsung, and Micron -- investing as much as they can into their respective HBM supply chains.
HBM3 and HBM3E memory chips are completely sold out into 2025, but with next-gen 3nm-based AI chips arriving in 2025, existing HBM3 and HBM3E memory "may be unable to exert its maximum computing power due to capacity and speed limitations". SK hynix, Samsung, and Micron are all increasing their investments into HBM, with next-generation HBM4 in development right now, mass production aimed at 2025, and mass shipments in 2026.
SK hynix, Samsung, and Micron are all developing next-gen HBM4 memory, while the semiconductor standards body -- the JEDEC Solid State Technology Association -- is busy working on new standards for HBM4. We heard about that recently, with JEDEC relaxing the stacking height limit of HBM4 packages to 775 microns.
The industry thinks the biggest change with HBM4 isn't just the increase in stack height to 16-layer DRAM stacks, but also the logic die required at the bottom of the HBM stack to boost bandwidth and transmission speeds. UDN reports that this base logic die -- the interface chip sitting between the DRAM stack and the processor -- is the "biggest change" in the new generation of HBM4, and one of the reasons JEDEC is relaxing the stack height limit.
SK hynix is working directly with TSMC on next-gen HBM4 and advanced packaging solutions, with Global Unichip recently winning SK hynix's commissioned design order for the key base interface die for next-gen HBM4. The design is expected to be finalized sometime in 2025, with TSMC using its 12nm or 5nm process node depending on whether high performance or lower power consumption is the priority.
SK hynix is reportedly willing to hand base interface die orders to Global Unichip and TSMC largely because TSMC dominates more than 90% of the CoWoS advanced packaging market for HPC chips.
NVIDIA's current-gen Hopper GPU architecture, with the H100 and beefed-up H200, is already dominating, while the new Blackwell GPU architecture's B100 and B200 AI GPUs sport faster HBM3E memory. But NVIDIA has since teased its next-gen Rubin R100 AI GPU, which will feature HBM4 memory and drop in Q4 2025.
An industry insider told Business Korea: "SK Hynix is speeding up the implementation of its next-generation HBM roadmap, including HBM4 and HBM4E. The company has moved up its plans to mass-produce HBM4 in 2025 and HBM4E in 2026 by one year each".
Next-generation HBM4 offers a huge 40% increase in bandwidth and power consumption reduced by a rather incredible 70% compared to HBM3E, the fastest memory in the world today. HBM4 density will also be 1.3x higher; combined, these advancements in performance and efficiency are a key driver in NVIDIA's continued AI GPU dominance.
By: DocMemory Copyright © 2023 CST, Inc. All Rights Reserved