
Major CSPs Boosting Demand for AI Chips, HBM


Friday, June 30, 2023

The explosive growth of generative AI applications such as chatbots has spurred significant expansion in AI server development in 2023, according to TrendForce. Major cloud service providers (CSPs), including Microsoft, Google, and AWS, as well as Chinese enterprises such as Baidu and ByteDance, have invested heavily in high-end AI servers to continuously train and optimize their AI models. This reliance on high-end AI servers necessitates the use of high-end AI chips, which in turn is expected not only to drive up demand for high-bandwidth memory (HBM) during 2023 and 2024, but also to boost advanced packaging capacity by around 30–40% in 2024.

TrendForce highlights that to augment the computational efficiency of AI servers and enhance memory transmission bandwidth, leading AI chip makers such as Nvidia, AMD, and Intel have opted to incorporate HBM. Presently, Nvidia’s A100 and H100 chips each offer up to 80GB of HBM2e and HBM3, respectively. In its latest integrated CPU and GPU, the Grace Hopper Superchip, Nvidia expanded a single chip’s HBM capacity by 20%, reaching 96GB. AMD’s MI300 series also uses HBM3: the MI300A’s capacity remains at 128GB, like its predecessor, while the more advanced MI300X has been ramped up to 192GB, a 50% increase. Google is expected to broaden its partnership with Broadcom in late 2023 to produce its ASIC AI accelerator chip, the TPU, which will also incorporate HBM memory, in order to extend its AI infrastructure.
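The percentage increases quoted above can be sanity-checked from the per-chip capacities; a minimal sketch, using only the figures cited in this article:

```python
# Per-chip HBM capacities cited in the article (GB).
a100_h100_gb = 80      # Nvidia A100 (HBM2e) / H100 (HBM3)
grace_hopper_gb = 96   # Nvidia Grace Hopper Superchip
mi300a_gb = 128        # AMD MI300A (same as its predecessor)
mi300x_gb = 192        # AMD MI300X

gh_gain = grace_hopper_gb / a100_h100_gb - 1   # 0.20, i.e. the quoted 20%
mi300x_gain = mi300x_gb / mi300a_gb - 1        # 0.50, i.e. the quoted 50%
print(f"Grace Hopper vs. 80GB chips: +{gh_gain:.0%}")
print(f"MI300X vs. MI300A: +{mi300x_gain:.0%}")
```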

TrendForce predicts that AI accelerator chips that primarily utilize HBM (including Nvidia’s H100 and A100, AMD’s MI200 and MI300, and Google’s TPU) will have a total HBM capacity of 290 million GB in 2023—a nearly 60% growth rate. This momentum is projected to sustain at a rate of 30% or more into 2024.
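As a rough arithmetic check, the 290 million GB figure and the quoted growth rates imply the following year-over-year totals (a sketch; only the 2023 total and the growth percentages come from the article):

```python
total_2023_gb = 290e6    # TrendForce: total HBM capacity of AI accelerator chips in 2023
growth_into_2023 = 0.60  # "nearly 60%" growth rate for 2023
growth_into_2024 = 0.30  # "30% or more" projected for 2024

implied_2022_gb = total_2023_gb / (1 + growth_into_2023)  # ~181 million GB
floor_2024_gb = total_2023_gb * (1 + growth_into_2024)    # ~377 million GB, at minimum
print(f"Implied 2022 total: {implied_2022_gb / 1e6:.0f} million GB")
print(f"2024 floor at 30% growth: {floor_2024_gb / 1e6:.0f} million GB")
```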

Additionally, there is escalating demand for advanced packaging technology in the AI and HPC domains, with TSMC’s Chip on Wafer on Substrate (CoWoS) being the predominant choice for AI server chips. CoWoS packaging involves two stages: CoW and oS. The CoW phase merges various logic ICs (CPUs, GPUs, and ASICs) with HBM memory; the oS stage uses solder bumps to connect the CoW assembly and attach it onto a substrate. The result is then integrated into the PCBA, forming the principal computational unit of the server’s motherboard, which is combined with other components such as networking, storage, power supply units (PSUs), and other I/O to create a complete AI server system.

TrendForce notes that due to the robust demand for high-end AI chips and HBM, TSMC’s monthly CoWoS capacity is projected to hit 12K wafers by the end of 2023. Demand for CoWoS has surged by almost 50% since the onset of 2023, driven by the needs of Nvidia’s A100 and H100 and the associated AI servers. CoWoS capacity is likely to be strained in 2H23 as demand for high-end AI chips from AMD, Google, and other companies intensifies. This strong demand is anticipated to continue into 2024, with a projected growth of 30–40% in advanced packaging capacity, given the readiness of related equipment.
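Taken at face value, those figures give a rough sense of the capacity trajectory. A sketch under stated assumptions: the 12K monthly figure and the growth percentages are the only inputs from the article, and treating the 50% demand surge as if capacity tracked it is a simplifying assumption, not a TrendForce claim:

```python
cowos_end_2023 = 12_000              # projected monthly CoWoS capacity, end of 2023
demand_surge_2023 = 0.50             # demand up ~50% since the start of 2023
capacity_growth_2024 = (0.30, 0.40)  # projected 30-40% growth in 2024

# If capacity had tracked the demand surge, the start-of-2023 level would be:
implied_start_2023 = cowos_end_2023 / (1 + demand_surge_2023)  # 8,000/month
lo_2024, hi_2024 = (cowos_end_2023 * (1 + g) for g in capacity_growth_2024)
print(f"Implied start-of-2023 level: {implied_start_2023:,.0f}/month")
print(f"End-of-2024 range: {lo_2024:,.0f} to {hi_2024:,.0f}/month")
```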

TrendForce underscores that given the urgency of AI demand, monitoring the supporting technologies is critical, whether for producing HBM or CoWoS. This includes considering when technologies such as through-silicon via (TSV) packaging, interposer circuit boards, and related equipment (e.g., wet-process equipment) can be brought online, along with factoring in lead times. Should strong demand for AI persist, it is plausible that Nvidia may explore alternative suppliers of CoWoS-like advanced packaging, such as Amkor or Samsung, to address potential supply shortages.

By: DocMemory
Copyright © 2023 CST, Inc. All Rights Reserved
