Thursday, December 19, 2024
The data center infrastructure market has seen particularly rapid growth thanks to the need to support the compute demands of AI and machine learning (ML). According to Chris Koopmans, chief operations officer of Marvell, hyperscalers have spent over $100 billion of capex in 2024. As a result, in its fiscal Q3 2025 earnings (the three months ended Nov. 2), Marvell crossed a landmark $1 billion in revenue from interconnects and custom silicon for the data center market, representing 98% year-on-year growth.
Speaking at the Marvell industry analyst day in Santa Clara, Calif., last week, Koopmans said, “Marvell is now a data center company, representing over 70 percent of our revenue.”
He added that data centers were now Marvell’s biggest opportunity, with custom silicon delivering revenue already. Adding to this, Loi Nguyen, who heads the company’s cloud optics group, said, “Hyperscale companies are needing to differentiate their service, and as a result, more and more hyperscale providers will build their own custom AI infrastructure. Marvell is the next hot AI chip [firm] in town.”
While this is a big opportunity, Marvell CEO Matt Murphy added, "We are still very early in the AI innovation cycle."
A big part of Marvell’s message during the analyst day presentations was the need for custom compute, storage and connectivity for AI data centers. In this respect, the company announced a new custom high-bandwidth memory (HBM) compute architecture in conjunction with Micron, Samsung Electronics and SK Hynix to achieve greater compute and memory density.
Will Chu, senior VP and general manager of the custom compute and storage group at Marvell, said, "Today HBM is a bottleneck in many [AI data center] applications." Hence, leading cloud data center operators are scaling with custom infrastructure and enhancing their XPUs by tailoring HBM for specific performance, power and total cost of ownership (TCO) targets, an approach that could change how AI accelerators are designed and delivered.
Putting this into context, HBM is a key component integrated within the XPU using advanced 2.5D packaging technology and high-speed, industry-standard interfaces. However, the scaling of XPUs is limited by the current standard interface-based architecture. As a result, Marvell said its custom HBM compute architecture introduces tailored interfaces to optimize performance, power, die size and cost for specific XPU designs.
This approach considers the compute silicon, HBM stacks and packaging. By customizing the HBM memory subsystem, including the stack itself, Marvell said it is advancing customization in cloud data center infrastructure, collaborating with HBM makers to implement this new architecture and meet cloud data center operators’ needs.
The Marvell custom HBM compute architecture is said to enhance XPUs by serializing and speeding up the I/O interfaces between its internal AI compute accelerator silicon dies and the HBM base dies. This results in greater performance and up to 70% lower interface power compared to standard HBM interfaces. The optimized interfaces also reduce the required silicon real estate in each die, allowing HBM support logic to be integrated onto the base die.
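To make the serialization idea concrete, the sketch below models the trade-off in the simplest possible terms. The pin counts, per-signal data rates and per-bit energies are illustrative assumptions invented for this example, not Marvell or JEDEC figures; in particular, the 0.3-pJ/bit link energy is chosen to mirror the up-to-70% power claim rather than taken from any datasheet.

```python
# Illustrative model of a parallel vs. a serialized die-to-die HBM interface.
# Every number here is an assumption made for the example, not a vendor spec.

def interface(signals, gbps_per_signal, pj_per_bit):
    """Return (aggregate bandwidth in Gb/s, interface power in watts)."""
    bandwidth_gbps = signals * gbps_per_signal
    # power [W] = bits per second * energy per bit
    # (1 Gb/s at 1 pJ/bit dissipates 1 mW)
    power_w = bandwidth_gbps * 1e9 * pj_per_bit * 1e-12
    return bandwidth_gbps, power_w

# Wide, standard-style parallel interface: many relatively slow signals.
bw_p, p_p = interface(signals=1024, gbps_per_signal=8, pj_per_bit=1.0)

# Serialized custom interface: 4x fewer signals running 4x faster, at an
# assumed 0.3 pJ/bit so the power drop matches the quoted ~70% figure.
bw_s, p_s = interface(signals=256, gbps_per_signal=32, pj_per_bit=0.3)

print(f"parallel:   1024 signals, {bw_p / 8:.0f} GB/s, {p_p:.2f} W")
print(f"serialized:  256 signals, {bw_s / 8:.0f} GB/s, {p_s:.2f} W")
print(f"interface power reduction: {1 - p_s / p_p:.0%}")  # -> 70%
```

The point of the exercise is that the same aggregate bandwidth arrives over a quarter as many die-edge signals, which is where the silicon real-estate savings described next come from.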
These real-estate savings, up to 25% according to Marvell, can be used to enhance compute capabilities, add new features and support up to 33% more HBM stacks, increasing memory capacity per XPU. These improvements boost XPU performance and power efficiency while lowering TCO for cloud operators.
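As a rough plausibility check on those headline figures, the back-of-envelope below shows how a 25% smaller per-stack interface footprint could make room for roughly a third more stacks. The I/O area budget and per-stack capacity are, again, invented for illustration.

```python
# Back-of-envelope: interface-area savings translated into extra HBM stacks.
# All figures below are illustrative assumptions, not Marvell data.

io_area_budget = 90.0     # arbitrary units of die/beachfront area for HBM I/O
area_std = 15.0           # assumed I/O footprint per stack, standard interface
area_custom = area_std * 0.75  # 25% smaller footprint with a custom interface

stacks_std = int(io_area_budget // area_std)        # -> 6 stacks
stacks_custom = int(io_area_budget // area_custom)  # -> 8 stacks

gb_per_stack = 24  # assumed capacity of one HBM3E-class stack
print(f"standard : {stacks_std} stacks, {stacks_std * gb_per_stack} GB")
print(f"custom   : {stacks_custom} stacks, {stacks_custom * gb_per_stack} GB")
print(f"stack-count gain: {stacks_custom / stacks_std - 1:.0%}")  # -> 33%
```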
In a panel discussion at the Marvell analyst day, Indong Kim, a VP for Samsung Semiconductor, said, “There’s a significant ask from customers in terms of optimizing PPA [power, performance and area]. So, customization is inevitable.”
In that same panel, Sunny Kang, VP of DRAM technology at SK Hynix America, added that moving to custom HBM was inevitable in order to help customers produce an optimized solution for their workloads and infrastructure.
In Marvell’s announcement of the custom HBM architecture, analyst Patrick Moorhead, CEO and founder of Moor Insights & Strategy, said, “Custom XPUs deliver superior performance and performance per watt compared to merchant, general-purpose solutions for specific, cloud-unique workloads. Marvell, already a player in custom compute silicon, is already delivering tailored solutions to leading cloud companies. Their latest custom compute HBM architecture platform provides an additional lever to enhance the TCO for custom silicon. Through strategic collaboration with leading memory makers, Marvell is poised to empower cloud operators in scaling their XPUs and accelerated infrastructure, thereby paving the way for them to enable the future of AI.”
During the analyst day, we spoke to Sandeep Bharathi, EVP and chief development officer of Marvell, along with Will Chu, about what custom acceleration means at Marvell and how its customers are looking to differentiate through customization of NICs, CPUs, XPUs and now custom HBM.
In another interview at the Marvell analyst day, we spoke to Mark Kuemerle, VP of technology and CTO of custom solutions, along with Wolfgang Sauter, a distinguished engineer at Marvell, about the broader issue of silicon scaling and package scaling, and how the line between the two is blurring. Sauter said, "[You] can't just talk about chip design and IP now; packaging is now about 50% of the conversation."
As data rates increase, anything that goes through the package, and anything that moves the data, becomes more of a challenge.
A good summary of the sentiment at the Marvell industry analyst day came from Raghib Hussain, president of products and technologies at Marvell. "The future is custom," he said, explaining that there is a race to optimize infrastructure in order to scale, differentiate and open up unique use cases for data center infrastructure customers. On custom HBM, Kuemerle added, "By building custom HBM, we can help shrink the chip and get more super performance."
By: DocMemory