Thursday, May 16, 2024
Google parent Alphabet (GOOGL.O) on Tuesday unveiled Trillium, the newest chip in its artificial intelligence data center family, which it says is nearly five times as fast as its predecessor.
"Industry demand for (machine learning) computer has grown by a factor of 1 million in the last six years, roughly increasing 10-fold every year," Alphabet CEO Sundar Pichai said in a briefing call with reporters. "I think Google was built for this moment, we've been pioneering (AI chips) for more than a decade."
Alphabet's effort to build custom chips for AI data centers represents one of the few viable alternatives to Nvidia's (NVDA.O) top-of-the-line processors that dominate the market. Together with the software that is closely tied to Google's tensor processing units (TPUs), the chips have allowed the company to take a significant share of the market.
Nvidia commands roughly 80% of the AI data center chip market, and the vast majority of the remaining 20% consists of various versions of Google's TPUs. Google does not sell the chips themselves; instead, it rents access to them through its cloud computing platform.
According to Google, the sixth-generation Trillium chip, designed to power the technology that generates text and other media from large models, will achieve 4.7 times the computing performance of the TPU v5e. The Trillium processor is also 67% more energy efficient than the v5e.
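As a rough back-of-the-envelope illustration (assuming "67% more energy efficient" is read as roughly 1.67 times the performance per watt, an interpretation the announcement does not spell out), the two published figures can be combined as follows:

```python
# Back-of-envelope arithmetic on Google's published Trillium-vs-v5e figures.
# Assumption: "67% more energy efficient" is taken to mean ~1.67x performance per watt.
relative_performance = 4.7      # Trillium compute vs. TPU v5e (per Google)
relative_perf_per_watt = 1.67   # assumed reading of "67% more energy efficient"

# Implied relative per-chip power draw under that assumption.
implied_relative_power = relative_performance / relative_perf_per_watt
print(f"Implied power vs. v5e: ~{implied_relative_power:.1f}x")  # ~2.8x
```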
The new chip will be available to its cloud customers in "late 2024," the company said.
Google's engineers achieved additional performance gains by increasing the amount of high-bandwidth memory capacity and overall bandwidth. AI models require enormous amounts of advanced memory, which has been a bottleneck to further boosting performance.
The company designed the chips to be deployed in pods of 256 chips that can be scaled to hundreds of pods.
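To put those pod figures in perspective, here is a small sketch of the scale they imply; the 256 chips per pod comes from the announcement, while the specific pod counts below are example values chosen for illustration:

```python
# Illustrative scaling arithmetic for Trillium pod deployments.
chips_per_pod = 256            # stated by Google
example_pod_counts = [100, 256, 500]   # "hundreds of pods" -- example values only

for n_pods in example_pod_counts:
    total_chips = chips_per_pod * n_pods
    print(f"{n_pods} pods -> {total_chips:,} chips")
# e.g. 256 pods -> 65,536 chips
```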