Friday, July 19, 2019
When you need something faster than mainstream DRAM but can’t justify high-bandwidth memory (HBM), GDDR is just right.
An incumbent memory historically used primarily in graphics cards for high-end PCs, particularly those aimed at gamers, GDDR has hit a “Goldilocks zone” of sorts over the last couple of years, with uptake in emerging use cases such as artificial intelligence (AI), autonomous vehicles, and 5G networking, all of which need speed and high performance.
Shane Rau, research vice president for computing semiconductors at IDC, said GDDR has always run ahead of the mainstream memory standard as a higher-speed derivative, but now that we’re at GDDR6, there are signs the technology is proliferating more than previous iterations did. HBM still remains too expensive, “whereas GDDR6, being a derivative of mainstream memory in speed and cost ... has started to fill an expanding span of solutions.”
Another change in recent years is that although graphics is still the primary driver for GDDR improvements, the PC is becoming less of a driver of innovation as it’s adopted for networking, AI, and other emerging use cases. “I see this as somewhat of a part of a democratization of memory specialization, if you will,” said Rau. “I see GDDR6 as growing from its niche in graphics to more mainstream for those applications that need a little extra burst of speed at a reasonable price than what the commodity PC-oriented DRAM could provide.”
AI, he said, is an excellent example: GDDR6 is paired with a GPU to provide memory fast enough for those workloads, where commodity DRAM either isn’t fast enough or consumes too much power, even though GDDR6 costs a little more. However, said Rau, getting the benefits of GDDR6 depends on the overall solution. “We’re in a market environment now where you have so many chip types, so many data processors that can use memory: CPU, GPU, FPGA, and microcontrollers. The market is searching for the right mix of these processors and accelerators that use memory.”
Having the right interfaces and interconnects is key to getting the most from a high-performance memory such as GDDR6. Frank Ferro, senior director of product marketing for IP cores at Rambus, recalls the company being founded during a cycle when memory bandwidth was the primary limitation; in other cycles, compute has been the bottleneck. “We’re back to where the memories have become the bottleneck again.”
Back in 2015, HBM was a great solution to the pressing memory bandwidth challenges, said Ferro, because it provides the highest bandwidth at the lowest power, but it’s also expensive and outside the mainstream in terms of manufacturing. Since then, GDDR has emerged as a viable alternative for many applications. While HBM remains the first choice for high-performance computing as well as some AI training applications, GDDR is proving to be a good option for AI inference. “You don’t need all of the storage and quite as much bandwidth as the training.”
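To put rough numbers on that tradeoff, the sketch below compares per-device bandwidth for an HBM2 stack and a GDDR6 chip. The interface widths and per-pin data rates are commonly cited figures used here as illustrative assumptions, not specifications quoted in the article.

```python
# Back-of-envelope comparison of per-device memory bandwidth.
# Interface widths and per-pin data rates are typical public figures,
# used here only for illustration.

def device_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one memory device or stack, in gigabytes per second."""
    return bus_width_bits * pin_rate_gbps / 8  # bits -> bytes

hbm2_stack = device_bandwidth_gb_s(bus_width_bits=1024, pin_rate_gbps=2.0)
gddr6_chip = device_bandwidth_gb_s(bus_width_bits=32, pin_rate_gbps=16.0)

print(f"HBM2 stack: ~{hbm2_stack:.0f} GB/s")      # ~256 GB/s
print(f"GDDR6 chip: ~{gddr6_chip:.0f} GB/s")      # ~64 GB/s

# A GDDR6 board reaches high aggregate bandwidth by placing several chips
# around the processor, e.g. 8 chips on a 256-bit bus:
print(f"8 x GDDR6 : ~{8 * gddr6_chip:.0f} GB/s")  # ~512 GB/s
```

The point of the sketch is the shape of the tradeoff Ferro describes: HBM concentrates bandwidth in a few costly stacked packages, while GDDR6 spreads it across cheaper discrete chips, which is often enough for inference workloads.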
Similarly, GDDR is a good candidate for autonomous driving because of the bandwidth and speed needed to react to objects detected by the vehicle’s sensors. Ferro said GDDR6 became an option because memory makers are getting the technology qualified for automotive as well as networking applications (base stations have evolved and now do a lot of in-line computing rather than just moving data along).
For Rambus customers, it’s about tradeoffs among many parameters (including cost, power, and bandwidth) as they look at the boxes they’re selling, said Ferro. “They’re trying to look at all the different memory solutions that best fit a box. In some cases, it may get three out of the four, [or] four out of the five parameters they’re looking for. There are always tradeoffs to be made.”
Samsung, which does offer the more expensive HBM option for customers with particularly intensive workloads, is also seeing more extensive use of GDDR technology, said Tien Shiah, senior manager of memory marketing at Samsung Semiconductor, Inc. “There are applications that just require the best performance, and from that standpoint they are certainly looking at HBM to address that.” While there’s no definitive line between the two, he said, some of the higher-end applications like AI training, where customers are looking for the fastest training times and the most accurate trained models, need the form of memory that delivers the highest bandwidth. “In those cases, the differences in memory cost is less of a factor because the overall system is very high value.”
But despite all the new use cases across AI, automotive, and networking, it’s the graphics and gaming applications that are still driving innovation around GDDR. “With the new generations of GDDR, we’re looking for higher bandwidth and greater power efficiency,” said Shiah. “The latest graphics cards always looked for better performance. They’re trying to drive additional teraflops and higher frame rates, and higher resolutions. All that requires faster memory. And with graphics cards, the power consumption is always a factor to look at.”
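A rough frame-buffer estimate illustrates why higher resolutions and frame rates pull memory bandwidth up. The parameters below (bytes per pixel, overdraw factor) are illustrative assumptions, not figures from the article, and color-buffer traffic alone understates what a real GPU moves.

```python
# Rough estimate of color-buffer traffic for a game render target.
# The overdraw factor varies widely per title; 4.0 is an assumed
# multiplier used purely for illustration.

def framebuffer_traffic_gb_s(width: int, height: int, bytes_per_pixel: int,
                             fps: int, overdraw: float) -> float:
    """Approximate GB/s of color-buffer writes alone."""
    bytes_per_frame = width * height * bytes_per_pixel * overdraw
    return bytes_per_frame * fps / 1e9

for name, w, h in [("1080p", 1920, 1080), ("1440p", 2560, 1440), ("4K", 3840, 2160)]:
    rate = framebuffer_traffic_gb_s(w, h, bytes_per_pixel=4, fps=144, overdraw=4.0)
    print(f"{name} @ 144 fps: ~{rate:.1f} GB/s")  # ~4.8, ~8.5, ~19.1 GB/s
```

Texture, depth, and geometry traffic multiply these figures several times over in practice, which is why flagship cards pair wide memory buses with the fastest per-pin rates each GDDR generation can offer.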
These new use cases aren’t putting pressure on supply either, said Shiah. Traditionally, graphics cards and gaming have been the main market for GDDR memory, while AI inference and some of the newer automotive display and autonomous driving uses are still only emerging. They may be large segments going forward, he said, but currently graphics and gaming drive most of the volume.
One GDDR market that had spiked was cryptocurrency mining, which briefly competed with gamers for supply, noted IDC’s Rau, but that segment nose-dived last year. What could transform the market in the longer term are the emerging use cases, since historically graphics card makers such as Nvidia and AMD pretty much created the GDDR market. “New use cases could change how it’s sold.”
From a technology road map perspective, the industry is transitioning from GDDR5 to GDDR6 at large scale, said Shiah, with GDDR6 essentially doubling the per-pin data rate to 16 gigabits per second compared with its predecessor. “We doubled the bandwidth per package. And we’ve also introduced a 16-gigabit density device, so we’ve doubled the density and doubled the speed, even with GDDR5 being on the market for a relatively short time,” he said. “We believe there’s enough headroom for a while with GDDR6.”
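The “doubled the bandwidth per package” claim is easy to check with a quick calculation. The 32-bit device interface assumed below is the common configuration for discrete GDDR chips; the per-pin rates follow from the article’s 16 Gb/s figure for GDDR6 and the implied 8 Gb/s for GDDR5.

```python
# Per-package comparison of GDDR5 vs GDDR6, assuming the usual 32-bit
# device interface (an assumption, not stated in the article).

BUS_WIDTH_BITS = 32

for name, pin_rate_gbps, density_gbit in [("GDDR5", 8, 8), ("GDDR6", 16, 16)]:
    bandwidth_gb_s = BUS_WIDTH_BITS * pin_rate_gbps / 8   # bits -> bytes
    capacity_gbyte = density_gbit / 8
    print(f"{name}: {bandwidth_gb_s:.0f} GB/s and {capacity_gbyte:.0f} GB per package")
# GDDR5: 32 GB/s and 1 GB per package
# GDDR6: 64 GB/s and 2 GB per package
```

Doubling the per-pin rate at the same bus width doubles per-package bandwidth, and moving from an 8-gigabit to a 16-gigabit die doubles capacity, matching Shiah’s description of the generational step.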
By: DocMemory Copyright © 2023 CST, Inc. All Rights Reserved