Thursday, May 21, 2015
AMD recently teased that its upcoming HBM (High Bandwidth Memory) technology will be employed on its next-generation Radeon graphics cards. HBM is a stacked DRAM approach that promises a major boost in bandwidth (performance), a reduction in circuit board real estate consumption, and significantly lower power consumption. In the world of computing technologies and consumer electronics, a gain in any one of these areas so often comes at the expense of the others. With AMD claiming that HBM delivers all three intrinsically, however, the company could be lining up the kind of game-changing technology it has been thirsting after for a long time now.
Stacked DRAM technologies aren’t new, but with the information AMD made available today, we learned that HBM is an entirely new approach to implementing graphics frame buffer memory, and at first glance it looks very promising.
In short, HBM allows the DRAM die to sit much closer to the GPU core without taking a full silicon-integration approach, which isn’t ideal for logic-optimized designs like modern graphics processors (GPUs) and CPUs. Instead, AMD, in conjunction with industry partners like ASE, Amkor and UMC, developed something called an “interposer.” This is essentially another connection layer that ties all of the HBM memories, and their associated interface logic, to the GPU, CPU or SoC (System on Chip) processing engine.
AMD also worked with Hynix and other industry partners to develop the HBM memory standard, which is now an adopted JEDEC specification. And the HBM memory itself is where all the magic happens: it is a stacked DRAM design that places up to four DRAM die on top of one another, with a fifth base die underneath that houses the memory interface logic.
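For a sense of how the capacity adds up, here is a minimal back-of-the-envelope sketch in Python. The 2Gb-per-die density is our assumption (it is what the 1GB-per-stack figure below implies, not a number the article states), and the four-stack total matches the 4GB frame buffer mentioned later.

# Capacity of one HBM stack: four DRAM die on one base logic die.
# Assumption: 2Gb per DRAM die, implied by the 1GB-per-stack figure.
DIES_PER_STACK = 4
GBITS_PER_DIE = 2          # assumed density of each DRAM die

stack_capacity_gb = DIES_PER_STACK * GBITS_PER_DIE / 8   # 1.0 GB per stack
frame_buffer_gb = 4 * stack_capacity_gb                  # four stacks -> 4.0 GB

print(f"{stack_capacity_gb:.0f}GB per stack, {frame_buffer_gb:.0f}GB across four stacks")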
Each DRAM die is interconnected to the one below it, and ultimately to the base logic die, with through-silicon vias (TSVs) and micro-bumps (µBumps). This currently allows for up to 1GB of memory per HBM stack. The game-changing part comes by way of the HBM interface width, which is 1024 bits wide, versus current-generation GDDR5, which is only 32 bits wide. As a result, you can push far more data over the HBM memory interface while scaling the clock speed back dramatically, and still achieve major gains in available bandwidth. In round numbers, GDDR5 memory offers up to 28GB/s per chip, while an HBM stack will offer over 100GB/s. And because the clock can run so much lower (500MHz for HBM versus 1.75GHz for GDDR5), and HBM operates at a lower 1.3 volts (versus 1.5V for GDDR5), power consumption can be reduced by up to 50 percent. This all equates to roughly a 3X performance-per-watt gain over GDDR5.
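The bandwidth arithmetic is easy to verify. The sketch below assumes GDDR5 moves four bits per pin per clock (quad data rate, so a 1.75GHz clock yields 7Gbps per pin) while HBM moves two (double data rate, so 500MHz yields 1Gbps per pin); those per-pin rates are assumptions consistent with the clocks and totals quoted above.

# Back-of-the-envelope peak bandwidth from the figures quoted above.
# Assumption: GDDR5 moves 4 bits per pin per clock (quad data rate),
# HBM moves 2 (double data rate).

def peak_bandwidth_gbs(bus_width_bits, clock_mhz, bits_per_pin_per_clock):
    # Peak GB/s = bus width x per-pin data rate (Gb/s) / 8 bits per byte
    per_pin_gbps = clock_mhz / 1000 * bits_per_pin_per_clock
    return bus_width_bits * per_pin_gbps / 8

gddr5_chip = peak_bandwidth_gbs(32, 1750, 4)    # 28.0 GB/s per chip
hbm_stack = peak_bandwidth_gbs(1024, 500, 2)    # 128.0 GB/s per stack

print(f"GDDR5: {gddr5_chip:.0f} GB/s per chip, HBM: {hbm_stack:.0f} GB/s per stack")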
Finally, at the risk of sounding a bit like an infomercial: but wait, there’s more. Since HBM is a stacked memory technology, 4GB of frame buffer memory can sit on a single GPU package substrate rather than being populated down on the PCB around the GPU, so there is a massive board real estate reduction with HBM, which always helps cost and design complexity. 1GB of GDDR5 memory (four 256MB chips) requires roughly 672mm² of PCB to populate. Because HBM is vertically stacked, and the die are actually smaller than the average GDDR5 chip, that same 1GB of frame buffer requires only about 35mm² of board area.
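As a sanity check on those footprints, here is a quick sketch. The individual package dimensions used (28mm x 24mm for the four-chip GDDR5 layout, 5mm x 7mm for a single HBM stack) are assumptions chosen to reproduce the quoted totals; only the 672mm² and 35mm² figures come from the article.

# Board footprint for 1GB of memory. The individual dimensions are
# assumptions that reproduce the totals quoted above.
gddr5_area_mm2 = 28 * 24    # four GDDR5 chips plus routing: ~672 mm^2
hbm_area_mm2 = 5 * 7        # one vertically stacked HBM package: ~35 mm^2

ratio = gddr5_area_mm2 / hbm_area_mm2
print(f"GDDR5: {gddr5_area_mm2} mm^2, HBM: {hbm_area_mm2} mm^2 "
      f"(~{ratio:.0f}x less board area)")   # ~19x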
All told, AMD’s new HBM technology looks to be a major win for the company as it begins to employ it in future graphics card designs coming to market as early as this summer, as well as in future low-power APU designs that will come to notebooks and other devices further down the road.
This one major advancement could quickly put the company right back in the driver’s seat when it comes to leading-edge GPU solutions versus its primary competitor, NVIDIA, and could also play very favorably for it versus Intel in other markets.
By: DocMemory