AMD Dives Deep On High Bandwidth Memory - What Will HBM Bring AMD?
by Ryan Smith on May 19, 2015 8:40 AM EST

History: Where GDDR5 Reaches Its Limits
To really understand HBM we’d have to go all the way back to the first computer memory interfaces, but in the interest of expediency and sanity, we’ll condense that lesson down to the following. The history of computer and memory interfaces is a consistent cycle of moving between wide parallel interfaces and fast serial interfaces. Serial ports and parallel ports, USB 2.0 and USB 3.1 (Type-C), SDRAM and RDRAM: the industry develops faster interfaces, then wider interfaces, switching back and forth between the two approaches as conditions call for it.
So far in the race for PC memory, the pendulum has swung far in the direction of serial interfaces. Through 4 generations of GDDR, memory designers have continued to ramp up clockspeeds in order to increase available memory bandwidth, culminating in GDDR5 and its blistering 7Gbps+ per-pin data rate. GDDR5 in turn has been with us on the high end for almost 7 years now, longer than any previous memory technology, and in the process has gone farther and faster than initially planned.
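To put that per-pin figure in perspective, total bandwidth is simply the per-pin data rate multiplied by the bus width, divided by eight to convert bits to bytes. A minimal sketch of that arithmetic (the 384-bit bus width is an illustrative assumption; only the 7Gbps/pin rate comes from the text above):

```python
def gddr5_bandwidth_gbs(bus_width_bits, per_pin_gbps):
    """Total bandwidth in GB/s: bus width (bits) x per-pin rate (Gbps), / 8 bits per byte."""
    return bus_width_bits * per_pin_gbps / 8

# A hypothetical 384-bit bus at the 7Gbps/pin rate cited above:
print(gddr5_bandwidth_gbs(384, 7.0))  # 336.0 GB/s
```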
But in the cycle of interfaces, the pendulum has finally reached its apex for serial interfaces when it comes to GDDR5. Back in 2011 at an AMD video card launch I asked then-graphics CTO Eric Demers about what happens after GDDR5, and while he expected GDDR5 to continue on for some time, it was also clear that GDDR5 was approaching its limits. High-speed buses bring with them a number of engineering challenges, and while there is still some headroom left to do even better, the question arises of whether it’s worth it.
AMD 2011 Technical Forum and Exhibition
The short answer in the minds of the GPU community is no. GDDR5-like memories could be pushed farther, both with existing GDDR5 and with theoretical differential I/O based memories (think USB/PCIe buses, but for memory); however, doing so would come at the cost of greatly increased power consumption. In fact even existing GDDR5 implementations already draw quite a bit of power; thanks to the complicated clocking mechanisms of GDDR5, a lot of memory power is spent merely on distributing and maintaining GDDR5’s high clockspeeds. Any future GDDR5-like technology would only ratchet up the problem, along with introducing new complexities such as the need to add more logic to the memory chips themselves, a somewhat painful combination as logic and dense memory are difficult to fab together.
The current GDDR5 power consumption situation is such that, by AMD’s estimate, 15-20% of the Radeon R9 290X’s (250W TDP) power consumption goes to memory. This is even after the company went with a wider, slower 512-bit GDDR5 memory bus clocked at 5GHz so as to better contain power consumption. Using an even faster, higher-power memory standard would only serve to exacerbate that problem.
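Running AMD’s own numbers makes the tradeoff concrete. A quick sketch, with all inputs (512-bit bus, 5GHz effective data rate, 250W TDP, 15-20% memory share) taken from the paragraph above, and the bandwidth computed with the usual per-pin-rate times bus-width arithmetic:

```python
tdp_w = 250                                  # R9 290X board power (TDP)
mem_share_low, mem_share_high = 0.15, 0.20   # AMD's estimated memory share of TDP

bus_width_bits = 512
per_pin_gbps = 5.0                           # "5GHz" effective GDDR5 data rate

bandwidth_gbs = bus_width_bits * per_pin_gbps / 8   # 320.0 GB/s
mem_power_low_w = tdp_w * mem_share_low             # 37.5 W
mem_power_high_w = tdp_w * mem_share_high           # 50.0 W

print(f"{bandwidth_gbs:.0f} GB/s; memory draws ~{mem_power_low_w:.1f}-{mem_power_high_w:.1f} W")
```

In other words, the wide-and-slow bus still delivers 320GB/s while memory consumes an estimated 37.5-50W of the board’s 250W budget.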
All the while, power consumption for consumer devices has been on a downward slope as consumers (and engineers) have made energy efficiency an increasingly important issue. The mobile space, with its fixed battery capacity, is of course the prime example, but even in the PC space power consumption for CPUs and GPUs has peaked and since come down some. The trend is towards more energy efficient devices – the idle power consumption of a 2005 high-end GPU would be intolerable in 2015 – and that throws yet another wrench into faster serial memory technologies: their power consumption would be going up at exactly the same time that overall power consumption is expected to come down, leaving individual devices with lower power limits to work within as a result.
Finally, coupled with all of the above are issues with scalability. We’ll get into this more when discussing the benefits of HBM, but in a nutshell, GDDR5 also ends up taking a lot of space, especially when we’re talking about the 384-bit and 512-bit configurations of current high-end video cards. At a time when everything is getting smaller, there is also a need to further miniaturize memory, something that GDDR5 and potential derivatives aren’t well suited to resolve.
The end result is that in the GPU memory space, the pendulum has started to swing back towards parallel memory interfaces. GDDR5 has been taken to the point where going any further would be increasingly inefficient, leading researchers and engineers to look for a wider next-generation memory interface. This is what has led them to HBM.
163 Comments
chizow - Wednesday, May 20, 2015 - link
Wouldn't be the first time David Kanter was wrong, certainly won't be the last. Still waiting for him to recant his nonsense article about PhysX lacking SSE and only supporting x87. But I guess that's why he's David Kanter and not David ReKanter.

Poisoner - Friday, June 12, 2015 - link
You're just making up stuff. No way Fiji is just two Tonga chips stuck together. My guess is your identity is wrapped up in nVidia so you need to spread fud.

close - Tuesday, May 19, 2015 - link
That will be motivation enough to really improve on the chip for the next generation(s), not just rebrand it. Because to be honest very, very few people need 6 or 8GB on a consumer card today. It's so prohibitively expensive that you'd just have an experiment like the $3000 (now just $1600) 12GB Titan Z.

The fact that a select few can or would buy such a graphics card doesn't justify the costs that go into building such a chip, costs that would trickle down into the mainstream. No point in asking 99% of potential buyers to pay more to cover the development of features they'd never use. Like a wider bus, a denser interposer, or whatever else is involved in doubling the possible amount of memory.
chizow - Tuesday, May 19, 2015 - link
Idk, I do think 6 and 8GB will be the sweet spot for any "high-end" card. 4GB will certainly be good for 1080p, but if you want to run 1440p or higher and have the GPU grunt to push it, that will feel restrictive, imo.

As for the expense, I agree it's a little bit crazy how much RAM they are packing on these parts. 4GB on the 970 I thought was pretty crazy at $330 when it launched, but now AMD is forced to sell their custom 8GB 290X for only around $350-360, and there are more recent rumors that Hawaii is going to be rebranded again for the R9 300 desktop series with a standard 8GB. How much are they going to ask for it is the question, because that's a lot of RAM to put on a card that sells for maybe $400 tops.
silverblue - Tuesday, May 19, 2015 - link
...plus the extra 30-ish watts of power just for having that extra 4GB. I can see why higher capacity cards had slightly nerfed clock speeds.

przemo_li - Thursday, May 21, 2015 - link
VR.

It requires 1080p at 90Hz x2 if one assumes graphics on par with current-gen non-VR graphics!

That is a lot of data to push to and from the GPU.
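For perspective, a rough sketch of the raw pixel throughput behind that claim (the per-eye 1080p at 90Hz figures are from the comment; 32-bit color is an assumption, and actual memory traffic would be many times the framebuffer number once overdraw and texturing are counted):

```python
width, height = 1920, 1080   # per-eye 1080p, per the comment
eyes, refresh_hz = 2, 90

pixels_per_sec = width * height * eyes * refresh_hz   # ~373 million pixels/s
bytes_per_pixel = 4                                   # assumes 32-bit color
framebuffer_gbs = pixels_per_sec * bytes_per_pixel / 1e9

print(f"{pixels_per_sec/1e6:.0f} Mpix/s -> ~{framebuffer_gbs:.1f} GB/s of raw framebuffer writes")
```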
robinspi - Tuesday, May 19, 2015 - link
Wrong. They will be using a dual-link interposer, making it 8GB instead of 4GB. Read more on WCCFTech: http://wccftech.com/amd-radeon-r9-390x-fiji-xt-8-h...
HighTech4US - Tuesday, May 19, 2015 - link
Wrong. 4GB first, 8GB to follow (on a dual-GPU card).
http://www.fudzilla.com/news/graphics/37790-amd-fi...
chizow - Tuesday, May 19, 2015 - link
Wow lol. That 4GB rumor again. And that X2 rumor again. And $849 price tag for just the single GPU version???! I guess AMD is looking to be rewarded for their efforts with HBM and hitting that ultra-premium tier? I wonder if the market will respond at that asking price if the single-GPU card does only have 4GB.

przemo_li - Thursday, May 21, 2015 - link
An artificial number like X GB won't matter.

An artificial number like YZW FPS in games S, X, and E will ;)
Do note that Nvidia needs to pack in lots of GB just as a side effect of the wide bus!

It works for them, but games do not require 12GB now, nor in the short-term future (no consoles!)