AMD to Ramp up GPU Production, But RAM a Limiting Factor
by Ryan Smith on January 31, 2018 7:15 AM EST
Posted in: GPUs, AMD, Cryptocurrency, DRAM, HBM2, GDDR5, The Winter of our Discontent
One of the trickier issues surrounding the GPU shortages of the past several months has been how to address the problem from the supply side. While the crux of the problem has been a massive shift in demand driven by a spike in cryptocurrency prices, that demand also hasn't tapered off like many of us had hoped. And while I hesitate to call the current situation the new normal, if demand isn't going to wane, then bringing video card prices back down to reasonable levels is going to require a supply-side solution.
This of course sounds a lot easier than it actually is. Ignoring for the moment that GPU orders take months to fulfill – there are a lot of steps in making a 16nm/14nm FinFET wafer – the bigger risk is that cryptocurrency-induced GPU demand is not stable. Ramping up GPU production means gambling that demand will stay high long enough to absorb the additional GPUs, and won't immediately contract and leave the market flooded with used video cards. The latter is an important point, and one AMD got burnt on the last time this happened, when the collapse of cryptocurrency prices – and with it, the demand for video cards – left the market flooded with used Hawaii (290/390 series) cards.
Getting to the heart of matters then, in yesterday’s Q&A session for their Q4’2017 earnings call, an analyst asked AMD about the current GPU supply situation and whether AMD would be ramping up GPU production. The answer, much to my surprise, was yes. But with a catch.
Q: I just had a question on crypto, I mean if I look at the amount of hash compute being added to Ethereum in January I mean it's more than the whole of Q4, so we have seen a big start to Q1. […] And is there any sort of acute shortages here, I mean can your foundry partners do they have the capacity to support you with a ramp of GPUs at the moment and is there enough HBM2 DRAM to source as well?
A: Relative to just where we are in the market today, for sure the GPU channel is lower than we would like it to be, so we are ramping up our production. At this point we are not limited by silicon per se, so our foundry partners are supplying us, there are shortages in memory and I think that is true across the board, whether you are talking about GDDR5, or you’re talking about high bandwidth memory. We continue to work through that, with our memory partners and that will be certainly one of the key factors as we go through 2018.
So yes, AMD is ramping up GPU production. Which is a surprising move since they were burnt the last time they did this. At the same time however, while cryptocurrency demand has hit both major GPU manufacturers, AMD has been uniquely hit as they’re a smaller player less able to absorb rapid changes in demand, and, more importantly, their GPUs are better suited for the task. AMD’s tradition of offering more memory bandwidth and more raw FLOPS than NVIDIA at any competing price point, coupled with some meaningful architectural differences, means that their GPUs are in especially high demand by cryptocurrency miners.
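To put some rough numbers on that last point, the Ethash algorithm used by Ethereum is essentially memory-bandwidth-bound: each hash requires on the order of 64 random 128-byte reads from the DAG, so a card's peak memory bandwidth sets a hard ceiling on its hash rate. The back-of-envelope sketch below illustrates the idea; the bandwidth figures are rounded board specs, and real-world hash rates land somewhat below these ceilings.

```python
# Back-of-envelope: why Ethash (Ethereum) mining throughput tracks memory bandwidth.
# Bandwidth figures are rounded board specs; treat the results as ceilings, not benchmarks.

ETHASH_ACCESSES = 64      # random DAG fetches per hash (per the Ethash spec)
ETHASH_MIX_BYTES = 128    # bytes fetched per access
BYTES_PER_HASH = ETHASH_ACCESSES * ETHASH_MIX_BYTES  # ~8 KiB of random reads per hash

cards_gbps = {
    "Radeon RX 580 (GDDR5)": 256,
    "Radeon RX Vega 56 (HBM2)": 410,
    "GeForce GTX 1070 (GDDR5)": 256,
    "GeForce GTX 1080 Ti (GDDR5X)": 484,
}

for name, bw in cards_gbps.items():
    ceiling_mhs = bw * 1e9 / BYTES_PER_HASH / 1e6
    print(f"{name}: ~{ceiling_mhs:.0f} MH/s theoretical ceiling")
```

The point is simply that bandwidth-heavy designs like Polaris and Vega look disproportionately attractive to Ethash miners, regardless of how they stack up in gaming performance per dollar.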
But perhaps the more interesting point here isn’t that AMD is increasing their GPU production, but why they can only increase it by so much. According to the company, they’re actually RAM-limited. They can make more GPUs, but they don’t have enough RAM – be it GDDR5 or HBM2 – to equip all of the cards AMD and board partners would like to make.
This is an interesting revelation, as this is the first time memory shortages have been explicitly identified as an issue in this latest run-up. We’ve known that the memory market is extremely tight due to demand – with multiple manufacturers increasing their RAM prices and diverting GDDR5 production over to DDR4 – but only now is that catching up with video card production to the point that current GDDR5 production levels are no longer “enough”. Of course RAM of all types is still in high demand here at the start of 2018, so while memory manufacturers can reallocate some more production back to GDDR5, GPU and board vendors have to fight with both the server and mobile markets, both of which have their own booms in demand going on, and are willing to pay top dollar for the RAM they need.
GDDR5: The Key To Digital Gold
In a sense the addition of cryptocurrency to the mix of computing workloads has created a perfect storm in an industry that was already dealing with RAM shortages. The RAM market is in the middle of a boom right now – part of its traditional boom/bust cycle – and while it will eventually abate as demand slips and more production gets built, for the moment cryptocurrency mining has just added yet more demand for RAM that isn’t there. Virtually all supply/demand problems can be solved through higher prices – at some point, someone has to give up – but given the trends we’ve seen so far, GPU users are probably the most likely to suffer, as traditionally the GPU market has been built on offering powerful processors paired with plenty of RAM for paltry prices. Put another way, even if the GPU supply situation were resolved tomorrow and there were infinite GPUs for all, RAM prices would be a bottleneck that kept video card prices from coming back down to MSRP.
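As for why miners can outbid gamers in the first place, the underlying arithmetic is simple payback-period math. The sketch below uses purely hypothetical numbers – the card price, coin revenue, and electricity rate are all assumptions for illustration – but it captures why, so long as the projected payback period looks short, a miner will rationalize paying well over MSRP in a way a gamer will not.

```python
# Illustrative payback-period arithmetic for a mining purchase.
# All figures are assumptions for the sake of example, not market data.

card_price = 500.0     # $ paid for the card in the inflated market (assumed)
daily_revenue = 5.00   # $ of coin mined per day (assumed)
power_draw_w = 150.0   # wall power while mining, in watts (assumed)
electricity = 0.10     # $ per kWh (assumed)

daily_power_cost = power_draw_w / 1000.0 * 24 * electricity
daily_profit = daily_revenue - daily_power_cost
payback_days = card_price / daily_profit

print(f"Daily profit: ${daily_profit:.2f}")
print(f"Payback period: {payback_days:.0f} days")
# As long as the payback period looks short, paying over MSRP still "pencils out"
# for a miner, which is exactly how higher prices ration gamers out first.
```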
With all that said, however, AMD’s brief response in their earnings call has been the only statement of substance they’ve made on the matter. So while the company is (thankfully) ramping up GPU production, they haven’t – and are unlikely to ever – disclose just how many more GPUs that is, or for that matter how much RAM they expect they and partners can get for those new GPUs. So while any additional production will at least help the current situation to some extent, I would caution against getting too hopeful about AMD’s ramp-up bringing the video card shortage to an end.
31 Comments
willis936 - Wednesday, January 31, 2018 - link
I keep seeing "AMD's GPUs are more suited to mining". That is old news. The 400 series was the king for a long time and arguably is currently second. The 500 series is a sidegrade, and a downgrade when cost is factored in. Vega is not suited for mining at all in terms of efficiency or upfront cost. The NVIDIA 1060 and 1070, however, are the current kings of crypto mining and have been for over half a year. Ramping up Vega production for mining just doesn't make sense. Maybe to satisfy the gaming market that can't afford popular GPUs whose prices are affected by the current mining craze. The 1080 isn't a go-to pick in terms of efficiency, but its price is higher than normal because gamer demand has been funneled into it.

Stuka87 - Wednesday, January 31, 2018 - link
At this exact point in time, the model of the card doesn't matter. Heck, people are buying up mass quantities of 1080 Tis for mining. There was a photo on the forums last week of a couple that bought something like 100 of them. Sure, it's dumb to be buying those, but that's not stopping people.

0102030405 - Thursday, February 1, 2018 - link
It's absolutely not dumb to be mining with 1080s. They get the best hash rate of any cards when mining Equihash. There's more than one algorithm in crypto.
bpepz - Wednesday, January 31, 2018 - link
Actually the Vega GPUs are insanely good at mining, the best at this time in fact at any price. The reason is that certain algorithms, like the one for Monero, only need memory bandwidth for mining. Because of this, Vegas can be underclocked to the point they only use 135 watts at full load; combine that with HBCC and underclocking the core, and they can do 2100 H/s. Compare that to a 1080 Ti at $699 and 250 watts that only does 800 H/s.

Samus - Wednesday, January 31, 2018 - link
Yeah, I don’t get where he is getting his data from that VEGA is bad for mining. There are very clear reasons you can’t find them, and I don’t believe it is simply production related. Demand is extraordinarily high because of these stupid mining rigs people build. At the end of the day you are better off with ANTminers than a non-Titan nvidia GPU because of the nerfed floating point precision.

Dragonstongue - Wednesday, January 31, 2018 - link
Completely depends on the mining in question. Radeons have more or less ALWAYS been superior at crunching vs their Nv counterparts. Yes, the newest Nv cards are excellent at SOME types of mining because of the mentioned power usage, but the Radeons are still very much KING when it comes to crunching power, because at the very least AMD did not strip everything away to focus performance on specific use cases.

Nv for years and years were quite power demanding, which resulted in temperature issues (especially when the vregs, capacitors, etc. they used were of a lower tolerance to heat than competing Radeons, i.e. 105C vs 125C). Slowly, generation after generation, they focused their efforts on tweaking the design to get power use down while maintaining "some" of the grunt. This also let them hit higher clock speeds, which makes them "appear" so much faster – technically they are not directly faster, but indirectly, because of the fancy tricks they use in hardware/software, they end up being quite fast.
As mentioned, however, that is not the case everywhere; some mining or hashing programs are infinitely faster on Radeon hardware, especially when the software was written to take advantage of the way Radeons are designed. For example, code cracking 9 times out of 10 does not use Nv hardware, it uses Radeon. Mining is just another form of this – though by all means, I'm sure some forms of cracking or mining are tweaked to capitalize on the GeForce design vs the Radeon design.
Radeons are (at least for the moment) more like a sledgehammer, more robust, whereas Nv has transitioned to more of a surgeon's knife approach.
SniperWulf - Wednesday, January 31, 2018 - link
"I keep seeing “AMDs GPUs are more suited to mining”. That is old news. "This is where you are mistaken. The current architectures on both Red and Green teams excel at certian algorithms and are meh at others. Nv can't be touched in equihash, Lyra and a few others, while AMD can't be touched in Cryptonight, Ethash and a few others. It's all about picking the right tool for the job.
If you're planning to mine Ethereum, why buy a 1070 Ti @ $449 (MSRP) to get 30 MH/s when an RX 480 @ $239 (MSRP) or RX 580 @ $229 (MSRP) can do the same job at 135W?
It wouldn't make sense to buy 1080 Ti's ($779 MSRP) to mine Monero @ 800H/s when a Vega 56 ($399 MSRP) can get 1900H/s at the same 135W.
On the flip side, I wouldn't buy any Radeons if Zencash, Zclassic or Verge were my coin/algo of choice.
Granted that those prices mean dick in today's market, it's not about one company vs another, it's all about picking the right tool for the job.
Samus - Wednesday, January 31, 2018 - link
Oh that’s interesting. Thanks for the post. Didn’t realize different architectures excelled so much in different algorithms.

mspamed - Monday, February 5, 2018 - link
As an owner of 2 GTX 1060s and 1 RX 570 4GB: the RX 570 was $20 costlier. The GTX 1060s made $3 per day during their height last month while the RX 570 made $5, and now that mining profits are down, the GTX cards are making me $2.20 per day while the RX 570 is making me $3.25 per day. The GTX cards are undervolted but still take 110 watts, while the RX 570 only takes 90 watts. NVIDIA, or at least the GTX 1060, is definitely not better than the RX 570 for mining, even at current rates.

Torrijos - Wednesday, January 31, 2018 - link
The question then is: shouldn't they (the GPU card manufacturers) build special units with the right amount of memory for crypto (I imagine a couple of GiB are enough) instead of letting cards with 8-16 GiB go to waste?
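As a closing aside on that question: at least for Ethereum, "a couple of GiB" wouldn't stay enough for long, because the Ethash DAG that has to fit in video memory grows every epoch. The sketch below is a rough approximation using the nominal Ethash growth constants; the real dataset size is adjusted to nearby primes, and the block numbers used here are assumptions for illustration.

```python
# Rough estimate of the Ethereum (Ethash) DAG size over time, illustrating why
# a card with only "a couple of GiB" of memory ages out of this algorithm.
# Nominal constants from the Ethash spec; actual sizes are nudged to nearby primes.

DATASET_BYTES_INIT = 2**30    # ~1 GiB DAG at epoch 0
DATASET_BYTES_GROWTH = 2**23  # ~8 MiB added per epoch
EPOCH_LENGTH = 30000          # blocks per epoch

def approx_dag_gib(block_number):
    epoch = block_number // EPOCH_LENGTH
    return (DATASET_BYTES_INIT + DATASET_BYTES_GROWTH * epoch) / 2**30

# Ethereum was around block ~4.9M in late January 2018 (assumed here);
# at ~15 second blocks, a year adds roughly 2.1M more blocks.
for label, block in [("early 2018", 4_900_000), ("~1 year later", 7_000_000)]:
    print(f"{label}: DAG ~ {approx_dag_gib(block):.2f} GiB")
```

At roughly half a GiB of growth per year, even 3-4 GiB cards have a limited shelf life for this particular algorithm, which is one practical argument against purpose-built low-memory mining boards.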