E3 2005 - Day 2: More Details Emerge on Console GPUs
by Anand Lal Shimpi on May 19, 2005 1:24 PM EST - Posted in Trade Shows
Scratching the Surface of NVIDIA's RSX
As we mentioned before, NVIDIA's RSX is the more PC-like of the two GPU solutions. Unlike ATI's offering, the RSX is based on an NVIDIA PC GPU, the upcoming G70 (the successor to the GeForce 6).
The RSX is a 90nm GPU weighing in at over 300 million transistors and fabbed by Sony at two plants, their Nagasaki plant and their joint fab with Toshiba.
The RSX follows a more conventional dataflow, with discrete pixel and vertex shader units. Sony has yet to announce the exact number of pixel and vertex shader units, potentially because that number may still change depending on yields. This time around Sony seems very careful not to release specs that are subject to change, in order to avoid the sort of backlash they saw with the PS2. Given the transistor count and 90nm process, you can definitely expect the RSX to feature more than the 16 pipes of the present-day GeForce 6800 Ultra. As for how many, we'll have to wait for Sony on that.
NVIDIA confirmed that the RSX features full FP32 support, like the current generation GeForce 6 as well as ATI's Xbox 360 GPU. NVIDIA also announced that the RSX can execute 136 shader operations per cycle, a number greater than ATI's announced 96 shader ops per cycle. Since we don't know how NVIDIA derived this value, we can't be certain that it can be directly compared with ATI's 96 shader ops per cycle.
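To put that caveat in concrete terms, the sketch below is a back-of-the-envelope calculation, not a benchmark: the clock speeds are the figures floated at E3 and should be treated as assumptions, and the larger unknown is what each vendor counts as a single "operation" (a vector MADD, for instance, could be tallied as anywhere from one to several ops).

```python
# Back-of-the-envelope only. The per-cycle figures are the vendors' marketing
# numbers; the clock speeds are the E3 figures and are assumptions that could
# change before either console ships.

def shader_ops_per_second(ops_per_cycle, clock_hz):
    """Theoretical shader operations per second."""
    return ops_per_cycle * clock_hz

rsx = shader_ops_per_second(136, 550e6)    # NVIDIA's claim at an assumed 550MHz
xenos = shader_ops_per_second(96, 500e6)   # ATI's claim at an assumed 500MHz

print(f"RSX:      {rsx / 1e9:.1f} billion shader ops/sec")
print(f"Xbox 360: {xenos / 1e9:.1f} billion shader ops/sec")
# Even if the arithmetic favors one chip, the comparison only holds if both
# vendors count an "operation" the same way, which we can't confirm.
```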
Given that the RSX is based on NVIDIA's G70 architecture, you can expect a similar feature set on the PC later this year. In fact, NVIDIA stated that by the time the PS3 ships there will be a more powerful GPU available on the desktop. This is in stark contrast to ATI's stance that a number of the Xbox 360 GPU's features won't make it to the desktop for a few years (potentially the unified shader architecture), while others may never be seen on the desktop (embedded DRAM?).
There will definitely be some differences between the RSX GPU and future PC GPUs, for a couple of reasons:
1) NVIDIA stated that they have never had a CPU as powerful as Cell to pair with, so the RSX has to be able to swallow a much larger command stream than any PC GPU; current generation desktop CPUs are comparatively poor at keeping the GPU fed.
2) The RSX has a 35GB/s link to the CPU, far more than any desktop GPU's bus, so the TurboCache architecture has to be reworked considerably for the console GPU to take advantage of all that bandwidth: functional unit latencies must be adjusted, buffer sizes changed, and so on. A rough sense of the gap is sketched below.
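For a sense of scale, here is a minimal sketch (our own illustration, not anything from NVIDIA) of how much data each link could theoretically deliver per rendered frame. The AGP 8x and PCI Express x16 numbers are the usual theoretical peaks, and the 35GB/s figure is the aggregate Cell-to-RSX number quoted above, which may well be split unevenly between read and write traffic.

```python
# Rough per-frame data budget for several CPU-to-GPU links. Bandwidth values are
# theoretical peaks, not sustained rates, and the 60fps framing is our own.

LINKS_GB_PER_SEC = {
    "AGP 8x":          2.1,   # theoretical peak
    "PCI Express x16": 4.0,   # theoretical peak, per direction
    "Cell <-> RSX":    35.0,  # aggregate figure as quoted
}

FRAME_RATE = 60  # frames per second

for name, bandwidth_gb in LINKS_GB_PER_SEC.items():
    per_frame_mb = bandwidth_gb * 1024 / FRAME_RATE
    print(f"{name:16s} ~{per_frame_mb:5.0f} MB per frame at {FRAME_RATE}fps")

# An order of magnitude more data arriving per frame means buffer sizes and
# latency-hiding assumptions tuned for a PC bus no longer apply, which is the
# rework NVIDIA describes above.
```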
We did ask NVIDIA about technologies like a unified shader model and embedded DRAM. Their stance continues to be that with every GPU generation they design and test features such as unified shaders, embedded DRAM, RDRAM, and tiled rendering architectures, and evaluate their usefulness. They have apparently done a unified shader design, and the performance simply didn't make sense for their architecture.
NVIDIA isn't saying that a unified shader architecture doesn't make sense, only that at this point in time, for NVIDIA's GPUs, it isn't the best call. In NVIDIA's view, a unified shader architecture offers higher peak performance (e.g. in workloads that are all pixel instructions or all vertex instructions), but getting good performance in more balanced scenarios is harder. Another issue is that the instruction mix for pixel and vertex shaders is very different, so the optimal functional units for each differ as well. Finally, a unified shader architecture requires a much more complex design, which in turn increases die area.
NVIDIA stated that they will eventually do a unified shader GPU, but before then there are a number of other GPU enhancements they are looking to implement first: potentially things like a programmable ROP, programmable rasterization, programmable texturing, and so on.
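The peak-versus-balanced tradeoff is easier to see with a toy model. The sketch below is purely our own illustration: the 24-unit total and the 8/16 vertex/pixel split are hypothetical numbers, not anything NVIDIA has disclosed, and it deliberately ignores the costs NVIDIA cites (different optimal ALUs for pixel versus vertex math, plus the extra scheduling logic and die area a unified design needs).

```python
# Toy model of discrete vs unified shader utilization. All figures hypothetical.

TOTAL_UNITS = 24
FIXED_VERTEX_UNITS = 8                               # hypothetical hardware split
FIXED_PIXEL_UNITS = TOTAL_UNITS - FIXED_VERTEX_UNITS

def discrete_utilization(vertex_fraction):
    """Fraction of units kept busy when the vertex/pixel split is fixed in hardware."""
    pixel_fraction = 1.0 - vertex_fraction
    # The frame finishes when the more heavily loaded pool finishes; the other
    # pool idles for the remainder.
    time_taken = max(vertex_fraction / FIXED_VERTEX_UNITS,
                     pixel_fraction / FIXED_PIXEL_UNITS)
    ideal_time = 1.0 / TOTAL_UNITS   # every unit busy, as a unified pool could be
    return ideal_time / time_taken

for vertex_share in (0.0, 0.1, 1/3, 0.5, 1.0):
    print(f"vertex share {vertex_share:4.0%}: "
          f"discrete utilization {discrete_utilization(vertex_share):6.1%}")

# Near the designed 1:2 mix the fixed split is nearly perfect; at the extremes
# (all-pixel or all-vertex work) a unified pool could in principle keep every
# unit busy, which is the higher peak performance NVIDIA refers to.
```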
Final Words
We're going to keep digging on both of these GPUs; as soon as we have more information we'll report it, but for now this looks like the best we'll get out of Microsoft and Sony.
22 Comments
Shinei - Thursday, May 19, 2005 - link
Jarred, I thought ATI made the XBox 2 GPU specifically for the console, and wasn't incorporating any of its features into the R520? I'm not sure I agree that spending most of your R&D on a "dead-end" GPU is the best tactic; nVidia's approach of optimizing an existing desktop GPU architecture seems to be the more efficient way to spend R&D capital. It also allows nVidia to take any lessons learned from the PS3 GPU and add/modify them when they finally release the G70 (hopefully with fully functional PureVideo, not just "sort of functional" PureVideo--I'm paying for the transistor real estate in price and heat, I better be able to use it this time!)...
JarredWalton - Thursday, May 19, 2005 - link
Low Roller - I wouldn't put too much stock in that figure for the X360 GPU. The way the chip is designed (split in two pieces), I wouldn't be surprised to find that one piece is 150 million and the other is maybe 150 to 200 million.
My DRAM knowledge is a bit fuzzy, and the "Embedded DRAM" is something we don't have specifics on, but 10MB of RAM represents 83,886,080 bits, and best case scenario you're using 1 transistor per bit. SRAM uses 6, and perhaps DRAM is 2? 2 transistors per bit would already put just the embedded RAM at 167,772,160 transistors. Heh. 150 million is WAY too small, no matter what IGN says.
As a separate thought, I wouldn't be surprised to see the Xbox 360 GPU end up the more powerful of the two graphics chips. The reason is based on inference: R4xx is very similar to R3xx, meaning ATI didn't spend as much resources creating R4xx as NVIDIA spent on NV4x. If their R&D teams are about equal in size, where did ATI's extra efforts end up being spent? That's right: the Xbox 360 GPU. This is simply a deductive guess, and it could be wrong, but it's something to consider. NVIDIA spent a lot of effort recovering from the NV3x (FX) fiasco.
What makes this all really entertaining to me is the following:
http://www.anandtech.com/news/shownews.aspx?i=2427...
If anything, it seems like the PS3 GPU is more of a PC design with less "future technology". In other words, everything said by MS and Sony is complete hype and should be taken with a massive helping of salt. :)
Iftekharalam - Thursday, May 19, 2005 - link
Which graphics processor will be more powerful? The XBOX 360 or the PS3? Nintendo's future gaming console also uses an ATI GPU, codenamed "Hollywood".
Low Roller - Thursday, May 19, 2005 - link
AnandTech's article says they were not able to get a transistor count out of ATI for the Xbox 360. According to IGN, the Xbox 360's GPU only has 150 million transistors, compared to the G70's 300 million.
http://xbox360.ign.com/articles/612/612995p1.html?...
araczynski - Thursday, May 19, 2005 - link
nice info. too bad i could care less which gpu is used in which console, i'm more interested in which console will have some original quality games...
R3MF - Thursday, May 19, 2005 - link
sounds good, shame about the non unified shader model tho. maybe nvidia are right, but i like advanced tech. :p
Shinei - Thursday, May 19, 2005 - link
nVidia's apparently pulling out the 16" battle cannons with RSX/G70--136 shader ops per clock is damn impressive, regardless of whether the GPU is console or desktop... And if nVidia says the desktop G70's going to be even more powerful than RSX, I'm willing to bet that there will be at least 10 shader units pumping to 24+ pipes in full FP32 quality. Nice. :)
AnandThenMan - Thursday, May 19, 2005 - link
Very very interesting article. ATi and NVIDIA seem to have diverging paths. All things considered, this tells me that ATi has the more advanced GPU. Does that mean a faster GPU, though?
EODetroit - Thursday, May 19, 2005 - link
How about anything about the new physics processor?
Garyclaus16 - Thursday, May 19, 2005 - link
...sooo much for my bfg6800U... seems like I'm already way behind again.