Meet The 2013 GPU Benchmark Suite & The Test

Having taken a look at the compute side of Titan, let’s finally dive into what most of you have probably been waiting for: our gaming benchmarks.

As this is the first major launch of 2013, it's also the first time we'll be using our new 2013 GPU benchmark suite. This suite should be considered a work in progress, as it's not yet complete. With several high-profile games due in the next 4 weeks (and no other product launches expected), we expect to expand the suite to integrate those games as they arrive. In the meantime we have composed a slightly smaller suite of 8 games that will serve as our base.

AnandTech GPU Bench 2013 Game List
Game | Genre
DiRT: Showdown | Racing
Total War: Shogun 2 | Strategy
Hitman: Absolution | Action
Sleeping Dogs | Action/Open World
Crysis: Warhead | FPS
Far Cry 3 | FPS
Battlefield 3 | FPS
Civilization V | Strategy

Returning to the suite will be Total War: Shogun 2, Civilization V, Battlefield 3, and of course Crysis: Warhead. With no performance-demanding AAA strategy games released in the last year, we’re effectively in a holding pattern for new strategy benchmarks, hence we’re bringing Shogun and Civilization forward. Even 2 years after its release, Shogun 2 can still put an incredible load on a system on its highest settings, and Civilization V is still one of the more advanced games in our suite due to its use of driver command lists for rendering. With Company of Heroes 2 due here in the near future we may finally get a new strategy game worth benchmarking, while Total War will be returning with Rome 2 towards the end of this year.
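
Since we just credited Civilization V's use of driver command lists, a brief illustration may be in order. The game spreads its rendering across threads using Direct3D 11 deferred contexts, and the win is largest when the driver implements command lists natively instead of leaving the D3D11 runtime to emulate them. The snippet below is our own minimal sketch of the API calls involved, purely illustrative and not taken from the game's code:

#include <d3d11.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")

int main() {
    // Create a hardware device and its immediate context.
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* immediate = nullptr;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION,
                                 &device, nullptr, &immediate)))
        return 1;

    // Does the driver support command lists natively? If not, the D3D11
    // runtime emulates them in software and much of the benefit is lost.
    D3D11_FEATURE_DATA_THREADING caps = {};
    device->CheckFeatureSupport(D3D11_FEATURE_THREADING, &caps, sizeof(caps));
    printf("Driver command lists: %s\n",
           caps.DriverCommandLists ? "native" : "emulated");

    // Record work on a deferred context (normally from a worker thread)...
    ID3D11DeviceContext* deferred = nullptr;
    if (SUCCEEDED(device->CreateDeferredContext(0, &deferred))) {
        // ...issue state changes and Draw() calls on 'deferred' here...
        ID3D11CommandList* cmdList = nullptr;
        if (SUCCEEDED(deferred->FinishCommandList(FALSE, &cmdList))) {
            // ...then play the whole list back on the immediate context.
            immediate->ExecuteCommandList(cmdList, TRUE);
            cmdList->Release();
        }
        deferred->Release();
    }

    immediate->Release();
    device->Release();
    return 0;
}

As of this writing NVIDIA's DX11 drivers report native command list support while AMD's do not, which goes a long way toward explaining how Civilization V behaves in our charts.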

Meanwhile Battlefield 3 is still among the most popular multiplayer FPSes, and though newer video cards have lightened its system-killer status, it still takes a lot of horsepower to play. Furthermore the engine behind it, Frostbite 2, is used in a few other action games, and will be used for Battlefield 4 at the end of this year. Finally we have the venerable Crysis: Warhead, our legacy entry. As the only DX10 title in the current lineup it’s good for tracking performance against our oldest video cards, plus it’s still such a demanding game that only the latest video cards can play it at high framerates and resolutions with MSAA.

As for the new games in our suite, we have added DiRT: Showdown, Hitman: Absolution, Sleeping Dogs, and Far Cry 3. DiRT: Showdown is the annual refresh of the DiRT racing franchise from Codemasters, based upon their continually evolving racing engine. Meanwhile Hitman: Absolution is last year's highly regarded third-person action game, and notably for this day and age it features a built-in benchmark, albeit a somewhat CPU-intensive one. As for Sleeping Dogs, it's a rare treat: a benchmarkable open world game, something practically unheard of for the genre. And finally we have Far Cry 3, the latest rendition of the Far Cry franchise. A popular game in its own right, its jungle environment can be particularly punishing.

These games will be joined throughout the year by additional titles as we find games that meet our needs and standards, and for which we can create meaningful benchmarks and validate performance. As with 2012, we're looking at having roughly 10 game benchmarks at any given time.

Meanwhile from a settings and resolution standpoint we have finally (and I might add, begrudgingly) moved from 16:10 resolutions to 16:9 resolutions in most cases to better match the popularity of 1080p monitors and the recent wave of 1440p IPS monitors. Our primary resolutions are now 2560x1440, 1920x1080, and 1600x900, with an emphasis on 1920x1080 at lower settings ahead of dropping to lower resolutions, given the increasing marginalization of monitors with sub-1080p resolutions. The one exception is our triple-monitor resolution, which stays at 5760x1200. This is purely for technical reasons, as NVIDIA's drivers do not consistently offer us 5760x1080 on the 1920x1200 panels we use for testing.

As for the testbed itself, we've changed very little. Our testbed remains our trusty 4.3GHz SNB-E, backed with 16GB of RAM and running off of a 256GB Samsung 470 SSD. The one change we have made is that, having validated our platform as being able to handle PCIe 3.0 just fine, we are forcibly enabling PCIe 3.0 on NVIDIA cards where it's normally disabled. NVIDIA disables PCIe 3.0 by default on SNB-E systems due to inconsistencies in the platform, but as our goal is to remove every non-GPU bottleneck, we have little reason to leave it disabled, especially since most buyers will be on Ivy Bridge platforms where PCIe 3.0 is fully supported.
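
For readers who want to verify the same thing on their own systems, the negotiated link speed can be checked programmatically through NVML, the library that underpins nvidia-smi. The following is just an illustrative sketch, not the validation method we used, and it assumes the NVML header and library from NVIDIA's driver deployment kit are installed:

#include <nvml.h>
#include <cstdio>

int main() {
    // Initialize NVML; this requires an installed NVIDIA driver.
    if (nvmlInit() != NVML_SUCCESS) return 1;

    unsigned int count = 0;
    nvmlDeviceGetCount(&count);
    for (unsigned int i = 0; i < count; i++) {
        nvmlDevice_t dev;
        if (nvmlDeviceGetHandleByIndex(i, &dev) != NVML_SUCCESS) continue;

        char name[NVML_DEVICE_NAME_BUFFER_SIZE];
        unsigned int cur = 0, max = 0;
        nvmlDeviceGetName(dev, name, sizeof(name));
        nvmlDeviceGetCurrPcieLinkGeneration(dev, &cur);  // generation in use right now
        nvmlDeviceGetMaxPcieLinkGeneration(dev, &max);   // best the GPU/platform supports
        printf("GPU %u (%s): PCIe gen %u (max %u)\n", i, name, cur, max);
    }

    nvmlShutdown();
    return 0;
}

A card forced to PCIe 3.0 on SNB-E should report generation 3 here under load; if the driver has fallen back, it will report generation 2 instead (and note that cards commonly drop to lower link generations at idle to save power).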

Finally, we’ve also used this opportunity to refresh a couple of our cards in our test suite. AMD’s original press sample for the 7970 GHz Edition was a reference 7970 with the 7970GE BIOS, a configuration that was more-or-less suitable for the 7970GE, but not one AMD’s partners followed. Since all of AMD’s partners are using open air cooling, we’ve replaced our AMD sample with HIS’s 7970 IceQ X2 GHz Edition, a fairly typical representation of the type of dual-fan coolers that are common on 7970GE cards. Our 7970GE temp/noise results should now be much closer to what retail cards will do, though performance is unchanged.

Unfortunately we’ve had to deviate from that almost immediately for CrossFire testing. Our second HIS card was defective, so due to time constraints we’re using our original AMD 7970GE as our second card for CF testing. This has no impact on performance, but it means that we cannot fairly measure temp or noise. We will update Bench with those results once we get a replacement card and run the necessary tests.

Finally, we also have a Powercolor Devil13 7990 as our 7990 sample. The Devil13 was a limited run part and has been replaced by the plain 7990, the difference between them being a 25MHz advantage for the Devil13. As such we’ve downclocked our Devil13 to match the basic 7990’s specs. The performance and power results should perfectly match a proper retail 7990.

CPU: Intel Core i7-3960X @ 4.3GHz
Motherboard: EVGA X79 SLI
Power Supply: Antec True Power Quattro 1200
Hard Disk: Samsung 470 (256GB)
Memory: G.Skill Ripjaws DDR3-1867 4 x 4GB (8-10-9-26)
Case: Thermaltake Spedo Advance
Monitor: Samsung 305T
Video Cards:

AMD Radeon HD 7970
AMD Radeon HD 7970 GHz Edition
PowerColor Radeon HD 7990 Devil13
NVIDIA GeForce GTX 580
NVIDIA GeForce GTX 680
NVIDIA GeForce GTX 690
NVIDIA GeForce GTX Titan

Video Drivers: NVIDIA ForceWare 314.07
NVIDIA ForceWare 314.09 (Titan)
AMD Catalyst 13.2 Beta 6
OS: Windows 8 Pro

Comments (337)

  • chizow - Saturday, February 23, 2013

    I haven't used this rebuttal in a long time; I reserve it for only the most deserving, but you, sir, are retarded.

    Everything you've written above is anti-progress; you've set Moore's law and semiconductor progress back 30 years with your asinine rants. If idiots like you were running the show, no one would own any electronic devices, because we'd be paying $50,000 for toaster ovens.
  • CeriseCogburn - Tuesday, February 26, 2013

    Yeah, that's a great counter, you idiot... as usual, when reality barely glints a tiny bit through your lying tin-foiled dunce cap, a sensationalistic pile of bunk is all you have.
    A great cover for a cornered doofus.
    When you finally face your immense error, you'll get over it.

  • hammer256 - Thursday, February 21, 2013

    Not to sound like a broken record, but for us in scientific computing using CUDA, this is a godsend.
    The GTX 680 release was a big disappointment for compute, and I was worried that this was going to be the trend going forward with Nvidia: nerfed compute on consumer cards that focus on graphics, and compute-heavy professional cards for the HPC space.
    I was worried that the days of cheap compute were gone. Those days might still be numbered, but at least for this generation, Titan is going to keep it going.
  • ronin22 - Thursday, February 21, 2013

    +1
  • PCTC2 - Thursday, February 21, 2013

    For all of you complaining about the $999 price tag: it's like the GTX 690 (or even the 8800 Ultra, for those who remember it). It's a flagship luxury card for those who can afford it.

    But that's beside the real point. This is a K20 without the price premium (and without some of the valuable Tesla features). For researchers on a budget, using homegrown GPGPU compute code that isn't validated to run only on Tesla cards, these are a godsend. Some professional programs will benefit from having a Tesla over a GTX card, but these days researchers are trying to reach into HPC space without the price premium of true HPC enterprise hardware. The GTX Titan is a good middle point. For the price of a Quadro K5000 and a single Tesla K20c card, they can purchase 4 GTX Titans and still have some money to spare. They don't need SLI. They just need the raw compute power these cards are capable of. So as entry GPU compute workstation cards, these hit the mark for those wanting to enter GPU compute on a budget. As a graphics card for your gaming machine, average gamers need not apply.
  • ronin22 - Thursday, February 21, 2013

    "average gamers need not apply"

    If only people had read this before posting all this hate.

    Again, gamers, this card is not for you. Please get the cr*p out of here.
  • CeriseCogburn - Tuesday, February 26, 2013

    You have to understand, the review sites themselves have pushed the blind-FPS mentality for years now, not to mention breathlessly declared statistical percentages, ripe with over-interpretation, aimed at the now contorted and controlled crybaby whiners. It's what they do every time; they feel it gives them the status of consumer advisor, Nader protege, fight-the-man activist, and knowledgeable enthusiast.

    Unfortunately that comes down to the ignorant demands we see here, twisted with as many lies and conspiracies as are needed to increase the personal faux outrage.
  • Dnwvf - Thursday, February 21, 2013

    In absolute terms, this is the best non-Tesla compute card on the market.

    However, looking at FLOPS/$, you'd be better off buying 2 7970GHz Radeons, which would run around $60 less and give you more total FLOPS. Look at the compute scores: Titan is generally not 2x a single 7970, and in some of the compute tests the 7970 outright wins.

    Two 7970GHz cards (not even in CrossFire mode; you don't need that for OpenCL) will beat the crap out of Titan and cost less. They couldn't run AOPR on the AMD cards... but everybody knows from Bitcoin that AMD cards rule over NVIDIA for password hashing (just google bitcoin bit_align_int to see why).

    There's an article on Tom's Hardware where they put a bunch of NVIDIA and AMD cards through a series of compute benchmarks, and when AMD isn't winning, the GTX 580 generally beats the 680, most likely due to its wider 384-bit bus (the 680 is only 256-bit). Titan is still a 384-bit bus. Can't really compare on price because Phi costs an arm and a leg like Tesla, but you have to acknowledge that Phi is probably gonna rock out with its 512-bit bus.

    Gotta give Nvidia kudos for finally not crippling fp64, but at this price point, who cares? If you're looking to do compute and have a GPU budget of $2K, you could buy:

    An older Tesla
    2 Titans
    -or-
    Build a system with 2 7970GHz and 2 GTX 580 cards.

    And the last system would be the best: compute on the AMD cards for certain algorithms, on the NVIDIA cards for the others, and PCIe bandwidth issues aside, running multiple complex algorithms simultaneously will rock, because you can enqueue and execute 4 OpenCL kernels simultaneously (see the sketch at the end of this comment). You'd have to shop around for a while to find some 580s though.

    Gamers aren't gonna buy this card unless they're spending Daddy's money, and serious compute folk will realize quickly that if they buy a mobo that will fit 2 or 4 double-width cards, depending on GPU budget, they can get more FLOPS per dollar with a multiple-card setup (think of it as a micro-sized GPU compute cluster). Don't believe me? Google Jeremi Gosney oclHashcat.

    I'm not much for puns, but this card is gonna flop. (sorry)
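
    To illustrate the multi-card point, here's a minimal sketch, assuming the stock OpenCL 1.2 headers (illustrative only): every GPU enumerates as its own device and gets its own command queue, with no CrossFire or SLI involved:

    #include <CL/cl.h>
    #include <cstdio>

    int main() {
        // Find every OpenCL platform (AMD and NVIDIA runtimes can coexist).
        cl_uint numPlatforms = 0;
        clGetPlatformIDs(0, nullptr, &numPlatforms);
        cl_platform_id platforms[8];
        if (numPlatforms > 8) numPlatforms = 8;
        clGetPlatformIDs(numPlatforms, platforms, nullptr);

        for (cl_uint p = 0; p < numPlatforms; p++) {
            cl_device_id devices[8];
            cl_uint numDevices = 0;
            if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8,
                               devices, &numDevices) != CL_SUCCESS)
                continue;

            for (cl_uint d = 0; d < numDevices; d++) {
                char name[256];
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
                printf("Platform %u, GPU %u: %s\n", p, d, name);

                // One context/queue per device; kernels enqueued on separate
                // queues can run concurrently across the cards.
                cl_int err;
                cl_context ctx = clCreateContext(nullptr, 1, &devices[d],
                                                 nullptr, nullptr, &err);
                cl_command_queue q = clCreateCommandQueue(ctx, devices[d], 0, &err);
                // ...clEnqueueNDRangeKernel(q, ...) with each device's workload...
                clReleaseCommandQueue(q);
                clReleaseContext(ctx);
            }
        }
        return 0;
    }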
  • DanNeely - Thursday, February 21, 2013

    Has any ETA for the rest of the Kepler refresh leaked out yet?
  • HisDivineOrder - Thursday, February 21, 2013

    It's way out of my price range, first and foremost.

    Second, I think the pricing is a mistake, but I know where they're coming from. They're using the same Intel school of thought as with SB-E compared to IB. They price it out the wazoo and only the most luxury-minded of luxury gamers will buy it. It doesn't matter that the benchmarks show it's only modestly better than its competition down at the $400-500 range, and not the all-out destruction you might think it capable of.

    The cost will be so high it will be spoken of in whispers and with wary glances around, fearful that the Titan will appear and step on you. It'll be rare, and rare things are seen as legendary, just so long as they can make the case that it's the fastest single-GPU card out there.

    And they can.

    So in short, it's like those people buying hexacore CPUs from Intel. You pay through the nose, you get little real gain and horrible performance per dollar, and it's more marketing than common sense.

    If nVidia truly wanted to use this product to serve all users, they would have priced it at $600-700 and moved a lot more units. They don't want that. They're fine with the 670/680 being the high end for the majority of users. Those cards have to be cheap to make by now, and with AMD's delays/stalls/whatever, they can keep them the way they are, or update them with a firmware update and perhaps a minor retooling of the fab design to give them GPU Boost 2.

    They've already set the stage for that, imho. If you read the way the articles about GPU Boost 2 are written (both of them), you can see nVidia setting up a stage where they introduce slightly modified versions of the 670 and 680 with "minor updates to the GPU design" and GPU Boost 2, giving them more headroom and improved consistency over the current designs.

    Which again would be stealing from Intel's playbook of supplementing SB-E with mainstream IB cores.

    The price is obscene, but the only people who should actually care are the ones who worship at the altar of AA. Start lowering that and suddenly even a 7950 is way ahead of what you need.
