Comments Locked

67 Comments

  • Flunk - Thursday, February 26, 2015 - link

    This is an interesting article, not too surprising, but it's nice to see.

    I don't think we're going to see any DX12 games designed to be realistically playable on Intel's integrated GPUs, so it's not likely to be a big issue.
  • nathanddrews - Thursday, February 26, 2015 - link

    I'm not sure that's a fair hypothesis. Star Swarm - as widely discussed - is not technically a game as much as it is a tech demo. It's clear that there is a massive reduction to batch submission time for all GPUs. In a realistic game scenario, I think this will dramatically improve minimum frames on Intel IGP, but won't affect much else, given the weak overall GPU power. I guess we'll see.

    Speaking of minimum frames... AT?
  • MrSpadge - Friday, February 27, 2015 - link

    Another point to consider is power consumption. Since most gaming on Intel GPUs will be done on mobile platforms, power is important. If the CPU needs to spend less power for the same result, more power is available to the GPU. This can either provide a speed-up (if the GPU was power limited) or at least power savings & noise reductions (if the GPU was already running full throttle).
  • yankeeDDL - Thursday, February 26, 2015 - link

    Have games like Tomb Raider, Grid, F1 been designed to be played on iGPUs? Because I am using an A10-7300 and loving every minute of them. I am looking forward to a nice uptick in performance "for free" once DX12 becomes available.
    There's really no reason *not* to play any future games on iGPUs, especially with iGPUs becoming noticeably more powerful (hint: Carrizo). Obviously iGPUs are not for hard-core gamers, but for casual gamers, they start to make a whole lot of sense.
  • hpglow - Friday, February 27, 2015 - link

    I'm no hardcore gamer, but unless you are playing LoL or The Sims I don't see the use of integrated graphics. I have a GeForce 760 and there are many times it frustrates me at QHD. Bandwidth is and will continue to be an issue with iGPUs.
  • kyuu - Friday, February 27, 2015 - link

    Not sure how your 760 having trouble at QHD resolutions is relevant to iGPUs. You should really have a stronger GPU than the 760 for QHD resolutions in modern games, by the by.

    Obviously, you're going to run at lower resolutions with an iGPU. The fact that you can't manage high resolutions in modern games on iGPUs hardly makes them useless.
  • silverblue - Friday, February 27, 2015 - link

    The aforementioned titles are CPU limited.
  • takeship - Friday, February 27, 2015 - link

    This is so wrong. Maybe...maybe...if you're playing at around 800x600. Otherwise, just no.
  • DanNeely - Thursday, February 26, 2015 - link

    In light of Intel's statement, should we expect an update run on a newer version of Star Swarm in the near future?
  • Ryan Smith - Thursday, February 26, 2015 - link

    Currently the Oxide guys are focused on GDC and getting their game up and running. The internal builds are not ready for distribution, and to be honest right now I don't know when Oxide will have a new SS build that is ready to go out.
  • dj_aris - Thursday, February 26, 2015 - link

    Will you just stop benchmarking DX12 using "Star Swarm"? You know what? A $50 Pentium is faster than a $1000 IVB-E in video encoding because of QuickSync. But is it a faster processor? Nope. So, unless you have something real to show us (I mean ONE game, even a heavily patched, DX12-optimised one), then no helpful conclusions can be made, IMHO.
  • hfm - Thursday, February 26, 2015 - link

    Just because it's early days and things are in flux doesn't mean it isn't meaningful and helpful information about the current state of DX12 and things we can look to in the future. It's interesting reading.
  • HammerStrike - Thursday, February 26, 2015 - link

    I agree; if there is one thing the internet hates, it's speculating and forecasting based on incomplete information on how future changes will impact the tech landscape.
  • JarredWalton - Thursday, February 26, 2015 - link

    /sarcasm, right?
  • MrSpadge - Sunday, March 1, 2015 - link

    Maybe not from dj_aris' point of view. Otherwise: definitely.
  • heffeque - Thursday, February 26, 2015 - link

    You start with a "Will you just stop" and you finish with an "IMHO".
    That's pretty incongruous on your part.
    Next time end with an IMSO "In My Superior Opinion" (or "In My Stupid Opinion" which is more likely the case), or start off with something that doesn't sound like a tremendous asshole.
  • dj_aris - Thursday, February 26, 2015 - link

    In that case I'm sorry. In my stupid opinion then, Star Swarm still isn't a game and I still think nobody should care about DX12 performance on it. And again, sorry for being stupid.
  • cwolf78 - Thursday, February 26, 2015 - link

    Intel fanboy feeling threatened? That's how that reads to me anyway. You don't honestly think that DX12 is going to, by some miracle, make gaming a palatable experience on Intel iGPUs, do you? There's just not enough hardware there regardless of how optimized the APIs are that run on it.
  • formulav8 - Thursday, February 26, 2015 - link

    That's what I was thinking as well.
  • takeship - Friday, February 27, 2015 - link

    Same here. Cool to see the lower dispatch times, but none of these chips become playable even with the increases. Maybe the top Kaveri, but that's still a well-sub-30 average, and who knows about the minimums. It's a cool tech exercise, but I wouldn't read too much into it.
  • eanazag - Thursday, February 26, 2015 - link

    He's an angry elf.

    Anyhow, I'm happy to see the results. It confirms some assumptions and adds unexpected information. It confirms that Intel iGPUs weren't CPU limited in the first place and therefore the gains shown would not be as pronounced as AMD's. Secondly, it demonstrates a weakness in Intel's iGPU that I wasn't aware was there; this is particularly important on the 4770R, which is a pricey chip. I don't think it means much today, but possibly in several years a game that has more batch calls will severely underperform on iGPU.

    I don't think Intel's mobile lineup will do any better. The difference we will see is thermal and power improvements while gaming due to less work being necessary. Laptops like the Razer Blade may see the biggest changes. That's exciting.

    My takeaway here is that for an iGPU gaming desktop purchased right now, AMD will be the better option for the next few years of ownership. Unfortunately, AMD seems to be abandoning the desktop in its future chips at the moment. It is a shame because DX12 makes them more relevant. This does bolster the value laptop gaming market in favor of AMD. At $500-600 and lower I will be recommending AMD to light gamers or dabblers. HSA needs to come out swinging in an application or two for AMD to move up the market tiers for recommendations.
  • patrickjp93 - Friday, February 27, 2015 - link

    You have to remember Intel only started doing real 3D graphics designs 5-6 years ago. Everything before that was an implementation of a PowerVR/3DFX design or an in-house design meant just to drive graphics enough for business applications (knocking Nvidia and AMD out of that client space).

    Right now Intel is focused on GPGPU compute anyway, hence the 50% core jump on Skylake and putting the iGPU on Skylake Xeons to aid compute density and offer a synergistic layer to work alongside the KNL Xeon Phi and socketed chips under OpenMP and OpenCL. Any wins in gaming are just icing on the cake for Intel. Right now they're after spilling Nvidia's blood in the server/supercomputer space after Nvidia pulled a bunch of graphics licensing during the Larrabee project. The accelerator world used to be almost universally Teslas. Now not so much, especially since CUDA is a rare skill compared to C++ and most HPC courses take some time to teach OpenMP usage.

    And of course Intel has to fight off HSA as well, though it looks like adoption rates are so slow as to be negligible unless Zen is a perfect competitor. 2016 will be the biggest enterprise chip war since the great slugfest between Intel and IBM back during the days of the 8086.
  • mr_tawan - Friday, February 27, 2015 - link

    What would be a better option for benchmarking/previewing DX12 performance, then?
  • MrSpadge - Friday, February 27, 2015 - link

    Feel free to ignore anything labeled "preliminary" and let the rest of us enjoy peeking at the potential of DX12. The articles are very clear about the fact that this performance won't translate directly into real games.
  • MikeMurphy - Thursday, February 26, 2015 - link

    I wonder if the significant increases for AMD APUs will translate into substantially better Xbox One performance.
  • lioncat55 - Thursday, February 26, 2015 - link

    I think it will depend on the game. With the Xbox One and PS4 they have Mantle. We have seen that there is not a huge gain from Mantle to DX12. I think it will take time for developers to get used to the lower-level coding and get the full power out of the current-gen consoles.
  • dragonsqrrl - Thursday, February 26, 2015 - link

    Mantle was designed specifically for PC. The Xbox One and PS4 don't use Mantle, AMD came out and addressed this topic a long time ago.
  • jabber - Friday, February 27, 2015 - link

    Mantle was designed specifically to force Microsoft to properly write optimised code/procedures for DirectX 12.

    Now that Mantle's job is done, AMD can drop it.
  • Gigaplex - Thursday, February 26, 2015 - link

    What makes you think DirectX 12 will have any noticeable performance benefit on consoles? They already use a low level API. DirectX 12 brings a console style API to the desktop, this isn't a brand new innovation.
  • mkozakewich - Thursday, February 26, 2015 - link

    Are games generally tuned better for AMD and NVidia cards than for Intel? If Intel is reporting different results with the newer engine, it makes it sound like either games have to target specific architectures or it's just really easy for a game to miss optimizations on platforms they don't bother testing.
  • mr_tawan - Friday, February 27, 2015 - link

    My guess is that SS was first created with Mantle in mind, so they worked closely with AMD. Then DX12 showed up and they patched their engine. It's possible that the CPU optimization was left intact and still favors AMD APUs.

    It's just a wild guess with no supporting evidence, btw.
  • azazel1024 - Thursday, February 26, 2015 - link

    If/when Intel publishes drivers, if you could test what happens with Bay Trail I would be VERY interested to see the impact. As you mentioned, things are somewhat less lopsided with Intel's mobile offerings.

    I am not crossing my fingers for 100% improvements, but I'll take anything I can get (and just hope that a few of my favorite games get DX12 ports, as well as seeing Bay Trail WDDM2.0 drivers)
  • kyuu - Friday, February 27, 2015 - link

    Does the Bay Trail iGPU support DX12?
  • azazel1024 - Friday, February 27, 2015 - link

    No idea if it will or not. Until Intel releases the drivers for their Core line-up, those processors don't support DX12 either. Bay Trail DOES support DX11, so I'd suspect that it will support DX12 in some fashion as well.
  • The_Assimilator - Thursday, February 26, 2015 - link

    Intel needs to accept that their integrated graphics, in their current form, will never be good enough, and make a decision: either license IP from someone who knows what they're doing, or form a team to build a truly performant iGPU. Personally I'd be very interested to see an nVIDIA GPU and Intel CPU sharing a die...
  • formulav8 - Thursday, February 26, 2015 - link

    Intel actually already mooches off of NVidia. They wouldn't even be where they are now with IGPs if not for NVidia.
  • CajunArson - Thursday, February 26, 2015 - link

    Do you have any proof of that or are you just a stupid fanboy?
    If Intel really "mooched" off of Nvidia's graphics then we'd have Maxwell IGPs wouldn't we?
    So how come those aren't what Intel ships?
  • D. Lister - Thursday, February 26, 2015 - link

    Calm down, he's probably referring to the cross-licensing deal between Intel and Nvidia where Intel gets access to Nvidia's GPU/HPC tech: http://www.digitaltrends.com/computing/intel-and-n...

    Although the term "mooching" is perhaps a tad dramatic since Nvidia gets paid $1.5 billion for it.
  • formulav8 - Friday, February 27, 2015 - link

    Yes. Thought that was common knowledge. Apparently not for him.
  • mr_tawan - Friday, February 27, 2015 - link

    Imagination, probably :).
  • azazel1024 - Friday, February 27, 2015 - link

    Never be good enough for AAA titles with anything but really low res/low details? Probably. Then again, look at AMD, I don't see playable performance here on any of their APUs either. Sure, a lot closer, but even under DX12, 21FPS is not really playable and that is on low settings. Granted, this IS a stress test.

    Intel iGPUs, especially their latest ones, certainly seem good enough for older games or newer lightweight ones. I mean, crap, on my T100 with just a Bay Trail in it, I can play HL2 at 768p with medium details and never notice slowdown. I can play KSP with details minimized at 720p and it's overall pretty playable. On my i5-3317U-equipped laptop I can play KSP at 768p with high details and no AA, or 2x AA and medium details, and it is pretty playable.

    Civ 5 works great across both plus a number of other titles.

    Am I playing the latest, great at 1080p, 8xAA and max detail? Oh heck no.

    An iGPU from ANYONE is never going to make a hardcore gaming machine; it likely won't even make a good gaming machine. But for lighter-weight stuff, it'll work great if thin, light, and long battery life are the goals.

    As it stands, AMD is still a lot better at the iGPU game IF POWER LEVELS ARE OF NO CONCERN. Look at AMD's light stuff in the 15-25W TDP range: Intel actually has better or on-par iGPU performance with their 15W and 28W TDP parts, because AMD's designs just suck soooooo bad at the low-power thing. Hand in hand with somewhat better iGPU performance in that TDP range, Intel also brings to the table MASSIVELY better CPU performance (from 1.5-4x faster).

    In the standard 35-47W range, AMD generally has better or similar iGPU performance. For desktop, they are a lot better because they have massive iGPUs there compared to Intel (whose desktop iGPUs are effectively no better than their laptop ones, despite 2x+ the power and heat budget).

    Intel does seem to be creeping up on AMD iGPUs even in less power-constrained usage scenarios. Not by much, but they are. Remember back to Sandy Bridge, when AMD had easily a 200%+ lead in performance; it is now down in the 30-60% range in most real games.

    What AMD needs is a new CPU arch as well as a smaller process node to be on. Their GPUs (while I don't like them as much as Nvidia's) are rather good, especially in the iGPU world, but the power budget is screwing them, combined with a frankly terrible CPU architecture.
  • lefty2 - Thursday, February 26, 2015 - link

    I take it Star Swarm is the only DirectX 12 game available at the moment?
  • Ryan Smith - Thursday, February 26, 2015 - link

    Correct.
  • Sushisamurai - Thursday, February 26, 2015 - link

    Errr... Why are the RAM speeds different for the AMD and Intel test beds? The AMD test bed was listed as 2133MHz C9, versus 1866 down-clocked to 1600 for Intel...? Doesn't that add confounding variables to the data set, especially since the data isn't normalized for the RAM differences?
  • blanarahul - Thursday, February 26, 2015 - link

    Lol! Good one!

    On a serious note I believe he was referring to himself.
  • blanarahul - Thursday, February 26, 2015 - link

    The GPU was the bottleneck in this benchmark on Intel chips in both D3D11 and D3D12, so obviously there is less benefit.
  • blanarahul - Thursday, February 26, 2015 - link

    Huh, looks like the reply system is broken. Or maybe my system is broken. *sighs*
  • D. Lister - Thursday, February 26, 2015 - link

    There is surely more to this than meets the eye. Perhaps Intel would do a little better in real games with their cheaper, low-powered mobile processors, especially for games like the later Grid iterations from Codemasters that specifically target Intel iGPUs.

    I mean there has to be something, because I seriously doubt that Intel would be a part of the DX12 development purely out of the goodness of their corporate heart.
  • D. Lister - Thursday, February 26, 2015 - link

    PS: BTW, I don't expect Intel to surpass or even equal the AMD APUs with a mere software fix, but even a 10-20% gain could put them in a much more competitive position in the lower-end segments where AMD has always had a significant performance/dollar advantage.
  • silverblue - Friday, February 27, 2015 - link

    Intel iGPUs would stand to benefit tremendously in games which are CPU bound; Dota 2 is a good example.
  • D. Lister - Friday, February 27, 2015 - link

    DX12 (apparently) takes the CPU out of the equation, so it essentially shifts the bottleneck to the GPU. This is great for the APUs because they're a "weak CPU + strong iGPU" combo. In Intel's case it is the complete opposite: the weak iGPU is already a bottleneck all the way down to their Pentium lineup.

    So, upon further contemplation, it appears that for Intel, the main advantage would be for their various Atoms. It is at that level where they face much stiffer competition in both performance/watt and performance/dollar.
  • behrouz - Friday, February 27, 2015 - link

    In Medium Quality, the A8-7600 is faster than the A10-7800? Typo?
  • Stahn Aileron - Friday, February 27, 2015 - link

    Is it just me or has AnandTech's writing style gotten more wordy/fluffy over the past year or two? No offense to the authors and editors, but for tech-oriented articles and news, the writing feels more like it was created by a Liberal Arts major rather than an experienced tech enthusiast.

    Maybe this is because I'm currently going to school for an Associates in Electrical Engineering: I had to take a couple of writing classes, so I've had a refresher in technical composition writing. I'm starting to find the readability and flow of the article contents lacking in a way. (I find myself re-reading parts more often than I recall previously just to make sure I read it right.)

    Also, is someone on an anti-comma campaign? Readability and flow are a bit off, IMHO, but it gets compounded when commas aren't used where they should/could be. (Feels like I'm playing a game of "Find the Transitions".)
  • D. Lister - Friday, February 27, 2015 - link

    It is probably just you. It seems that the couple of writing classes that you took, as a refresher in technical composition writing at your school for Associates in Electrical Engineering, have ultimately raised your reading standards far beyond mere mortals. :P
  • tipoo - Friday, February 27, 2015 - link

    Has the A10 beaten the GT3e so handily in other DX11 titles? I don't really recall, but I thought the GT3e was up there with AMD's best, often on top. Maybe the lack of Intel performance here is down to drivers or Star Swarm? Just thinking.
  • tipoo - Friday, February 27, 2015 - link

    *Star swarm optimization for it, I mean. Of course it's down to star swarm itself :P
  • tipoo - Friday, February 27, 2015 - link

    Hmm, yeah, I just consulted this, and the 5200 usually comes out on top. I think something in Star Swarm specifically does not like the Iris Pro.

    http://www.notebookcheck.net/Mobile-Graphics-Cards...
  • Crunchy005 - Friday, February 27, 2015 - link

    Wait, that shows the Iris Pro 5200 below the AMD APUs in pretty much everything. Find the A10-7850K and compare: even the A10-7700 is ahead of Intel's Iris Pro. The first game on the list is 20% faster on the A10-7850K than on Iris at low settings, and 10% faster than on the 7700, which still puts the 7700 above Iris Pro. For Dragon Age: Inquisition it doesn't even show Iris Pro, and the Intel HD 4600 is 50% slower.

    Battlefield 4 doesn't take as much power, and the AMD APUs can play it on high settings while Iris Pro keeps up with only a 6% deficit. Iris Pro can keep up in lower-load situations, but the more stress you put on it, the more it seems to drop off from the APUs.
  • Crunchy005 - Friday, February 27, 2015 - link

    Wish we could edit; I meant to paste this in.

    http://www.notebookcheck.net/AMD-Radeon-R7-in-A10-...
  • jabber - Friday, February 27, 2015 - link

    Interesting to see how hardware performance shifts when the coding 'optimisations' shift.
  • Klimax - Saturday, February 28, 2015 - link

    Why is anybody still using the broken benchmark Star Swarm? Just the number of batches is stupidly high. (Yes, we know that there has to be overhead; nobody sane uses one batch for three objects, a waste of resources and time.) But when you do a deep dive into SS you discover how atrociously written that POS is. Dozens of functions with identical parameters called thousands of times per frame. (And in conjunction with tiny batches, it will obviously murder performance.)

    SS is nothing more than an example of how badly you can write an engine.
  • psyph3r - Tuesday, March 3, 2015 - link

    You apparently are not familiar with a stress test. It generates tons of stuff for the computer to do. It is meant to overload even the fastest system. This was explicitly made to demonstrate draw call handling. Don't be so cynical and quick to rationalize something that doesn't fit your normalcy bias.
  • AMX18 - Monday, March 2, 2015 - link

    Good morning Ryan, good preview, but where is part 4?
    AMD Phenom II X6 and FX CPU scaling? DX12 targets the AMD (Jaguar-based) consoles, which use an octa-core design, so it would be good to see how DX12 scales with 6- and 8-core AMD processors, not just APUs (without L3 cache). I think an FX would have better performance than the APUs in DX12, even in this case.
  • AMX18 - Monday, March 2, 2015 - link

    DX12 still uses 2 to 4 cores, but I think that depends on GPU load. Somebody said that in the discrete GPU DX12 preview the GTX 980 was at 80% utilization and the R9 290X at just 50%; calculating from that, when the R9 290X reaches 80% it could have up to 15% higher graphics performance than the GTX 980, but with higher power consumption.
