
  • jjj - Monday, August 29, 2016 - link

    The second graph on page 3 should be flipped upside down, since lower latency is better; right now it is misleading if you aren't paying attention.
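
    For what it's worth, that's a one-line fix in most plotting tools; a minimal matplotlib sketch (kit names and latency values made up purely for illustration):

    # Plot a lower-is-better latency chart with the y-axis inverted,
    # so the visually higher point is the better result.
    # Kit names and latencies below are made up for illustration.
    import matplotlib.pyplot as plt

    kits = ["DDR4-2133", "DDR4-2400", "DDR4-2800", "DDR4-3000"]
    latency_ns = [78.1, 74.5, 71.9, 70.2]

    fig, ax = plt.subplots()
    ax.plot(kits, latency_ns, marker="o")
    ax.invert_yaxis()  # lower latency now plots higher
    ax.set_ylabel("Latency (ns, lower is better)")
    plt.show()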
  • snowmyr - Monday, August 29, 2016 - link

    http://imgur.com/a/GxZWh
    You're Welcome
  • kebo - Tuesday, August 30, 2016 - link

    +1 internets
  • Gigaplex - Monday, August 29, 2016 - link

    "Upon booting into the BIOS after installation, I found that the memory was only configured to run at 2667 MHz. Altering the 'Automatic' DRAM timings to 'Manual' and 'user-defining' the various timing parameters as printed on the SODIMM label (16-18-18-43) enabled higher frequency operation."

    I'm not surprised. My G.Skill RAM (DDR3) also didn't perform as advertised in a plug and play fashion, and when I emailed to complain, they acted as if it was normal for manual entry to be required. So much for XMP compliance.
  • Ian Cutress - Monday, August 29, 2016 - link

    The system BIOS automatically loads the SPD profile of the memory kit unless the XMP option is enabled. In most systems, XMP is disabled as the default option because plenty of kits (most of the base ones) don't have an XMP profile at all. Also, the SPD profile is typically left at the base JEDEC settings to ensure full compatibility.

    If you want true plug and play of high speed memory kits, one of two things needs to happen:

    1) XMP is enabled by default (but not all memory will work)
    2) Base SPD profiles on the memory should be the high-speed option (meaning the memory won't work in systems not geared for high performance)

    There are a number of Kingston modules, typically DDR4-2400/2666, that use option (2). Some high-end motherboards have an onboard switch for (1). For everything else, it requires manually adjusting a setting in the BIOS.

    The problem, as always, is maintaining wide compatibility: someone might buy a high-end memory kit but want to run it at base JEDEC specifications because the hardware they are moving the kit into doesn't support the higher frequency.
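
    In pseudocode terms, the decision the firmware makes at POST boils down to something like this (a minimal Python sketch; the profile fields and numbers are illustrative stand-ins, not the actual packed SPD byte layout that JEDEC defines):

    # Sketch of how a BIOS picks DRAM timings at POST.
    # Profile fields are illustrative; real SPD/XMP data is a packed
    # byte layout defined by JEDEC, not a friendly structure like this.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Profile:
        freq_mhz: int    # e.g. 2133 for base JEDEC, 3000 for an XMP kit
        timings: tuple   # (tCL, tRCD, tRP, tRAS)
        voltage: float

    @dataclass
    class Dimm:
        jedec: Profile           # always present, guaranteed compatible
        xmp: Optional[Profile]   # only present on enthusiast kits

    def select_timings(dimm: Dimm, xmp_enabled: bool) -> Profile:
        """Default to the safe JEDEC profile; use XMP only if the user
        opted in AND the module actually carries an XMP profile."""
        if xmp_enabled and dimm.xmp is not None:
            return dimm.xmp
        return dimm.jedec

    # A DDR4-3000 kit still boots at its JEDEC base speed until XMP is on:
    kit = Dimm(jedec=Profile(2133, (15, 15, 15, 36), 1.2),
               xmp=Profile(3000, (16, 18, 18, 43), 1.35))
    print(select_timings(kit, xmp_enabled=False).freq_mhz)  # 2133
    print(select_timings(kit, xmp_enabled=True).freq_mhz)   # 3000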
  • TheinsanegamerN - Monday, August 29, 2016 - link

    Disappointing to see nearly no improvement in gaming benchmarks. You'd figure that a big iGPU would need more bandwidth with newer games.

    Perhaps current iGPUs just are not powerful enough. Maybe AMD will fix that with Zen APUs next year.
  • Ian Cutress - Monday, August 29, 2016 - link

    It's a function of the embedded DRAM. You would expect DRAM speed to affect the iGPU less when eDRAM is present, because it provides a large 50GB/s bidirectional buffer. Without eDRAM, I would expect the differences in gaming results to be larger. We'll have to test to find out - this piece focused primarily on the Skull Canyon environment, which lists high-speed memory support as a benefit.
  • Samus - Monday, August 29, 2016 - link

    I haven't seen a memory frequency roundup like this since Sandy Bridge, which did show a slight benefit (more than Skylake, for sure) moving from DDR3-1066 through 1333, 1600 and so on. Haswell, I'm sure, is a similar story. I had noticeable performance improvements on AM3+ platforms going from 1600 to 2400, especially in regard to the embedded GPU.

    With Skylake, it seems you are just wasting your money running potentially less reliable, more expensive memory out of specification. But I wonder if CPUs without the eDRAM show the same flat scaling?
  • Ian Cutress - Monday, August 29, 2016 - link

    Ivy Bridge: http://www.anandtech.com/show/6372/memory-performa...

    Haswell: http://www.anandtech.com/show/7364/memory-scaling-...
  • Samus - Monday, August 29, 2016 - link

    Oh cool, thanks Ian! Should have figured you guys keep up with it.
  • zodiacfml - Monday, August 29, 2016 - link

    AMD APUs please.
  • tipoo - Monday, August 29, 2016 - link

    Neat test. Interesting that the Iris Pro barely sees any scaling with memory bandwidth, indicating to me that the eDRAM really does cover its bandwidth needs nicely.

    I feel like the SC NUC could have been more with another inch of cooling height, though. The GPU drops back to its base clocks quickly; with so many more EUs than the Iris Pro 5200, it should have brought bigger gains than we saw, since it's not bandwidth bound.
  • mcmillanit - Monday, August 29, 2016 - link

    I have an SC.
    I attached an NVIDIA GTX 750 Ti using the BPlus v4.1 eGPU adapter.
    When version 2 of the Thunderbolt adapters becomes available, I will probably switch to one of those, but for now the M.2-to-PCIe adapter is ugly but works, and works well. It is only using PCIe 1.x x4 (roughly 1 GB/s), but that is sufficient.
    The user experience is vastly improved. The video is smooth, and without the thermal load of the GPU on the processor, the CPU stays much cooler and maintains higher clock speeds when I game or watch YouTube.

    -Michael
  • powerarmour - Monday, August 29, 2016 - link

    How about investigating where all the GPU reviews have gone?
  • Ranger1065 - Tuesday, August 30, 2016 - link

    Anandtech is so BROKEN....
  • Ian Cutress - Tuesday, August 30, 2016 - link

    We've hit every major CPU launch on day one for the last two years, testing every CPU available at launch - over and above any other website's coverage. So wait, broken?
  • JackNSally - Wednesday, August 31, 2016 - link

    CPU=/=GPU
  • zaza - Monday, August 29, 2016 - link

    Interesting results. Can you redo the test using a normal CPU socket? The eDRAM might have played a role in keeping the bandwidth and latency in check, but to be sure we have to test the effect on a normal socket. And is it possible to also test with DDR3, since some Skylake motherboards do support DDR3 memory?
  • Ian Cutress - Monday, August 29, 2016 - link

    You mean what we posted last August? DDR4 vs DDR3L?
    http://www.anandtech.com/show/9483/intel-skylake-r...
  • ganeshts - Monday, August 29, 2016 - link

    (1) Skull Canyon uses BGA, so it is not possible to move to a non-eDRAM processor on this testbed.

    (2) As I have explained in the article, the only platform we could source that supports memory OC AND supports SODIMMs was the Skull Canyon NUC. Our focus was on the large number of OC-ed DDR4 SODIMMs that have come to the market in the last few months.

    (3) Practical comparisons of DDR3 vs. DDR4 with real-life workloads are quite difficult because, in a real system, the boards would be completely different in terms of system configuration; there are too many factors involved for an apples-to-apples comparison to be possible.
  • alacard - Monday, August 29, 2016 - link

    I always find these kinds of articles funny, especially coming from AnandTech. When you test an SSD you test multi-tasking performance (your Destroyer benchmarks), but you don't bother to do so with memory, even though, like an SSD, multi-tasking performance is the only metric that actually matters.

    Just like RAM, take 50 different SSDs and run application startup and game loading tests on them and you will get almost exactly the same results across the board, and THIS IS WHY YOU HAVE A MULTI-TASK BENCHMARK: without seeing how an SSD handles varied workloads, the results are MEANINGLESS, because at the baseline of loading single applications, SSDs are practically all the same.

    It works the same with RAM. A user typically spends more time multi-tasking than running one thing, but you don't even bother testing multi-tasking performance on faster memory. What the hell is going on here? How many more of these useless articles are you going to churn out before you start actually investigating the true differences between RAM speeds and latencies with meaningful benchmarks that will actually show the difference?
  • ganeshts - Monday, August 29, 2016 - link

    Previous memory scaling reviews linked above by Ian show that various SINGLE-application workloads can benefit immensely from memory frequency scaling. Our intention here was to show that this is NOT the case with the Skull Canyon NUC. The numbers also point to the effectiveness of the eDRAM as a cache for all the components of the processor, not just the GPU. In that, I believe the review has provided a definitive answer to comments like these: http://www.anandtech.com/comments/10343/the-intel-... Many people expected better gaming numbers with higher-frequency memory in the Skull Canyon NUC, and I hope this article was able to resolve their doubts and helped them choose the right memory for their system.

    Second, when it comes to multi-tasking: higher-capacity memory will ensure that applications do not get swapped out and are readily available for resumption. In our evaluation, all SODIMMs are 32GB in capacity, so that is not a factor. In addition, DRAM is not like an SSD, where a controller is trying to manage wear levelling and other similar tasks.

    Multi-tasking, when it comes to DRAM, is not a set of 'parallel accesses' that benefit directly from faster memory. Any performance benefit appears when pressure on the caches causes evictions and the new data needs to be fetched in. I would imagine a proper large-sized real-life workload causes a similar 'access trace' on the main memory (a full-length PCMark 8 workload would probably look the same as 7-Zip and mplayer active at the same time). In the Skull Canyon NUC, the 128MB eDRAM also has to be rendered 'ineffective', i.e. the applications need to thrash even that memory before they can show better performance with the faster kits.

    For what it is worth, the Intel Memory Latency Checker tool has 'multi-tasking' tests in the sense that accesses are simulated from all cores simultaneously. We do have those numbers but, since we believe they are not reflective of the type of workloads the Skull Canyon NUC will see, we chose not to publish them. I can upload and link those numbers later tonight.
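
    For reference, those are the runs a script like the following would drive (a minimal sketch, assuming a Linux system with the mlc binary in the working directory; the flag names are from MLC v3.x, and MLC generally wants root so it can allocate huge pages and control the hardware prefetchers):

    # Drive Intel Memory Latency Checker (MLC) and capture the
    # idle-latency, peak-bandwidth, and loaded-latency runs.
    import subprocess

    def run_mlc(flag: str) -> str:
        result = subprocess.run(["./mlc", flag],
                                capture_output=True, text=True, check=True)
        return result.stdout

    for flag in ("--idle_latency",    # single-threaded pointer chase (ns)
                 "--max_bandwidth",   # incl. the '1:1 Reads-Writes' figure
                 "--loaded_latency"): # latency while all cores inject traffic
        print("=== mlc " + flag + " ===")
        print(run_mlc(flag))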
  • PetarNL - Monday, August 29, 2016 - link

    I suspect that the reason you didn't see much benefit from higher DRAM bandwidth is the TDP limit on the iGPU. The situation might be different with the 65W model of the Iris Pro 580.

    The Skull Canyon Iris Pro 580 manages only a 10-15% boost over the iGPU in the i7-5775C/R, despite having 50% more EUs and a generational advantage. I would recommend redoing this test once you get your hands on an i7-6785R based product.
  • Flying Aardvark - Wednesday, August 31, 2016 - link

    Yes, and thanks for following up on that! Literally no one else has. I'm surprised you guys pay that close attention to the comments. It's a shame that Intel didn't put a little more TLC into Skull Canyon's R&D phase to ensure every ounce of performance could be pulled out of this chip, but limited to a mere 45 watts for the CPU/iGPU combined, I suppose this was a likely outcome.

    There are just so many possible bottlenecks in a tiny system with a low heat/power budget. Intel may have tightened the noose around this one just a bit much; a few design tweaks and it could really soar. Looking forward to the Kaby Lake or Cannon Lake update.
  • Senti - Monday, August 29, 2016 - link

    Just a note about how much memory has progressed, including the worth of "premium" kits.

    The result of the same Intel Memory Latency Checker on my quite ancient i7-930, with no-name DDR3 memory overclocked to 1686 MHz, no heatspreaders, and even mixed chips (one set made by Samsung, the other by Hyundai):

    Latency: 43.8 ns
    1:1 Reads-Writes BW: 29805.4 MB/s

    Yes, it's triple-channel, but that doesn't help latency at all, and even the bandwidth difference wasn't great from what I remember of testing it in dual-channel mode.
  • evilspoons - Monday, August 29, 2016 - link

    At a cursory glance at the benchmarks (without doing statistical analysis on them, I mean) I'd say they tie so often it's irrelevant, except that the Patriot 2800 kit occasionally falls behind more than any of the other kits do. On the final page, I noticed it has the worst as-tested tRFC, tied-for-worst tRAS and tCL, and middle-of-the-road everything else. Nothing to see here, move along!
  • mr. president - Tuesday, August 30, 2016 - link

    Any word on the performance cliff going from 1280x1024 to 1680x1050? Is that the eDRAM in action, or just different detail settings?

    1680x1050 is only around 35% more pixels (1,764,000 vs. 1,310,720). It's strange to see such non-linear scaling.
  • ganeshts - Tuesday, August 30, 2016 - link

    They have different detail settings - usually, the higher the resolution, the higher the detail settings used.

    Similar trends have been observed in other gaming PCs as well.
  • FlyingAarvark - Wednesday, August 31, 2016 - link

    I'd have to disagree that the 128MB L4 is the reason the RAM doesn't matter. It's TDP-starved (45W) foremost; once that's cleared up, the RAM will come into play.

    While it's definitely a less-than-ideal setup, being so power starved, I'm convinced to buy a Skull Canyon and then wait for the 10nm update - things are getting interesting in nukeland.
    Just a great little machine, especially since I've backed off FPS/graphically intensive gaming over the last several years. You can only play those for so many decades when you started with Wolfenstein 3D. For League of Legends and probably some Hearthstone, this hits the spot.

    My nuke will be getting the cheapest RAM option that Crucial sells. :) I really hope Intel invests in these heavily; I'm convinced they're the future of PCs, and I'd like to see AMD get into the NUC scene.
  • Dansolo - Friday, September 2, 2016 - link

    The CPU benchmark for Photoscan Stage 2 seems a little iffy... 2800MHz RAM doing a fair bit better than the 3067MHz kit with better timings? I don't believe that - gotta be something wrong with the test.
  • oguignant - Wednesday, September 7, 2016 - link

    Between the HyperX Impact DDR4-2400 and the Corsair Vengeance DDR4-2666, which is better?
    No matter the price.
