Power Consumption

AMD’s official specifications for the Ryzen 9 6900HS list the TDP as 35 W: the same specifications as the 6900HX, but at an optimized TDP. The HS designation means the chip can only be used in AMD-approved, co-designed systems that can get the best out of it, i.e. premium ultraportable devices. That being said, laptop vendors can set the actual final power limit as high as 80 W. The idea is that because they are using a voltage/frequency-optimized bin of the processor, a chassis that can dissipate that much heat can extract more sustained performance from it, which usually translates into a higher all-core frequency.

For our ASUS Zephyrus G14, the standard default power profile, known as ‘Performance’, is meant to conform to AMD’s Power Management Framework, i.e. scale from Energy Saving to Performance as required. In this mode, the system has a sustained 45 W power draw.

Performance: 45W

Loading up a render like POV-Ray, the system spikes the CPU package power to 83 W and 80ºC, before very quickly coming down to 45 W, with the temperature slowly rising to an equilibrium of 87ºC.

With something a bit more memory heavy, such as yCruncher, the same power profile is shown, this time with the temperature at around 81ºC for most of the test, because the workload spends more time on memory accesses than on raw throughput.

For a real-world scenario, Agisoft also spikes up very high initially, before reaching a plateau at 45 W and 90ºC.
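These traces come from logging package power and temperature over the course of each run. For anyone who wants to reproduce that kind of trace on Linux (this is not the tooling used for the charts in this review), the sketch below samples the hwmon sysfs interface; the driver names ("k10temp" for temperature, "zenpower" for package power) and sensor files are assumptions and vary by kernel and platform.

```python
#!/usr/bin/env python3
# Minimal logging sketch: sample CPU temperature and package power twice per
# second and emit CSV, enough to plot a power/temperature-vs-time trace.
# Assumes Linux hwmon drivers are loaded; the driver names ("k10temp" for
# Tctl, "zenpower" for package power) are assumptions for illustration.
import csv
import glob
import sys
import time

def find_hwmon(driver_name):
    """Return the sysfs directory of the hwmon device registered by driver_name."""
    for name_file in glob.glob("/sys/class/hwmon/hwmon*/name"):
        with open(name_file) as f:
            if f.read().strip() == driver_name:
                return name_file.rsplit("/", 1)[0]
    return None

def read_int(path):
    with open(path) as f:
        return int(f.read().strip())

temp_dir = find_hwmon("k10temp")    # temperatures reported in millidegrees C
power_dir = find_hwmon("zenpower")  # power reported in microwatts (assumed driver)

writer = csv.writer(sys.stdout)
writer.writerow(["time_s", "temp_C", "package_W"])
start = time.time()
while True:
    temp_c = read_int(f"{temp_dir}/temp1_input") / 1000 if temp_dir else float("nan")
    pkg_w = read_int(f"{power_dir}/power1_input") / 1e6 if power_dir else float("nan")
    writer.writerow([f"{time.time() - start:.1f}", f"{temp_c:.1f}", f"{pkg_w:.1f}"])
    sys.stdout.flush()
    time.sleep(0.5)
```

Run it in one terminal, start the workload in another, and plot the resulting CSV to see the spike-then-settle behaviour described above.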

Turbo: 65W

The other option on offer for this system is the ‘Turbo’ Mode, which jacks everything up to 65 W sustained.

This means we hit the peak temperature limits quite quickly, and the system ramps down over time to the 65 W average power.

The yCruncher profile is a bit more varied, as the CPU performance scales up while the memory performance stays the same, but we still see temperatures in the mid-90s ºC and power hovering closer to 75 W.

Agisoft’s Turbo profile is temperature limited in this case, and the sustained parts of the test still end up around that 65 W value.

If we were to look at how the power was distributed in each mode:

In Performance mode, we see 16.0 watts when one core is loaded, going down to 5.2 watts per core when all cores are loaded at a frequency of 3775 MHz.

Compare that to the Turbo Mode:

The single-core data is the same, nothing changes there, but we’re now up to 7.2 watts per core when fully loaded, and a much higher frequency of 4050 MHz. This means we’re using 17 watts more power (38% more) for only an extra 275 MHz (a 7% gain).
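For reference, those percentages fall straight out of the per-core, all-core numbers quoted above; a quick sanity check (values copied from the text, nothing measured here):

```python
# Sanity check of the Performance-vs-Turbo scaling using the all-core,
# per-core figures quoted above (pure arithmetic, no measurement).
perf_w_per_core, perf_mhz = 5.2, 3775    # Performance mode, all cores loaded
turbo_w_per_core, turbo_mhz = 7.2, 4050  # Turbo mode, all cores loaded

power_gain = (turbo_w_per_core - perf_w_per_core) / perf_w_per_core * 100
freq_gain = (turbo_mhz - perf_mhz) / perf_mhz * 100

print(f"Power per core: +{power_gain:.0f}%")   # ~+38%
print(f"Frequency:      +{freq_gain:.0f}%")    # ~+7%
print(f"MHz per watt:   Performance {perf_mhz / perf_w_per_core:.0f}, "
      f"Turbo {turbo_mhz / turbo_w_per_core:.0f}")  # efficiency drops in Turbo
```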

Looking at the frequencies in this format, you can see a slight difference in performance, but seemingly not enough to justify the power difference. Then again, I suspect Turbo is really only for when you are fully charged and plugged into mains power anyway.

For the following benchmarks, we’re going to be using both the Performance and Turbo modes, but I also put the CPU into a 35 W power mode. As the one-core and two-core load power is below this, it shouldn’t affect the single-threaded performance much, but it might give us an understanding of how it compares to previous generations.
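For readers who want to pin a mobile Ryzen to a fixed package limit themselves, one possible route on Linux is the open-source RyzenAdj utility. The sketch below is only an illustration of that approach: the flags take milliwatt values per RyzenAdj’s documentation, but whether the firmware honours them on a given laptop is an assumption, and this is not necessarily how the 35 W mode used in this review was configured.

```python
# Illustrative sketch only: cap the package power limits at 35 W using the
# open-source RyzenAdj tool (https://github.com/FlyGoat/RyzenAdj). Values are
# in milliwatts; platform support for these limits is an assumption.
import subprocess

LIMIT_MW = 35_000  # 35 W expressed in milliwatts

subprocess.run(
    [
        "ryzenadj",
        f"--stapm-limit={LIMIT_MW}",  # skin-temperature-aware sustained limit
        f"--fast-limit={LIMIT_MW}",   # short-burst PPT limit
        f"--slow-limit={LIMIT_MW}",   # longer-term PPT limit
    ],
    check=True,
)
```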

Comments

  • DannyH246 - Wednesday, March 2, 2022 - link

    For a laugh.
  • Speedfriend - Wednesday, March 2, 2022 - link

    Seriously, how old are you?
  • abufrejoval - Friday, March 4, 2022 - link

    It's a slow season (for computers) so they have to spread it out some. The other pieces evidently have been prepared already as parting gifts by Ian.
  • vegemeister - Tuesday, March 1, 2022 - link

    >Per-Thread Power/Clock Control: Rather than being per core, each thread can carry requirements

    Does that imply the core can change its voltage and clock on the same timescale as it switches SMT threads? I thought modern SMT was fine-grained enough that instructions from both threads are in flight at once.

    Or is it just for simplifying the OS's cpufreq driver?

    >For example, if a core is idle for a few seconds, would it be better to put in a sleep state?

    A few hundred microseconds, surely?
  • Arnulf - Tuesday, March 1, 2022 - link

    "... following AMD’s cadence of naming its mobile processors after painters"

    As opposed to what, their desktop lineup naming (also named after painters)? Consumer processors are named after painters.
  • syxbit - Tuesday, March 1, 2022 - link

    >>While we haven’t touched battery life or graphics in this article

    That's pretty critical for a laptop review.
    I'm pretty tired of Intel reviews constantly covering their 12th gen superiority without talking about power. It's easy to beat a competitor if you just double the power budget. It's laughable that Intel is pretending they've caught up to Apple.
  • Oxford Guy - Tuesday, March 1, 2022 - link

    I am sure those producing the Steam handheld would like reviewers to not test battery life.
  • ninjaquick - Tuesday, March 1, 2022 - link

    How fast do these chips perform VP9 4K decode? A major use case moving forward will be game streaming, and I'm struggling to find hardware acceleration numbers.
  • dwillmore - Tuesday, March 1, 2022 - link

    Error on page 3: "yCrundher" is a misspelling
  • YukaKun - Tuesday, March 1, 2022 - link

    Writing this from a 5900HX (Asus G17 Strix), having upgraded from an i7 7700HQ that, I have to say, is really efficient for what it is, the AMD laptop is just in another league of its own. Both have a 90 Wh battery, and the Intel wouldn't break the 4h mark even when it was new. This thing gets as much use as my tablets under normal usage. It's really impressive and, for on-the-go stuff, it's so SO nice. Then you need to game and it just works. The 6800M is quite the beast in its own right. Sad this thing doesn't have a MUX switch, but it still works amazingly well.

    This preamble was just to say, I'm surprised the 6000HK isn't a lot better, but I guess it's to be expected. On paper, the 6000 mobile series has a lot of potential with PCIe 4 and a slightly better process. DDR5 is too new IMO to show a definitive advantage on mobile, but maybe next gen will leap. I have DDR4L 3200 with my 5900HX and I put DDR4L 2666 into the i7 7700HQ, so DDR5L needs to be way faster than the crappy 4800 MT/s JEDEC spec we have currently.

    Regards.
