NVIDIA Details Dynamic Boost Tech & Advanced Optimus (G-Sync & Optimus At Last)
by Ryan Smith on April 2, 2020 7:30 AM EST

Alongside this morning’s launch of their new laptop SKUs, NVIDIA is also rolling out a couple of new technologies aimed at high-end laptops. Being placed under their Max-Q banner, the company is unveiling new features to better manage laptop TDP allocations, and for the first time, the ability to have G-Sync in an Optimus-enabled laptop. These new technologies are separate from the new hardware SKUs being launched today – they can technically be built into any future GeForce laptop – so I wanted to touch on them separately from the hardware itself.
NVIDIA Dynamic Boost: Dynamic TDP Between GPU & CPU, For Intel & AMD
First off, we have what NVIDIA is calling their Dynamic Boost technology. This is essentially NVIDIA’s counterpart to AMD’s SmartShift technology, which was introduced in the recently-launched Ryzen Mobile 4000 APUs.
Like SmartShift, NVIDIA’s Dynamic Boost is designed to take advantage of the fact that in many laptop designs, the GPU and the CPU share a common thermal budget, typically because they are both cooled via the same set of heatpipes. In practice, this is usually done in order to allow OEMs to build relatively thin and light systems, where the cooling capacity of the system is more than the TDP of either the CPU or GPU alone, but less than the total TDP of the two processors together. This allows OEMs to design around different average, peak, and sustained workloads, offering plenty of headroom for peak performance while sacrificing some sustained performance in the name of lighter laptops.
In fact, a lot of these designs are potentially leaving some performance on the table, as far as peak performance is concerned. Because the thermal budget of the laptop is usually greater than that of any single processor, standard processor TDPs mean each chip is holding itself back more than it needs to. So, as the thinking goes, if the two processors are sharing a common cooling system, why not raise their power limits and then have them split up the thermal budget of the system in an intelligent manner?
And this is exactly what Dynamic Boost and similar technologies set out to do. By dynamically allocating power between the CPU and the GPU, a system should be able to eke out a bit more performance by making better-informed choices about where to allocate power. This could include, for example, allowing a CPU to go to a full 135W for a short period of time because the GPU is known to be idle, or borrowing some of the thermal budget from a lightly-loaded CPU and instead spending it on the GPU. Essentially it’s the next step in min-maxing the performance of laptops with discrete CPUs and GPUs by offering ever finer-grained control over how power and thermal budgets are allocated between the two processors.
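To illustrate the general idea (and only the idea; NVIDIA hasn't published its actual algorithm), here's a minimal sketch of how a shared thermal budget might be divided between the two processors based on their relative loads. Every wattage figure and function name below is hypothetical.

```python
# Minimal sketch of shared-budget power allocation between a CPU and a GPU.
# Hypothetical figures; not NVIDIA's actual Dynamic Boost algorithm.

TOTAL_BUDGET_W = 115   # what the shared heatpipes can actually dissipate
CPU_MIN_W, CPU_MAX_W = 15, 65
GPU_MIN_W, GPU_MAX_W = 35, 90

def allocate_power(cpu_load: float, gpu_load: float) -> tuple[float, float]:
    """Split the shared thermal budget in proportion to each processor's load.

    cpu_load / gpu_load are utilization estimates in [0, 1], e.g. derived
    from telemetry sampled over the last frame.
    """
    total_load = cpu_load + gpu_load or 1e-6          # avoid divide-by-zero
    cpu_share = TOTAL_BUDGET_W * (cpu_load / total_load)
    gpu_share = TOTAL_BUDGET_W * (gpu_load / total_load)

    # Clamp to each processor's safe operating range...
    cpu_w = min(max(cpu_share, CPU_MIN_W), CPU_MAX_W)
    gpu_w = min(max(gpu_share, GPU_MIN_W), GPU_MAX_W)

    # ...and hand any leftover headroom to the busier processor.
    leftover = TOTAL_BUDGET_W - (cpu_w + gpu_w)
    if leftover > 0:
        if gpu_load >= cpu_load:
            gpu_w = min(gpu_w + leftover, GPU_MAX_W)
        else:
            cpu_w = min(cpu_w + leftover, CPU_MAX_W)
    return cpu_w, gpu_w

# A GPU-bound game: most of the budget flows to the GPU.
print(allocate_power(cpu_load=0.3, gpu_load=0.95))
```

In a real laptop the load figures would come from the platform's telemetry, and the limits would reflect the per-design tuning discussed below, but the basic give-and-take is the same.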
Overall this isn’t a new concept, but until recently it’s only been used within a single CPU/APU to balance power between the CPU and GPU blocks of that one chip. Extending it over multiple chips is a bit more work, and while beneficial, it’s really only become worth the effort as Moore’s Law has been slowing down.
NVIDIA’s take, meanwhile, has the notable distinction of being the first generic solution of this sort, one that works across multiple platforms. Whereas AMD’s SmartShift is designed to work with the combination of AMD APUs and GPUs – fully leveraging the AMD ecosystem and their platform control fabric – NVIDIA, as a common GPU supplier to both platforms, needed to develop a solution that works with both. So Dynamic Boost can be used with both Intel Core processors and AMD Ryzen processors in a relatively generic manner, allowing OEMs to apply the technology regardless of whose CPU they use.
As for the performance benefits, while NVIDIA isn’t promising anything major, Dynamic Boost will nonetheless let them wring a bit more performance out of their GPUs. Like AMD, the numbers being thrown around are generally less than 10%, reflecting the fact that most games already tax both the CPU and the GPU significantly, but that’s four to eight percent more performance that would otherwise have been left on the table. Ultimately the big win here comes from taking advantage of the relative difference in voltage-frequency curves between the processors, as the highest speed bins are always the most expensive from a power perspective.
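To see why shifting a handful of watts between processors can be worth a few percent, consider a toy model in which power rises roughly with the cube of clock speed (power scales with frequency times voltage squared, and voltage climbs along with frequency). The numbers here are purely illustrative, not measured laptop data.

```python
# Toy voltage-frequency model: power rises roughly with the cube of clock
# speed, so the last few speed bins are disproportionately expensive.
# Illustrative numbers only; not measured laptop data.

def clock_ratio_for_power(base_power_w: float, new_power_w: float) -> float:
    """Clock multiplier achievable if power scales as clock^3."""
    return (new_power_w / base_power_w) ** (1 / 3)

gpu_base_w = 80.0
borrowed_w = 10.0   # budget freed up by a lightly-loaded CPU (hypothetical)

gain = clock_ratio_for_power(gpu_base_w, gpu_base_w + borrowed_w) - 1
print(f"GPU clock gain from +{borrowed_w:.0f} W: {gain * 100:.1f}%")
# Prints roughly 4%, in the same ballpark as the single-digit gains being quoted.
```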
Past that, the hardware implementation details for the technology are pretty straightforward, but they do require vendor involvement – so this can’t be added to existing (in the field) laptops. Besides the immediate requirement of having a shared thermal system, laptops need to have the sensors/telemetry in place to keep a close eye on the state of the two processors and the thermal system itself. As well, OEMs need to build their laptops with greater-than-normal capacity VRM setups so that the processors can pull the extra power needed to clock higher (particularly the GPU). And even then, there is some system-level tuning required to account for the specific performance and cooling details of a given laptop design.
The upshot, at least, is that all of this OEM firmware tuning is a one-time affair, and Dynamic Boost is all but transparent at a software level. So no per-game profiling is necessary, and NVIDIA’s drivers can operate on a generic basis, adjusting power/thermal limits based on what they’re reading from the workload at hand. According to NVIDIA, they are making their adjustments on a per-frame basis, so Dynamic Boost should only need a short period of time to adjust to any rapid shifts in the workload, depending on just how aggressive NVIDIA’s algorithms are.
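As a rough sketch of what a per-frame adjustment loop might look like (to be clear, this is not NVIDIA's driver code), the telemetry reads and the power-limit write below are placeholders for whatever platform interfaces a real implementation would use, and it reuses the allocate_power() sketch from earlier.

```python
import random
import time

# Sketch of a per-frame power rebalancing loop. The telemetry reads and the
# power-limit write are placeholders, not real driver interfaces.

def read_cpu_load() -> float:
    return random.uniform(0.2, 0.6)        # placeholder CPU telemetry

def read_gpu_load() -> float:
    return random.uniform(0.7, 1.0)        # placeholder GPU telemetry

def apply_power_limits(cpu_w: float, gpu_w: float) -> None:
    print(f"CPU limit {cpu_w:.0f} W, GPU limit {gpu_w:.0f} W")

def rebalance(frames: int, frame_time_s: float = 1 / 60) -> None:
    for _ in range(frames):
        # allocate_power() is the shared-budget sketch from earlier.
        cpu_w, gpu_w = allocate_power(read_cpu_load(), read_gpu_load())
        apply_power_limits(cpu_w, gpu_w)
        time.sleep(frame_time_s)            # re-evaluate roughly once per frame

rebalance(frames=3)
```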
Finally, it should be noted that Dynamic Boost is an optional feature, so it’s ultimately up to OEMs to decide whether they want to implement it or not. For marketing reasons, NVIDIA is bundling it under their Max-Q family of features, but just because a laptop is Max-Q doesn’t mean it can do Dynamic Boost. The flip side to that, however, is that if an OEM is going for Max-Q certification anyhow and already has a shared thermal solution, then there’s everything to gain from baking in support for the technology in future laptop designs.
Advanced Optimus: G-Sync and Optimus Together At Last
Along with Dynamic Boost, NVIDIA’s other new laptop technology for today is what the company is calling Advanced Optimus. Exactly what it says on the tin, Advanced Optimus is a further enhancement of NVIDIA’s Optimus technology, finally allowing for G-Sync to be used with the power-saving technology.
As a quick refresher, Optimus is NVIDIA’s graphics switching technology. Originally introduced a decade ago, Optimus allows for both an iGPU and a dGPU to be used together in a single system in a very efficient manner. This is accomplished by primarily relying on the iGPU, and then firing up the dGPU only when needed for intense workloads. The process as a whole is more involved than relying exclusively on either GPU – frames need to be passed from the dGPU to the iGPU whenever the dGPU is in use – but the power savings relative to an always-on dGPU are significant. As a result, almost every GeForce-equipped laptop shipped today uses Optimus, with the exception of a handful of high-end machines that are aiming to be portable desktops.
And while Optimus is all well and good, the fact that it essentially slaves the dGPU to the iGPU has always come with a rather specific drawback: it means the display controller used is always the iGPU’s display controller, and thus the laptop can only do whatever the iGPU’s display controller is capable of. Back in 2010 this wasn’t a meaningful problem, but the introduction of variable refresh rate technology changed this. Ignoring for the moment the proprietary nature of some of NVIDIA’s G-Sync implementations, even today Intel only supports variable refresh on its Gen11 GPUs, which are only found on its ultra-low voltage Ice Lake processors. As a result, it’s not possible to both use the iGPU of an H-class system and get variable refresh out of it at the same time.
The net impact has been that variable refresh laptops have been few and far between. While laptop-class variable refresh displays are available, systems implementing them have had to go old school, either using a pure dGPU setup and shutting off the iGPU entirely, or using a multiplexer (mux) to switch the display output between the iGPU and dGPU. The mux solution certainly works, but in practice it has required a (high-friction) reboot when switching modes. Or at least, it did until now.
For Advanced Optimus, NVIDIA has finally figured out how to have their cake and eat it too. In short, the company has brought back the idea of a dynamic mux, allowing laptops to switch between the iGPU and the dGPU on the fly. As a result, the iGPU is no longer a limiting factor when it comes to variable refresh; if a system wants to switch to a variable refresh mode, it can just fire up the dGPU and then switch over to using that. All without a reboot.
By and large, Advanced Optimus does away with the most novel aspect of Optimus – passing frames from the dGPU to the iGPU – and instead becomes a much smarter muxing implementation. Which means that along with making variable refresh technology more accessible, it also gets rid of the Optimus latency penalty. Since buffers no longer need to be passed to the iGPU, that roughly one-frame delay is eliminated.
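NVIDIA hasn't detailed the mechanism, but conceptually a dynamic mux boils down to a small piece of switching logic along these lines. The class and method names here are hypothetical, and the actual panel handoff would of course happen in hardware and the display driver rather than in a few lines of Python.

```python
from enum import Enum

# Conceptual sketch of a dynamic mux: hypothetical interfaces, not NVIDIA's
# implementation. The "switch" here stands in for a hardware panel handoff.

class DisplaySource(Enum):
    IGPU = "integrated"
    DGPU = "discrete"

class DynamicMux:
    def __init__(self) -> None:
        self.source = DisplaySource.IGPU          # default to the low-power path

    def switch_to(self, target: DisplaySource) -> None:
        if target is self.source:
            return                                # already on the right GPU
        # In hardware, this is where the panel is handed from one display
        # controller to the other without a reboot.
        print(f"Handing panel from {self.source.value} to {target.value} GPU")
        self.source = target

def on_app_event(mux: DynamicMux, needs_gsync: bool) -> None:
    # A G-Sync title gets the dGPU's display controller; everything else
    # stays on the iGPU to save power, with no frame-copy latency either way.
    mux.switch_to(DisplaySource.DGPU if needs_gsync else DisplaySource.IGPU)

mux = DynamicMux()
on_app_event(mux, needs_gsync=True)    # game launches: panel moves to the dGPU
on_app_event(mux, needs_gsync=False)   # game exits: panel returns to the iGPU
```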
Unfortunately, as far as technical details go, NVIDIA is holding this one quite close to their chest for the time being. NVIDIA has tried this idea once before in 2008 with their "gen2" switchable graphics, but in practice it wasn't a frictionless experience like it needed to be. Once Optimus came along, even dynamic muxing was quickly cast aside in favor of Optimus for most laptops, and manual muxing for the rest.
So at this point it’s unclear just how NVIDIA has solved the pitfalls of previous dynamic mux solutions – passing the entire Windows desktop to another GPU in real time is (still) no easy task – but the company is adamant that it is a truly seamless experience. Truthfully, I haven’t fully ruled out NVIDIA doing something incredibly crazy like baking a display controller into their mux – in essence having an external controller composite inputs from the two GPUs – but this is admittedly unlikely. More likely is that the company has borrowed some tips and tricks from their eGFX technology, which has to solve a similar problem with the added wrinkle of a dGPU that can be removed at any time. Nonetheless, it will be interesting to crack open an Advanced Optimus laptop and see what makes it tick.
As for the software side of matters, Advanced Optimus behaves more or less like regular Optimus. That means checking applications against a list, and then switching accordingly. Crucially, windowed versus full screen mode doesn’t matter; Advanced Optimus works in either case. So while this presumably comes with the same tradeoffs between windowed and exclusive fullscreen mode that we see in regular G-Sync operation, it nonetheless means that every option is on the table, just like a regular (non-Optimus) G-Sync setup.
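Conceptually, then, the per-application behavior is just a list lookup that drives the mux decision. A minimal sketch, with a hypothetical allowlist:

```python
# Minimal sketch of an Optimus-style application check driving the mux choice.
# The allowlist contents and process names are hypothetical.

GSYNC_ALLOWLIST = {"game_a.exe", "game_b.exe"}

def display_source_for(process_name: str) -> str:
    # Listed titles get the dGPU's display controller (and thus G-Sync);
    # everything else stays on the iGPU, windowed or fullscreen alike.
    if process_name.lower() in GSYNC_ALLOWLIST:
        return "dGPU (G-Sync)"
    return "iGPU"

print(display_source_for("game_a.exe"))   # -> dGPU (G-Sync)
print(display_source_for("notepad.exe"))  # -> iGPU
```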
Ultimately, NVIDIA’s goal is to get variable refresh/G-Sync support into a lot more laptops than is the case today, as Advanced Optimus removes both the friction and the battery life penalties that have been encountered to date. To be sure, this is still an optional feature for OEMs, as it requires them to take the care to integrate both a variable refresh display as well as the necessary mux. But nevertheless, it opens the door to putting G-Sync on thin & light laptops and other notebooks where a vendor would never be willing to accept a traditional mux solution. I do wonder if perhaps this is going to be a short-term solution – what happens when Intel launches the variable refresh-capable Tiger Lake-H in 2021? – but for now, NVIDIA is the only game in town for doing variable refresh on a laptop without making other compromises.
Finally, the first vendor out of the door will be Lenovo, which is NVIDIA’s launch partner for the technology. The two companies have worked together to add the technology to Lenovo’s new Legion 5i and 7i laptops, which were also announced today and start with an RTX 2060 for $999. Unfortunately, Lenovo has not announced a release date for those laptops, so it sounds like we’ll be waiting just a bit longer before the first Advanced Optimus laptop hits the market.
Comments
FXi - Sunday, April 5, 2020 - link
Currently Gsync on a laptop means you cannot use an eGPU TB3 solution to drive the internal panel. Which you prefer, support of an eGPU or better FPS syncing with the internal GPU, is a buyer decision - no one answer fits all buyers. So I wonder, does this new solution overcome that? Or just behave as before? eGPUs are getting quite a lot of attention from those buyers who are dual use based, portability for day to day and an eGPU for use back at a docking area. Clearly Nvidia isn't really eager to let a lot of details out, but this is one thing they should make clear just to ensure users are buying what they intend. As above, some will be more reliant on a robust eGPU solution and some will not.
nittikorncp - Monday, May 11, 2020 - link
Does Nvidia Optimus completely shut off the dGPU or does it just put it in some sort of idle state when not in use? Would an RTX 2060 still pull more power than a GTX 1650 if unused at all with Optimus on?