It has been five years since we last benchmarked the effect of web browsers on battery life, and a lot has changed. Back then, our testing included Opera 9 & 10, Chrome 2, Firefox 3.5.2, Safari 4, and IE8. Just looking at those version numbers is nostalgic. Not only have the browsers gone through many revisions since then, but computer hardware and the Windows operating system are very different as well.

Windows Timers: Computer Architecture & Google Chrome

Before we get to any battery life testing, we need to provide some background on the relevant changes, which requires a deeper understanding of how operating systems and hardware interface with each other. If you've browsed tech news recently, you may have seen coverage of a Google Chrome design decision dating back to 2010. To recap, Google Chrome on Windows requests that the operating system use a 1ms timer in an effort to increase web page rendering speed. Faster is better, but there is a problem with this technique.

For those unfamiliar with OS timers, they form a core component of any operating system. There are two fundamentally different ways to handle timing in a computer system: polling and interrupts. A polling system consists of software and hardware that continuously checks to see if something of interest has happened. For example, if a driver sets up a hardware device (e.g. a sound card) to acquire input and then continuously reads memory to check for new values, this is a polling system. However, if the driver sets up the device to acquire data and then waits for interrupts (hardware notifications) that new data is available in memory, this is an interrupt-driven system.
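
To make the distinction concrete, here is a minimal user-mode sketch of the two approaches using the Win32 API. The `data_ready` flag, the `data_event` handle, and the stand-in producer thread are hypothetical placeholders for whatever a real driver or callback would provide; this is an illustration of the pattern, not production code.

```c
#include <windows.h>
#include <stdio.h>

// Hypothetical state a producer (e.g. a driver callback) would update.
volatile LONG data_ready = 0;   // flipped to 1 when new data lands in memory
HANDLE data_event;              // signaled at the same moment

// Polling: the thread spins on the flag and never sleeps, so it occupies an
// entire CPU core until the data shows up.
void wait_by_polling(void)
{
    while (InterlockedCompareExchange(&data_ready, 0, 0) == 0) {
        /* busy-wait */
    }
}

// Interrupt/event driven: the thread sleeps and the OS only wakes it when the
// event is signaled, leaving the core free to idle in the meantime.
void wait_by_event(void)
{
    WaitForSingleObject(data_event, INFINITE);
}

// Stand-in producer so the sketch runs end to end.
DWORD WINAPI producer(LPVOID arg)
{
    (void)arg;
    Sleep(10);                          // pretend the hardware took 10ms
    InterlockedExchange(&data_ready, 1);
    SetEvent(data_event);
    return 0;
}

int main(void)
{
    data_event = CreateEvent(NULL, FALSE, FALSE, NULL);
    CreateThread(NULL, 0, producer, NULL, 0, NULL);
    wait_by_event();                    // swap in wait_by_polling() to compare
    printf("data arrived\n");
    return 0;
}
```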

In general, interrupt mode is preferred as it saves significant resources and allows other threads to work while the corresponding thread sleeps. The vast majority of API calls a software developer has available do not even provide timing mode selection and simply use interrupt mode. Otherwise, a single application using polling could easily eat up an entire CPU core. There are other factors as well, like preemption, but they are out of scope of this article.

There's a problem with interrupt mode, however: it is slower, for a variety of reasons. First, there is interrupt latency. Compared to a polling loop that checks for the event over and over, an interrupt is always going to lose.

As an example, if I watch someone building a piece of furniture the entire time, I know exactly when they finish and I can use the furniture. On the other hand, if I wait for the builder to tell me, I could do other things in the meantime (work, sleep, play games, etc.). Of course I wouldn't know exactly when the furniture is ready and there would be a delay between when the furniture is complete and when I first begin using it.

General purpose operating systems like Windows are not typically concerned with interrupt latency. This is more important in embedded mechanical controls like those in your car. But there's another reason interrupts are slower than polling: timer coalescing.

To save power, Microsoft uses timer coalescing in Windows. Applications and drivers waiting for an event can specify their timeout in milliseconds, but in most cases the requested time will be rounded to a multiple of Windows' 15.6ms default timebase (prior to Windows 8, more on that later). For example, if I wrote some code that waits for an event for 200 milliseconds, Windows might actually sleep my thread for 202.8ms. When two threads request timers, this technique helps Windows continue to wake up roughly 64 times per second instead of twice as often, at 128 times per second.
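
A quick way to see this rounding for yourself is to time a sleep request against a high-resolution clock. The sketch below is only an illustration: the exact number it prints depends on the machine and on whether some other running application has already raised the timer resolution.

```c
#include <stdio.h>
#include <windows.h>

int main(void)
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&start);
    Sleep(200);                                   // ask Windows for a 200ms sleep
    QueryPerformanceCounter(&end);

    double elapsed_ms = (end.QuadPart - start.QuadPart) * 1000.0 / freq.QuadPart;

    // On the default 15.6ms timebase this typically lands on a multiple of the
    // tick just past 200ms (e.g. ~203ms), rather than 200ms exactly.
    printf("requested 200 ms, actually slept %.1f ms\n", elapsed_ms);
    return 0;
}
```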

The result of timer coalescing is that a web browser or other application could theoretically wait up to 15.6ms even when it asks to be scheduled in 1ms intervals. When push comes to shove, some applications bypass the regular timer mechanism and ask Windows to use a 1ms timer, circumventing these delays. This handicaps CPU and OS power saving features because the longest the hardware and operating system can ever sleep is 1ms. Factoring in thread work time and wake time, the actual sleep duration is likely much less than 1ms.
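The usual way an application makes that request is through the multimedia timer API, which is reportedly how Chrome does it as well. A minimal sketch of the call:

```c
#include <windows.h>
#pragma comment(lib, "winmm.lib")   // timeBeginPeriod/timeEndPeriod live in winmm

int main(void)
{
    // Ask Windows for a 1ms global timer resolution. The effect is system-wide:
    // every process on the machine ticks at 1ms until the request is released.
    timeBeginPeriod(1);

    Sleep(5000);        // placeholder for latency-sensitive work; Sleep(1) now really is ~1ms

    // Always pair the calls; an unbalanced request holds the 1ms timer
    // (and its power cost) until the process exits.
    timeEndPeriod(1);
    return 0;
}
```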

The power penalty of applications requesting a 1ms timer is exacerbated in Windows 8. In Windows 8, Microsoft maintains the same timer coalescing and timer request API calls, but they have implemented a tick-less kernel under the hood. With a tick-less kernel, the operating system doesn’t just try to round sleep times to a 'default timer resolution' of 15.6ms like Windows 7 and prior did, but instead at every wake event, Windows 8 analyzes the upcoming timed events and intelligently schedules its next wake up time. Therefore, sleep times could be either shorter or much longer than the previous default of 15.6ms. Depending on the distribution of wake up times, this could save significant power. Microsoft provided a blog post with some detail and data regarding the move.
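Windows also exposes coalescing directly to well-behaved applications through tolerable-delay timers (SetWaitableTimerEx, available since Windows 7), which give the kernel explicit permission to shift a wakeup so it lines up with others. The sketch below is purely illustrative and is not tied to any particular browser; the 500ms period and 50ms tolerance are arbitrary values chosen for the example.

```c
#include <windows.h>

int main(void)
{
    // Auto-reset waitable timer.
    HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);

    LARGE_INTEGER due;
    due.QuadPart = -5000000LL;   // first expiration in 500ms (100ns units, negative = relative)

    // Period of 500ms with a 50ms tolerable delay: the kernel may push each
    // expiration back by up to 50ms so it can batch the wakeup with others.
    SetWaitableTimerEx(timer, &due, 500, NULL, NULL, NULL, 50);

    for (int i = 0; i < 10; i++)
        WaitForSingleObject(timer, INFINITE);

    CloseHandle(timer);
    return 0;
}
```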

Microsoft did not provide detail on what applications were running when they performed this test. However, we can tell from the data that Chrome was not running, otherwise the only values in the distribution would be at 1ms and below. And that's the crux of the problem.

When Intel launched Haswell, one of the focus points was idle power consumption. The theory is that when you're staring at a static screen reading an article, or the device is 'locked', the system can save significant battery life. Consider the following charts of power use for a desktop system:

[Chart: Idle Power. The idle power of even a desktop Haswell is significantly better than Ivy Bridge.]

[Chart: Load Power, x264 HD 5.0.1 Benchmark. The load power is the same or worse.]

Intel spent many years designing Haswell as an improvement over Ivy Bridge for power consumption, but they are at the mercy of application developers. If an application wakes the device up for work every 1ms, the idle power benefits of Haswell don’t have nearly the impact they could.

The developers of Chrome are not ignorant of this, and they optimized Chrome to drop its 1ms timer request when the system is running on battery power. However, that optimization is currently not functional, and Chrome unfortunately always requests a 1ms timer. Other developers have criticized this timer technique in Chrome, pointing out that other browsers and high performance applications do not follow the same design pattern, and that relying on precise interrupts from a general purpose OS is a fundamentally flawed design. They have likened it to old video games that changed speed with the MHz of the CPU.

By way of comparison, IE and Firefox use a frame rate limiting technique: they stay on the default timer resolution of 15.6ms, and if a website requests a 1ms refresh rate, they check the system clock after waking up and compute how many iterations to perform to achieve a virtual 1ms update rate. For example, if the browser wakes up after 13ms, it performs 13 iterations before sleeping again. The result is that they do more work, less often.
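
A rough sketch of that catch-up pattern (not taken from either browser's source) might look like the following, where `run_one_tick` is a hypothetical stand-in for one 1ms unit of rendering or animation work.

```c
#include <windows.h>

static void run_one_tick(void)
{
    /* hypothetical: one 1ms slice of animation/rendering work */
}

int main(void)
{
    ULONGLONG last = GetTickCount64();

    for (;;) {
        Sleep(15);                              // wake on the coarse default tick (~15.6ms)

        ULONGLONG now = GetTickCount64();
        ULONGLONG elapsed_ms = now - last;      // in practice something like 13-16ms
        last = now;

        // Catch up: run as many 1ms ticks as actually elapsed, in one batch.
        for (ULONGLONG i = 0; i < elapsed_ms; i++)
            run_one_tick();
    }
}
```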

The Test
Comments

  • leminlyme - Tuesday, September 2, 2014 - link

    Well it uses Chromium, and is falsely reported as Chrome by many statistic pulling websites. I know this from experience, I'm always labeled Chrome when I exclusively use Next and 20+ versions of Opera. Opera performs almost exactly like Chrome except that it has different design features, and it was the pioneer of speed dial, something I give it a tonne of credit for as it is my single most used and useful browser feature ever.
  • Samus - Wednesday, August 13, 2014 - link

    The majority of Opera users are in Europe, and that is a statistically insignificant region of the world.

    USA F-Yeah!
  • StevoLincolnite - Wednesday, August 13, 2014 - link

    Europe has over twice the population of the USA, so how does that work?
  • MamiyaOtaru - Wednesday, August 13, 2014 - link

    you don't get jokes do you
  • seapeople - Wednesday, August 13, 2014 - link

    He's probably from Europe. They don't have enough of a population density there for jokes to take hold.
  • Spunjji - Thursday, August 14, 2014 - link

    As a resident of Europe, I resent tha...

    Ohhhh, I see what you did there
  • CharonPDX - Tuesday, August 12, 2014 - link

While that is true, since they chose Safari ON WINDOWS, they should take into account Safari+Windows market share. I'm sure it is minuscule, since Apple hasn't updated Safari for Windows in years. (The latest version is from 2012, and has received *NO SECURITY UPDATES* since.) They are testing Safari 5.1.7; Safari on OS X is up to version 7.0.5. It would be like them testing IE 9. While still technically the "current" version on Windows Vista (much as Safari 5.1.7 is the "current" version on Windows), it's still very much not a competitor.

    Again, if they were going to test Safari, they should have tested it on OS X. Period.

    A proper comparison would be a MacBook Pro, booted natively to OS X 10.9.4 and Windows 8.1, testing each latest browser version. That would show difference between Firefox and Chrome on Windows vs. OS X that would be a baseline for comparing Safari and IE, as well.
  • Morawka - Wednesday, August 13, 2014 - link

    safari on mac has so many jerry rigged optimizations it would be totally unfair to compare it to windows based web browsers which have to be used on a huge variety of hardware, unlike safari on ios and osx.
  • xype - Wednesday, August 13, 2014 - link

    Ugh, do you seriously think IE and Chrome don’t have optimizations? Do you think the IE team be all "Well, it has to run on at least TWO Windows versions, let’s just keep it all unoptimized, yeah?"
  • mtbogre - Monday, August 25, 2014 - link

    They aren't "Jerry Rigged" optimizations any more than Windows using GL to render IE are "Jerry Rigged". The browsers are optimized for the underlying OS, that's a good thing.

    The bigger point here is that Safari on Windows is relevant to almost no-one. They might as well have tested out the AOL browser or thrown Netscape Navigator into the mix.
