In the meantime, Linux uses a dynamic tick by default and will soon be fully tickless [1]. In other words, there will be no fixed timer interrupt: the OS will calculate when the next wake-up needs to happen and sleep until that time (or until an interrupt arrives).
At the same time, PowerTOP [2] will nicely show you which programs or drivers are responsible for waking up the computer and estimate how much power each program is consuming.
Actually, while doing this is a good idea for individuals diagnosing their system, there does seem to be a systemic issue. I hope I didn't come across as flippant.
I run TLP, turn off my ethernet and wifi when not using them, and otherwise try to save power. Windows users do none of these things, yet they still seem to get okay battery life.
It does raise the question... if the problem isn't in the kernel but in userland, what is the issue, and can something be done?
I will gladly give money to someone who will work on figuring this out.
This is an interesting post. jQuery was fixed to use 13ms as the minimum animation interval some time ago. This seems like a legit Chrome bug to file, as the interval should be more deterministic. Chrome shouldn't take a 1ms tick unless it really needs it.
I wonder how much javascript code uses setTimeout(x,0) to push code to the end of the run loop.
Initially, Chrome attempted to allow setTimeout()s under the 15ms or so that was standard across browsers, which led to it winning some benchmarks and to some accusations of foul play. The intent was pure -- why artificially clamp JavaScript timers to a Windows quirk? -- but eventually Chrome was changed to make timers behave as they do in other browsers. It appears that the spec now says 4ms is the minimum.
I remember the Chrome timer code of years ago was careful to only adjust the interval when needed. From reading other bugs it looks like today's behavior is an accidental regression and will likely be fixed (until the next time it regresses).
> I remember the Chrome timer code of years ago was careful to only adjust the interval when needed. From reading other bugs it looks like today's behavior is an accidental regression and will likely be fixed (until the next time it regresses).
Indeed, although it seems the current behavior has been outstanding for some time:
The original justification for raising the timer resolution is an interesting read:
> At one point during our development, we were about to give up on using the high resolution timers, because they just seemed too scary. But then we discovered something. Using WinDbg to monitor Chrome, we discovered that every major multi-media browser plugin was already using this API. And this included Flash, Windows Media Player, and even QuickTime. Once we discovered this, we stopped worrying about Chrome's use of the API. After all – what percentage of the time is Flash open when your browser is open? I don't have an exact number, but it's a lot. And since this API affects the system globally, most browsers are already running in this mode. [1]
> It appears that the spec now says 4ms is the minimum.
I was playing with Windows timers a little while ago and I noticed that with IE11 open the timer interval sat at 15.6ms, occasionally changing to 4ms while the page was doing things. That was the first time I'd heard of a program calling timeBeginPeriod without setting it to 1ms. I hope it catches on.
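For illustration, a scoped request like the one IE11 appears to be making would look roughly like the sketch below. This is mine, not anything from the article: DoLatencySensitiveWork is a made-up placeholder, and timeBeginPeriod/timeEndPeriod come from winmm.

    // Sketch only: raise the global timer period to ~4 ms while some
    // latency-sensitive work runs, then put it back.
    #include <windows.h>
    #pragma comment(lib, "winmm.lib")            // timeBeginPeriod/timeEndPeriod

    void DoLatencySensitiveWork() {
        Sleep(50);                               // stand-in for real work
    }

    void RunWithFinerTimer() {
        if (timeBeginPeriod(4) == TIMERR_NOERROR) {  // request a 4 ms period
            DoLatencySensitiveWork();
            timeEndPeriod(4);                        // must match the begin call
        } else {
            DoLatencySensitiveWork();                // request refused; run anyway
        }
    }

The nice part of scoping it like this is that the system drops back to its default period as soon as the last outstanding request is released, instead of staying pinned for the life of the process.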
Any timer is constrained to the resolution set by the system timer. Getting finer timing than that would require something akin to a spin loop, which defeats any power gain you'd get from keeping the system timer coarse in the first place.
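A quick way to see that constraint (a sketch of mine, assuming a stock Windows box, not code from the article): time Sleep(1) with QueryPerformanceCounter before and after requesting a 1 ms period. Under the default ~15.6 ms period the sleep typically overshoots well past 1 ms; at a 1 ms period it lands much closer to what was asked for.

    // Sketch: Sleep(1) is quantized to whatever the global timer period is.
    #include <windows.h>
    #include <cstdio>
    #pragma comment(lib, "winmm.lib")

    static double TimedSleepOneMs() {
        LARGE_INTEGER freq, start, end;
        QueryPerformanceFrequency(&freq);    // ticks per second
        QueryPerformanceCounter(&start);
        Sleep(1);                            // actual wait depends on the timer period
        QueryPerformanceCounter(&end);
        return 1000.0 * (end.QuadPart - start.QuadPart) / freq.QuadPart;
    }

    int main() {
        printf("default period: Sleep(1) took %.2f ms\n", TimedSleepOneMs());
        timeBeginPeriod(1);                  // raise the global resolution to 1 ms
        printf("1 ms period:    Sleep(1) took %.2f ms\n", TimedSleepOneMs());
        timeEndPeriod(1);
        return 0;
    }

If something else on the machine (Chrome, say) has already raised the period, both numbers come out small, which is exactly the global effect the article is complaining about.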
setTimeout(,0) is a special case. It means "make this function run after the current call stack clears". That requires neither a timer nor a spin loop.
Right, but that precedent was only set because incorrect implementations existed in the first place. Using setTimeout(,0) recursively in lieu of (a then-unavailable) requestAnimationFrame or a non-zero timeout period is, in my mind, equivalent to while(true){}/infinite tail recursion.
Interesting. I had clockres on my machine but never bothered to learn what it does. I've used that in code where I wanted a better timer, but ended up using QueryPerformanceCounter/Frequency and rolling my own timer class, which can be a bigger pain than just using the timer.
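Roughly, the kind of QueryPerformanceCounter/Frequency wrapper being described looks like the sketch below; the names are made up and it's bare-bones, not anyone's actual class.

    // Sketch of a QPC-based stopwatch. QPC measures elapsed time at high
    // resolution without touching the global timer period, which is a
    // different job from timeBeginPeriod.
    #include <windows.h>

    class QpcTimer {
    public:
        QpcTimer() {
            QueryPerformanceFrequency(&freq_);   // ticks per second, fixed at boot
            Reset();
        }
        void Reset() { QueryPerformanceCounter(&start_); }
        double ElapsedSeconds() const {
            LARGE_INTEGER now;
            QueryPerformanceCounter(&now);
            return static_cast<double>(now.QuadPart - start_.QuadPart) / freq_.QuadPart;
        }
    private:
        LARGE_INTEGER freq_;
        LARGE_INTEGER start_;
    };

Note that this only measures intervals; it won't make Sleep or WaitForSingleObject wake any sooner, which is where timeBeginPeriod comes in.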
On my machine I got similar settings and found Chrome to be the sole offender, which in many ways makes it the worst kind of offender. Firefox and IE were clean, so Google is the outlier. Given that I always have Chrome open somewhere, while SQL Server or devenv is not always running, I suppose that's suboptimal; I wonder if they will change it.
Macs have a similar issue: unexpected programs activate the dedicated GPU. Skype and Twitter used to do this. Power users run special utilities to force the dedicated GPU off, but the normal user has no idea why his Mac's battery won't last.
On my PC the programs raising the resolution are gtalk, chrome and skype. I run Visual Studio and SQL Server, but they don't show up in powercfg.
quartz.dll is from DirectShow. A multimedia component is expected to require more resolution. The fault is in the program calling into DirectShow.
Only if you have a discrete GPU, of course. The power management story on the 15" MBP is a bit of a shitshow; I expect that the Haswell updates will go integrated-only.
"Another common culprit on my machine is sqlservr.exe. I think this was installed by Visual Studio but I’m not sure. I’m not sure if it is being used or not."
Is this attitude still prevalent in the windows community? I thought things had improved on that front.
It's worth pointing out that "the highest frequency wins" is not an example of "tragedy of the commons."
Um, what? Of course there's a windows community, in every sense of the word. There are magazines, conferences, and forums for windows programming and windows programmers. There are trends, fads, and innovations.
... and there are practices that are commonly found in windows programming that are beyond the pale in other environments (loading your app into memory every time the machine starts, installing malware during setup, etc.).
I realize the phrase was a little awkward. As I was writing the comment I struggled with finding the appropriate phrase that would not sound diminutive. I thought awkward was preferable to diminutive.
It's fairly common among a lot of developers, sadly. Linux/OSS folks are less prone to it, but I've seen kitchen sink setups there as well.
An old colleague used to joke: "Developers should never have fast machines". Point being, they'll appreciate every spare CPU cycle and byte of memory available.
I don't think "5 small atomic bombs per year" is a particularly relatable example. It's about the average electricity consumption of 7500 US homes - that seems more concrete to me. If you can save the equivalent of switching off 7500 homes by fixing a bug in your software, that's a pretty big impact for one person to make.
I'm not sure whether you're agreeing with me or not, and in a way that's why people shouldn't just use large numbers and then act like they've said something meaningful.
Ratios matter; large numbers without a relevant basis for comparison, on the other hand, are just misleading. There are roughly 116 million households in the US, and saving the energy of 7,500 of them is not a big change.
You're solving roughly 1/15,466th of the problem. And that's assuming that all the savings could even be applied to the US, which they most certainly couldn't.
That's not a big impact. Chances are no one, even if they were looking, could notice the figurative needle move on a change that small across all the power stations serving the aggregated demand.
[1] https://lwn.net/Articles/549580/
[2] https://01.org/powertop/