What are the implications of setting the CPU governor to "performance"?

Solution 1

For the record, the (up-to-date) cpufreq documentation is here.

What does "statically" mean? To me, it contrasts with "dynamic", and implies frequency would never change, i.e. with powersave the CPU frequency would always be a single value, equal to scaling_min_freq.

You're right. Back in the old cpufreq driver days, there were two kinds of governors: dynamic ones and static ones. The difference was that dynamic governors (ondemand and conservative) could switch between CPU frequencies based on CPU utilization whereas static governors (performance and powersave) would never change the CPU frequency.
However, as you have noticed, with the new driver this is clearly not the case.

This is because the new driver, called intel_pstate, operates differently. P-states, also known as operating performance points, involve active power management and a "race to idle" strategy, which means scaling both voltage and frequency. For more details see the official documentation.
As to your actual question,

What are the implications of setting the CPU governor to "performance"?

it's also answered in the same document. As with all Skylake and later processors, the operating mode of your CPU is, by default, "Active Mode with HWP", so the implications of using the performance governor are (emphasis mine):

HWP + performance

In this configuration intel_pstate will write 0 to the processor’s Energy-Performance Preference (EPP) knob (if supported) or its Energy-Performance Bias (EPB) knob (otherwise), which means that the processor’s internal P-state selection logic is expected to focus entirely on performance.

This will override the EPP/EPB setting coming from the sysfs interface (see Energy vs Performance Hints below).
Also, in this configuration the range of P-states available to the processor’s internal P-state selection logic is always restricted to the upper boundary (that is, the maximum P-state that the driver is allowed to use).
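You can see which EPP hint the processor was actually given by reading the sysfs knob directly. A minimal sketch, assuming a recent kernel; the file exists only when intel_pstate runs in active mode with HWP, hence the guard:

```shell
# Read the Energy-Performance Preference for each CPU that exposes it.
# Standard sysfs path; absent without intel_pstate active mode + HWP.
epp_summary=""
for f in /sys/devices/system/cpu/cpu*/cpufreq/energy_performance_preference; do
  [ -r "$f" ] || continue
  cpu=${f%/cpufreq/*}   # e.g. /sys/devices/system/cpu/cpu0
  cpu=${cpu##*/}        # e.g. cpu0
  epp_summary="${epp_summary}${cpu}: $(cat "$f")
"
done
if [ -n "$epp_summary" ]; then
  printf '%s' "$epp_summary"
else
  echo "EPP not exposed (no intel_pstate active mode with HWP)"
fi
```

With the performance governor you would expect the value "performance" here, matching the "write 0 to the EPP knob" behaviour quoted above.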


In a nutshell:
intel_pstate is actually a governor and a hardware driver all in one. It supports two policies:

  • the performance policy always picks the highest P-state: it maximizes performance so work finishes quickly, then drops back to a near-zero energy draw idle state, an approach also called "race to idle"
  • the powersave policy attempts to balance performance with energy savings: it selects an appropriate P-state based on CPU utilization (the load at the current P-state, which will likely drop at a higher P-state) and capacity (the maximum performance available at the highest P-state)
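The driver, governor and frequency bounds involved here can all be read from sysfs. A minimal sketch (policy0 is just the first policy; the guards handle kernels or drivers that do not expose these files):

```shell
# Report driver, governor and frequency bounds for the first cpufreq policy.
pol=/sys/devices/system/cpu/cpufreq/policy0
report=""
for f in scaling_driver scaling_governor scaling_min_freq scaling_max_freq; do
  if [ -r "$pol/$f" ]; then
    val=$(cat "$pol/$f")
  else
    val="(not available)"
  fi
  report="${report}${f}: ${val}
"
done
printf '%s' "$report"
```

On a system using intel_pstate, scaling_driver reports "intel_pstate" and scaling_governor reports one of the two policies above rather than the old ondemand/conservative governors.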

Solution 2

In my personal experience, with every computer I've used, "powersave" allows CPU frequency/voltage scaling and downclocks the CPUs at idle by default, whereas "performance" only uses frequency/voltage scaling when it has to, e.g., when the processor exceeds its thermal envelope.

I also wondered whether disabling frequency scaling has the same impact as changing the governor from "powersave" to "performance", or whether those governors also change some additional logic beyond scaling that would affect performance. For example, under the "performance" governor the "minimum" frequency threshold appears to be ignored, and the CPU jumps straight to the highest frequency available within its thermal envelope. If so, there would be no functional difference between changing the frequency governor and simply setting the min/max frequencies to their highest values.

To test this, I installed HardInfo 0.6-alpha and ran through multiple instances of all performance tests using each of the three following settings:

1. Governor: Performance, max and min cpu frequency both set at max
2. Governor: Performance, max cpu frequency at max, min frequency at min
3. Governor: Powersave, max and min cpu frequency both set at max

I could not see any consistent deviations in performance between these three settings outside the margin of error. Someone with more testing experience might be able to do a more thorough job of checking for differences between these settings, but for practical purposes they appear to be equivalent. The main issue then becomes streamlining the CPU control UI to eliminate unnecessary complexity: changing the P-state governor from powersave to performance appears to be redundant with simply locking the min/max frequencies together.
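The "lock min to max" setting used above can be sketched as a shell snippet. This is a hedged illustration: policy0 is just the first policy, writing to sysfs requires root, the snippet defaults to a dry run that only prints the command, and the 4700000 kHz fallback is a made-up placeholder for machines without cpufreq sysfs:

```shell
# Sketch of "lock min to max": copy cpuinfo_max_freq into scaling_min_freq.
DRY_RUN=${DRY_RUN:-1}
pol=/sys/devices/system/cpu/cpufreq/policy0
if [ -r "$pol/cpuinfo_max_freq" ]; then
  max=$(cat "$pol/cpuinfo_max_freq")
else
  max=4700000   # placeholder value (kHz); cpufreq sysfs not present
fi
cmd="echo $max > $pol/scaling_min_freq"
if [ "$DRY_RUN" -eq 1 ]; then
  echo "would run: $cmd"
else
  sh -c "$cmd"   # requires root
fi
```

Run with DRY_RUN=0 as root to actually apply it; repeat per policy for a multi-policy system.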

For specialized computing systems, it is easier to just leave the CPU on the "performance" governor to match its specialized workload, but for multi-use workstations it may make more sense to leave the governor on powersave and adjust performance through the min/max settings, giving users more fine-grained control over specific performance scenarios.

Solution 3

I'm not sure what page you're reading, but the page CPU frequency scaling on wiki.archlinux mentions that:

Since kernel 3.4 the necessary modules are loaded automatically and the recommended ondemand governor is enabled by default.

The ondemand governor increases the CPU speed when there's enough load on the system to benefit from an increased speed, i.e. there's something running on the CPU for a full time slot.

Check the governor you're running (/sys/devices/system/cpu/cpufreq/policyN/scaling_governor) to see if it's indeed ondemand; there's probably no need to change it from that default.
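A sketch for checking this across all policies at once, guarded because some systems expose no cpufreq policies at all:

```shell
# List the current and available governors for every exposed cpufreq policy.
found=0
for pol in /sys/devices/system/cpu/cpufreq/policy*; do
  [ -r "$pol/scaling_governor" ] || continue
  found=1
  gov=$(cat "$pol/scaling_governor")
  avail=$(cat "$pol/scaling_available_governors" 2>/dev/null || echo '?')
  printf '%s: %s (available: %s)\n' "${pol##*/}" "$gov" "$avail"
done
[ "$found" -eq 1 ] || echo "no cpufreq policies exposed on this system"
```

On an intel_pstate system the available governors will typically be just "performance powersave", which is itself a hint that the old ondemand governor does not apply to your machine.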

performance and powersave indeed seem to set the frequency directly to the maximum and minimum (respectively), and will not change it depending on the load.


Except that the wiki page also mentions that performance takes the role of ondemand on Sandy Bridge systems and later. It's also the default on those machines. So, come to think of it, if your system is new enough, you might be seeing that in action. Check the link to an article discussing this on the wiki.

Author: Sparhawk

Updated on September 18, 2022

Comments

  • Sparhawk
    Sparhawk over 1 year

    I recently read that I can eke more performance out of my CPU by setting the governor to "performance" instead of "powersave". According to the Arch wiki, this will "run the CPU at the maximum frequency" instead of the "minimum frequency".

    I found this wording confusing, so I also read the kernel documentation.

    2.1 Performance

    The CPUfreq governor "performance" sets the CPU statically to the highest frequency within the borders of scaling_min_freq and scaling_max_freq.

    2.2 Powersave

    The CPUfreq governor "powersave" sets the CPU statically to the lowest frequency within the borders of scaling_min_freq and scaling_max_freq.

    What does "statically" mean? To me, it contrasts with "dynamic", and implies frequency would never change, i.e. with powersave the CPU frequency would always be a single value, equal to scaling_min_freq. However, this is clearly not the case. I am currently running "powersave" by default. I can monitor the CPU frequencies with

    $ watch grep \"cpu MHz\" /proc/cpuinfo
    

    and see them changing dynamically.

    What does the kernel documentation mean by "statically"? What factors affect the CPU frequency, and how do these change with "powersave" and "performance"? Hence, what are the implications of changing from the former to the latter? Would a higher frequency be used? During what circumstances? Specifically, will this affect power draw, heat and lifespan of my CPU?

  • Sparhawk
    Sparhawk about 6 years
    I had actually already linked to that same wiki page in my question, but I've edited it to make it clearer. I'm using the default for me, which is powersave, but I've also edited my question to clarify. Sorry, I should have mentioned that.
  • ilkkachu
    ilkkachu about 6 years
    @Sparhawk, ah, I missed the link (the link colors here on unix.se are annoyingly inconspicuous). What processor do you have? Can it be that behaviour with the intel_pstate driver where powersave and performance both do adjust the clock speed? (unlike what the kernel docs seem to say)
  • Sparhawk
    Sparhawk about 6 years
    No worries. I have an Intel i7-8700K. I think we both interpret "static" the same, but this interpretation is at odds to what the kernel page intends. From the linked page in the question: the cpufreq governor decides (dynamically or statically) what target_freq to set within the limits of policy->{min,max}. This appears to imply that "static" can indeed change the frequency.
  • Sparhawk
    Sparhawk about 6 years
    Thank you for the excellent answer (+1). However, I'm still struggling a little to understand. The documentation seems to suggest it's a bit of a black box, i.e. send the processor one of two hints, and let it interpret that however it wants. However, in practice, I would have thought that the decision to increase the frequency (P-state) would occur only when load > cores. What else is there to base the decision on? Perhaps lag time before lowering P-state again? But I would have thought a processor would be capable of fairly rapid changes anyway.
  • don_crissti
    don_crissti about 6 years
The fact is the inner logic of the new generations of CPUs is extremely complex and, as you said, it's not quite transparent. From one of Kristen Accardi's presentations: "The most efficient frequency is calculated based on temperature, race-to-idle information, HW counters to evaluate benefit. This gives Pe = most efficient frequency. The OS provides a Pa value = how aggressive it should be. The algorithm operates between Pe and Pa."
  • Sparhawk
    Sparhawk about 6 years
    Yes, okay. The main motivation in creating this question was the final part of my question, i.e. do I want to use this mode. However, the effects seem too complicated to easily know!
  • don_crissti
    don_crissti about 6 years
@Sparhawk - what exactly is unclear? You don't want to use the performance mode unless you need to do something as fast as possible. In everyday usage you will hardly notice any difference anyway.
  • Sparhawk
    Sparhawk about 6 years
    What is unclear to me is why anyone wouldn't use performance mode. I'm on a desktop, so battery is irrelevant. The processor might run hotter, but for a shorter time, so it may well even out. Thanks for the link; that is very informative!
  • Sparhawk
    Sparhawk about 6 years
    Okay, so I thought I should actually test it empirically. When idling with powersave, a few cores run at 800 MHz, and most (~8 of 12) cores are < 3000 MHz. With performance, the difference is drastic. All cores are > 3000 MHz, and most are > 4000 MHz. I'm not sure how to measure power draw, but there's no obvious change in temperature.
  • Mehdi
    Mehdi over 4 years
    Will this affect power draw, heat and lifespan of my CPU? This was not answered. Does powersave draw less power when idle or not? I see no difference on my 4300u, both modes draw 3 Watts when idle so even more confusing.