Commit graph

25031 commits

Author SHA1 Message Date
Phapoom Saksri
ee92f8084e forgor 2025-02-28 02:41:43 +07:00
Phapoom Saksri
7df5f247bc susfs v1.5.5 2025-02-28 02:38:17 +07:00
Phapoom Saksri
3c119d076a get_cred_rcu backport. (Thanks @backslashxx for introduction) 2025-01-25 18:37:46 +07:00
Phapoom Saksri
57cbd42092 MindPatched v2.5 2025-01-25 18:19:22 +07:00
Phapoom Saksri
4311834fdc susfs4ksu 2025-01-14 00:49:01 +07:00
Phapoom Saksri
717afbe056 Repatch 2025-01-14 00:06:57 +07:00
Phapoom Saksri
157e80e925 KernelSU-Next + SuSFS 1.5.3 Patched 2024-12-28 20:42:59 +07:00
xxmustafacooTR
71e430b7c9
gaming_control, mali: more controls, optimizations 2024-09-27 17:20:00 +03:00
Diep Quynh
75f9b626ee
gaming_control: Remove task kill check
In case the task_struct was locked out, we can't check its cmdline;
if we do, we'll get a soft lockup.

Background and top-app check is enough

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:20:00 +03:00
Diep Quynh
609163c324
drivers: Introduce brand new kernel gaming mode
How to trigger gaming mode? Just open a game that is supported in the games list
How to exit? Kill the game from recents, or simply go back to the homescreen

What does this gaming mode do?
- It limits the big cluster maximum frequency to 2.0GHz, and the little cluster
  to values matching GPU frequencies as below:
+ 338MHz: 455MHz
+ 385MHz and above: 1053MHz
- These cluster freq limits overcome the heating issue while playing
  heavy games, and also save battery juice

Big thanks to [kerneltoast] for the following commits on his wahoo kernel
5ac1e81d3d
e13e2c4554

Gaming control's idea was based on these

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:19:55 +03:00
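
A minimal sketch of the frequency-cap mapping described in the gaming-mode commit above, assuming kHz units and hypothetical identifiers (the table and gaming_little_cap() are illustrative, not code from this tree):

#define BIG_GAMING_FREQ_CAP_KHZ	2002000	/* big cluster capped at ~2.0GHz */

struct gpu_little_map {
	unsigned int gpu_khz;		/* GPU frequency threshold */
	unsigned int little_cap_khz;	/* little-cluster frequency cap */
};

static const struct gpu_little_map gaming_little_caps[] = {
	{ 338000,  455000 },	/* GPU at 338MHz        -> little capped at 455MHz  */
	{ 385000, 1053000 },	/* GPU at 385MHz and up -> little capped at 1053MHz */
};

/* Return the little-cluster cap to apply for the current GPU frequency */
static unsigned int gaming_little_cap(unsigned int gpu_khz)
{
	if (gpu_khz >= gaming_little_caps[1].gpu_khz)
		return gaming_little_caps[1].little_cap_khz;
	return gaming_little_caps[0].little_cap_khz;
}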
Slawek
3f3d5aeacd
cpufreq: schedutilX: Introduce initial bringup
Import from Linux 4.9 android-common plus additional changes

Hardcode up and down rates
2024-09-27 17:19:07 +03:00
THEBOSS619
366405377c
exynos9810: schedutil/KAIR: Sync schedutil to latest patches from Samsung
Taken from the Note 10 5G & S10 [Exynos 9825/9820] and fully merged with the KAIR feature, quoted as "AI based Resource Control"
2024-09-27 17:19:06 +03:00
Viresh Kumar
76f96b92f5
cpufreq: Return 0 from ->fast_switch() on errors
CPUFREQ_ENTRY_INVALID is a special symbol which is used to specify that
an entry in the cpufreq table is invalid. But using it outside of the
scope of the cpufreq table looks a bit incorrect.

We can represent an invalid frequency by writing it as 0 instead if we
need. Note that it is already done that way for the return value of the
->get() callback.

Let's do the same for ->fast_switch() and not use CPUFREQ_ENTRY_INVALID
outside of the scope of cpufreq table.

Also update the comment over cpufreq_driver_fast_switch() to clearly
mention what this returns.

None of the drivers return CPUFREQ_ENTRY_INVALID as of now from
->fast_switch() callback and so we don't need to update any of those.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2024-09-27 17:17:50 +03:00
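
A hedged sketch of the convention this commit describes: a driver's ->fast_switch() callback returns the frequency it actually set, or 0 on failure, instead of CPUFREQ_ENTRY_INVALID. The my_driver_* helpers are placeholders, not code from any real driver:

static unsigned int my_driver_fast_switch(struct cpufreq_policy *policy,
					  unsigned int target_freq)
{
	unsigned int freq;

	/* Hypothetical helpers: resolve a supported frequency and program it */
	freq = my_driver_resolve_freq(policy, target_freq);
	if (!freq || my_driver_write_freq(policy->cpu, freq))
		return 0;	/* 0 now means "the switch failed" */

	return freq;	/* frequency the CPU is actually running at */
}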
THEBOSS619
3c068a19a3
cpufreq_schedutil: account for rt utilizations too 2024-09-27 17:17:50 +03:00
Chris Redpath
cfab345701
ANDROID: sched/rt: Add schedtune accounting to rt task enqueue/dequeue
rt tasks are currently not eligible for schedtune boosting. Make it so
by adding enqueue/dequeue hooks.

For rt tasks, schedtune only acts as a frequency boosting framework, it
has no impact on placement decisions and the prefer_idle attribute is
not used.

Also prepare schedutil use of boosted util for rt task boosting

With this change, schedtune accounting will include rt class tasks,
however boosting currently only applies to the utilization provided by
fair class tasks. Sum up the tracked CPU utilization applying boost to
the aggregate util instead - this includes RT task util in the boosting
if any tasks are runnable.

Scenario 1, considering one CPU:
1x rt task running, util 250, boost 0
1x cfs task runnable, util 250, boost 50
 previous util=250+(50pct_boosted_250) = 887
 new      util=50_pct_boosted_500      = 762

Scenario 2, considering one CPU:
1x rt task running, util 250, boost 50
1x cfs task runnable, util 250, boost 0
 previous util=250+250                 = 500
 new      util=50_pct_boosted_500      = 762

Scenario 3, considering one CPU:
1x rt task running, util 250, boost 50
1x cfs task runnable, util 250, boost 50
 previous util=250+(50pct_boosted_250) = 887
 new      util=50_pct_boosted_500      = 762

Scenario 4:
1x rt task running, util 250, boost 50
 previous util=250                 = 250
 new      util=50_pct_boosted_250  = 637

Change-Id: Ie287cbd0692468525095b5024db9faac8b2f4878
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2024-09-27 17:17:50 +03:00
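
The scenario arithmetic above follows the usual schedtune margin idea. As an illustration only (assuming a maximum CPU capacity of 1024; the function name below is hypothetical, not the literal code), the numbers can be reproduced like this:

/*
 * boosted(util, boost%) = util + (1024 - util) * boost% / 100
 *
 * Boost the aggregate (new): boosted(250 + 250, 50) = 500 + 262 = 762
 * Boost only CFS util (old): 250 + boosted(250, 50) = 250 + 637 = 887
 */
static unsigned long schedtune_boosted_util(unsigned long util, int boost_pct)
{
	unsigned long margin = ((1024 - util) * boost_pct) / 100;

	return util + margin;
}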
Diep Quynh
fbebe03e9d
sched: ems: Take current capacity into account when choosing prefer_idle CPUs
We want the CPUs to stay at the lowest OPP possible and to avoid excessive
task packing, as prefer_idle tasks are latency-sensitive

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:50 +03:00
Miguel de Dios
10116bf572
sched: core: Disable double lock/unlock balance in move_queued_task()
CONFIG_LOCK_STAT shows warnings in move_queued_task() for releasing a
pinned lock. The warnings are due to the calls to
double_unlock_balance() added to snapshot WALT. Let's disable them if
not building with SCHED_WALT.

Bug: 123720375
Change-Id: I8bff8550c4f79ca535556f6ec626f17ff5fce637
Signed-off-by: Miguel de Dios <migueldedios@google.com>
2024-09-27 17:17:50 +03:00
Diep Quynh
665c080acf
sched: ems: Update EMS to latest beyond2lte source
Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:49 +03:00
Leo Yan
cffbc6b5ce
cpufreq: schedutil: clamp util to CPU maximum capacity
The code gets the CPU util by accumulating the different scheduling
classes, and when the total util value is larger than the CPU capacity
it clamps util to the CPU maximum capacity. This gives a correct util
value when using the PELT signal, but with the WALT signal the clamp is
missed.

On the other hand, WALT doesn't accumulate the different class utilizations,
but because a boost margin is applied to the WALT signal, the CPU util
value can still be larger than the CPU capacity; so this patch always
clamps util to the CPU maximum capacity.

Change-Id: I05481ddbf20246bb9be15b6bd21b6ec039015ea8
Signed-off-by: Leo Yan <leo.yan@linaro.org>
2024-09-27 17:17:49 +03:00
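
A minimal sketch of the clamp this commit describes (illustrative, not the literal hunk), where max is the CPU's maximum capacity:

static unsigned long sugov_clamped_util(unsigned long util, unsigned long max)
{
	/* PELT or boosted WALT may exceed capacity; never report more than max */
	return min(util, max);
}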
Chris Redpath
082d262afe
ANDROID: Move schedtune en/dequeue before schedutil update triggers
CPU rq util updates happen when rq signals are updated as part of
enqueue and dequeue operations. Doing these updates triggers a call to
the registered util update handler, which takes schedtune boosting
into account. Enqueueing the task in the correct schedtune group after
this happens means that we will potentially not see the boost for an
entire throttle period.

Move the enqueue/dequeue operations for schedtune before the signal
updates which can trigger OPP changes.

Change-Id: I4236e6b194bc5daad32ff33067d4be1987996780
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2024-09-27 17:17:49 +03:00
Diep Quynh
506a283fc1
sched: ems: energy: Fix energy CPU selection algorithm
- Consider CPU idleness to avoid excessive task packing, which hurts
overall performance
- Use cpu_util_wake() instead of cpu_util(), fixing a miscalculation
that makes the selection code think the CPU is overutilized
- Mark CPU utilization with task util accounted as least utilized
CPU

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:49 +03:00
Diep Quynh
8a2aace408
sched: ems: service: Fix service CPU selection algorithm
1. Consider boosted task utilization when choosing/considering
maximum spare capacity
2. Take advantage of CPU cstates
3. Account for task utilization to avoid overutilizing a CPU

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:49 +03:00
Diep Quynh
18a9c42851
sched: ems: ontime: Consider same exit latency targets on selecting CPUs
Additional condition for CPUs in the same cluster

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:49 +03:00
Diep Quynh
e669bacd88
sched: ems: Consider idle states when choosing prefer_idle CPUs
The shallowest-idle CPU has the lowest wake-up latency, therefore it should
be chosen with higher priority

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:49 +03:00
Diep Quynh
1ca34dc26d
sched: ems: ontime: Honor sync only when target CPU is about to idle
Avoid excessive task packing, which leads to latency due to sync

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:49 +03:00
Diep Quynh
3752370f61
sched: ems: energy: Account boosted task util on CPU selection path
Boosted tasks should be placed on the bigger cluster

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:49 +03:00
Vincent Guittot
65805a1524
sched/util_est: Fix util_est_dequeue() for throttled cfs_rq
When a cfs_rq is throttled, parent cfs_rq->nr_running is decreased and
everything happens at cfs_rq level. Currently util_est stays unchanged
in such case and it keeps accounting the utilization of throttled tasks.
This can somewhat make sense as we don't dequeue tasks but only throttled
cfs_rq.

If a task of another group is enqueued/dequeued and root cfs_rq becomes
idle during the dequeue, util_est will be cleared whereas it was
accounting util_est of throttled tasks before. So the behavior of util_est
is not always the same regarding throttled tasks and depends on side
activity. Furthermore, util_est will not be updated when the cfs_rq is
unthrottled as everything happens at cfs_rq level. The main result is that
util_est will stay null whereas we now have running tasks. We have to wait
for the next dequeue/enqueue of the previously throttled tasks to get an
up-to-date util_est.

Remove the assumption that cfs_rq's estimated utilization of a CPU is 0
if there is no running task so the util_est of a task remains until the
latter is dequeued even if its cfs_rq has been throttled.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Patrick Bellasi <patrick.bellasi@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 7f65ea42eb00 ("sched/fair: Add util_est on top of PELT")
Link: http://lkml.kernel.org/r/1528972380-16268-1-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2024-09-27 17:17:49 +03:00
Quentin Perret
3493b7b66f
FROMLIST: sched/topology: Lowest CPU asymmetry sched_domain level pointer
Add another member to the family of per-cpu sched_domain shortcut
pointers. This one, sd_asym_cpucapacity, points to the lowest level
at which the SD_ASYM_CPUCAPACITY flag is set. While at it, rename the
sd_asym shortcut to sd_asym_packing to avoid confusions.

Generally speaking, the largest opportunity to save energy via
scheduling comes from a smarter exploitation of heterogeneous platforms
(i.e. big.LITTLE). Consequently, the sd_asym_cpucapacity shortcut will
be used at first as the lowest domain where Energy-Aware Scheduling
(EAS) should be applied. For example, it is possible to apply EAS within
a socket on a multi-socket system, as long as each socket has an
asymmetric topology. Energy-aware cross-sockets wake-up balancing can
only happen if this_cpu and prev_cpu are in different sockets.

Change-Id: Ie777a1733991d40ce063b318e915199ba3c5416a
cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
Message-Id: <20181016101513.26919-7-quentin.perret@arm.com>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>

[diepquynh]: Backport to 4.4 EAS
2024-09-27 17:17:49 +03:00
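
For illustration, a per-CPU sched_domain shortcut like sd_asym_cpucapacity is typically consumed as below (this mirrors the later upstream pattern; the backported 4.4 code may differ):

static bool cpu_in_asym_domain(int cpu)
{
	struct sched_domain *sd;
	bool asym;

	rcu_read_lock();
	sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu));
	asym = sd != NULL;	/* EAS can be considered within this domain's span */
	rcu_read_unlock();

	return asym;
}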
Morten Rasmussen
0279db1f0d
UPSTREAM: sched/topology: Add SD_ASYM_CPUCAPACITY flag detection
The SD_ASYM_CPUCAPACITY sched_domain flag is supposed to mark the
sched_domain in the hierarchy where all CPU capacities are visible for
any CPU's point of view on asymmetric CPU capacity systems. The
scheduler can then take capacity asymmetry into account when
balancing at this level. It also serves as an indicator for how wide
task placement heuristics have to search to consider all available CPU
capacities as asymmetric systems might often appear symmetric at
smallest level(s) of the sched_domain hierarchy.

The flag has been around for a while but so far has only been set by
out-of-tree code in Android kernels. One solution is to let each
architecture provide the flag through a custom sched_domain topology
array and associated mask and flag functions. However,
SD_ASYM_CPUCAPACITY is special in the sense that it depends on the
capacity and presence of all CPUs in the system, i.e. when hotplugging
all CPUs out except those with one particular CPU capacity the flag
should disappear even if the sched_domains don't collapse. Similarly,
the flag is affected by cpusets where load-balancing is turned off.
Detecting when the flags should be set therefore depends not only on
topology information but also the cpuset configuration and hotplug
state. The arch code doesn't have easy access to the cpuset
configuration.

Instead, this patch implements the flag detection in generic code where
cpusets and hotplug state are already taken care of. All the arch is
responsible for is to implement arch_scale_cpu_capacity() and force a
full rebuild of the sched_domain hierarchy if capacities are updated,
e.g. later in the boot process when cpufreq has initialized.

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: valentin.schneider@arm.com
Cc: vincent.guittot@linaro.org
Link: http://lkml.kernel.org/r/1532093554-30504-2-git-send-email-morten.rasmussen@arm.com
[ Fixed 'CPU' capitalization. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 05484e0984487d42e97c417cbb0697fa9d16e7e9)
Signed-off-by: Quentin Perret <quentin.perret@arm.com>

Change-Id: I1d5f695a95f8d023f1ecf14ecb71a558ceb67ed6

[diepquynh]: Backport to 4.4 EAS
2024-09-27 17:17:49 +03:00
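
A rough sketch of the detection idea in generic code (not the literal implementation): asymmetry is present when the online CPUs report more than one distinct arch_scale_cpu_capacity() value.

static bool cpus_have_asym_capacity(void)
{
	unsigned long cap, first_cap = 0;
	int cpu;

	for_each_online_cpu(cpu) {
		/* 4.4-era signature also takes a sched_domain pointer */
		cap = arch_scale_cpu_capacity(NULL, cpu);
		if (!first_cap)
			first_cap = cap;
		else if (cap != first_cap)
			return true;
	}

	return false;
}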
Sultan Alsawaf
651b5e6e02
kernel: Add API to mark IRQs and kthreads as performance critical
On devices with a CPU that contains heterogeneous cores (e.g.,
big.LITTLE), it can be beneficial to place some performance-critical
IRQs and kthreads onto the performance CPU cluster in order to improve
performance.

This commit adds the following APIs:
-kthread_run_perf_critical() to create and start a perf-critical kthread
-irq_set_perf_affinity() to mark an active IRQ as perf-critical
-IRQF_PERF_CRITICAL to schedule an IRQ and any threads it may have onto
 performance CPUs
-PF_PERF_CRITICAL to mark a process (mainly a kthread) as performance
 critical (this is used by kthread_run_perf_critical())

In order to accommodate this new API, the following changes are made:
-Performance-critical IRQs are distributed evenly among online CPUs
 available in cpu_perf_mask
-Performance-critical IRQs have their affinities reaffined upon exit
 from suspend (since the affinities are broken when non-boot CPUs are
 disabled)
-Performance-critical IRQs and their threads have their affinities reset
 upon entering suspend, so that upon immediate suspend exit (when only
 the boot CPU is online), interrupts can be processed and interrupt
 threads can be scheduled onto an online CPU (otherwise we'd hit a
 kernel BUG)
-__set_cpus_allowed_ptr() is modified to enforce a performance-critical
 kthread's affinity
-Perf-critical IRQs are marked with IRQD_AFFINITY_MANAGED so userspace
 can't mess with their affinity

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-09-27 17:17:49 +03:00
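
A hedged usage sketch of the APIs listed in the commit above; kthread_run_perf_critical() is assumed to take the same arguments as kthread_run(), and the my_* names are placeholders rather than identifiers from this tree:

static int my_thread_fn(void *data)
{
	while (!kthread_should_stop())
		schedule_timeout_interruptible(HZ);
	return 0;
}

static int my_driver_setup(unsigned int my_irq, irq_handler_t my_handler, void *dev)
{
	struct task_struct *task;
	int ret;

	/* Create and start a kthread affined to the performance cluster */
	task = kthread_run_perf_critical(my_thread_fn, dev, "my_perf_kthread");
	if (IS_ERR(task))
		return PTR_ERR(task);

	/* Request the IRQ with the perf-critical flag so it lands on big CPUs */
	ret = request_irq(my_irq, my_handler, IRQF_PERF_CRITICAL, "my_perf_irq", dev);
	if (ret)
		kthread_stop(task);

	return ret;
}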
Sultan Alsawaf
e629059826
cpumask: Add cpumasks for big and LITTLE CPU clusters
Add cpu_lp_mask and cpu_perf_mask to represent the CPUs that belong to
each cluster in a dual-cluster, heterogeneous system.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2024-09-27 17:17:48 +03:00
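
Minimal illustration of combining the new masks with the standard online mask to pick a performance-cluster CPU (sketch only; pick_online_perf_cpu() is hypothetical):

static int pick_online_perf_cpu(void)
{
	cpumask_t candidates;

	cpumask_and(&candidates, cpu_perf_mask, cpu_online_mask);
	if (cpumask_empty(&candidates))
		cpumask_and(&candidates, cpu_lp_mask, cpu_online_mask);

	/* Returns nr_cpu_ids if both clusters are fully offline */
	return cpumask_first(&candidates);
}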
Runmin Wang
af7d0b5d01
genirq: Add IRQ_AFFINITY_MANAGED flag
Add IRQ_AFFINITY_MANAGED flag and related kernel APIs so that
a kernel driver can modify an irq's status in such a way that
user space affinity changes will be ignored. Kernel space's
affinity setting will not be changed.

Change-Id: Ib2d5ea651263bff4317562af69079ad950c9e71e
Signed-off-by: Runmin Wang <runminw@codeaurora.org>
2024-09-27 17:17:48 +03:00
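
The exact helper added by this commit isn't shown in the log; as a sketch, marking an IRQ with the new flag through the generic irq_set_status_flags() might look like the following, after which user space affinity writes are ignored while kernel-initiated changes still apply:

static void mark_irq_affinity_managed(unsigned int irq)
{
	irq_set_status_flags(irq, IRQ_AFFINITY_MANAGED);
}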
Diep Quynh
4c124fbe27
sched: Make cpu_util() public for mpsd
Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:48 +03:00
Diep Quynh
df1f9e983f
sched: ems: Adapt Samsung's new prefer_perf request functions
This fixes races caused by multiple similar kpp calls being made from other places

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:48 +03:00
Diep Quynh
0b7bbc5efc
sched: ems: Kill sync in energy CPU selection path
We have upstream sync covered. There's no need for this redundant one

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:48 +03:00
Diep Quynh
1844de20e5
sched: ems: Use current upstream CPU sync condition
It's much more stable than what is given in EMS

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:48 +03:00
John Dias
1cbb7f7ac3
sched: fair: avoid little cpus due to sync, prev_bias
Important threads can get forced to little CPUs
when the sync or prev_bias hints are followed
blindly. This patch adds a check to see whether
those paths are forcing the task to a cpu that
has less capacity than other CPUs available for
the task. If so, we ignore the sync and prev_bias
and allow the scheduler to make a free decision.

Bug: 110714207
Change-Id: Ie38061cd78a4e19c8783ffd460cc81caebfe9c75
Signed-off-by: John Dias <joaodias@google.com>
Signed-off-by: celtare21 <celtare21@gmail.com>
2024-09-27 17:17:48 +03:00
Todd Kjos
1f8ac0debc
ANDROID: sched/fair: if sync flag ignored, try to place in same cluster
If the sync flag is ignored because the current cpu is not in
the affinity mask for the target of a sync wakeup (usually
binder call), prefer to place in the same cluster if
possible. The main case is a "top-app" task waking a
"foreground" task when the top-app task is running on
a CPU that is not in the foreground cpuset. This patch
causes the "top-app" search order be used (big cpus first)
even if schedtune.boost is 0 when the sync flag failed.

Bug: 110464019
Change-Id: Ib026231cee5c8b9d616d4ad4b92dd8502e79d0c0
Signed-off-by: Todd Kjos <tkjos@google.com>
Keep changes from 9f7516a28d7ca45d2d69352d70f4df4365041dd4.
2024-09-27 17:17:48 +03:00
Rick Yiu
ceec3107b8
sched/fair: honor sync only if CPU is about to go idle
sync is causing excessive latencies during binder replies as it's causing
migration of important tasks to a busy CPU. In case the CPU has a lot of
tasks running, prevent sync from happening

bug: 78790904
Change-Id: I8e4ef0d331a92b86111882bfe1b68b93a8b5a687
Signed-off-by: Joel Fernandes <joelaf@google.com>
(cherry picked from commit 3944217bbbc8f4344a31e5837e926cec630690cc)
(cherry picked from commit 281c8d49a7a21507afe64f3b44c1753b8a464ac0)
2024-09-27 17:17:48 +03:00
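
Illustrative condition only (not the exact hunk from this cherry-pick): honor the sync hint only when the waking CPU would go idle once the waker sleeps, i.e. the waker is its only runnable task.

static bool want_sync_wakeup(int cpu, int sync)
{
	return sync && cpu_rq(cpu)->nr_running == 1;
}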
Diep Quynh
6b1c006bfb
sched: tune: Fix initialization with EMS energy model
Since we don't use the ARM energy model for EAS, schedtune will be disabled.
As a result, any input to dev nodes will cause panics

Instead of disabling, force schedtune to be active with EMS energy model

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:48 +03:00
Diep Quynh
c5032402f2
sched: fair: Make cpu_util() public for EMS
Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:48 +03:00
Park Bumgyu
420659dd29
[RAMEN9610-11191] sched: ems: check empty of gb-list
Change-Id: I942b1f9cb43f46c3e90511c58c77a5254d45f15c
Signed-off-by: Park Bumgyu <bumgyu.park@samsung.com>
2024-09-27 17:17:48 +03:00
Park Bumgyu
bcc83ee1d9
[RAMEN9610-9421][COMMON] sched: ems: support prefer perf service
Change-Id: Ida3e81c598a22e984839533e62604ffd20c94dc3
Signed-off-by: Park Bumgyu <bumgyu.park@samsung.com>
2024-09-27 17:17:48 +03:00
Park Bumgyu
03fa48b654
[RAMEN9610-9421][COMMON] sched: ems: introduce ems service
Change-Id: I6e0cc8b8db43035c5c933ed292f443c9a67e4520
Signed-off-by: Park Bumgyu <bumgyu.park@samsung.com>
2024-09-27 17:17:48 +03:00
Youngtae Lee
a41d2e351a
[RAMEN9610-9418][COMMON] sched: frt: Fix zero dividing bug
Change-Id: Id738b19174de909113aa8c5224a3e57f1762073d
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
2024-09-27 17:17:47 +03:00
Youngtae Lee
ffaecddd4c
[RAMEN9610-9418][COMMON] sched: frt: fix cpumask warning bug
Change-Id: I27eb08fdcbe6ce7b35a09a38b2aa2fb4b90e76a7
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
2024-09-27 17:17:47 +03:00
Daeyeong Lee
e1d4ace5ea
[RAMEN9610-9418][COMMON] sched: ems: Disable ontime when capacity of coregroup is zero
Change-Id: I22c3b9d97ca5b5f598436cfb06062b9cb24f2ff6
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
2024-09-27 17:17:47 +03:00
Daeyeong Lee
95b0f687a3
[RAMEN9610-9418][COMMON] sched: ems: Modify get_cpu_max_capacity not to access a NULL pointer
Change-Id: I2a88779e24ba4f30d0423224d3cdc78aea6e586a
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
2024-09-27 17:17:47 +03:00
Sangkyu Kim
b5330d5c93
[RAMEN9610-9418][COMMON] ems: frt: fix initialize variable for check condition
Change-Id: I330d6250f3a8873ffd0bdbb1bca524db6ca56d7d
Signed-off-by: Sangkyu Kim <skwith.kim@samsung.com>
2024-09-27 17:17:47 +03:00
Youngtae Lee
064d7bf938
sched: rt: fix prevent
Change-Id: Ia34ab264f22a956c45a654d9e4d5e737c5629822
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
2024-09-27 17:17:47 +03:00