Commit graph

656301 commits

Author SHA1 Message Date
Phapoom Saksri
3887d47484 a 2025-02-28 16:45:13 +07:00
Phapoom Saksri
c9f1b973da bru 2025-02-28 15:57:45 +07:00
Phapoom Saksri
1499b484a2 k 2025-02-28 02:45:00 +07:00
Phapoom Saksri
ee92f8084e forgor 2025-02-28 02:41:43 +07:00
Phapoom Saksri
7df5f247bc susfs v1.5.5 2025-02-28 02:38:17 +07:00
Phapoom Saksri
d08fc9a61c
Bump version to 1.5.4 2025-01-28 00:44:56 +07:00
Phapoom Saksri
d6f64450af No more ksu fetching/this script better. 2025-01-25 18:47:18 +07:00
Phapoom Saksri
48fd4412e8 inode.c pls stop being idiot 2025-01-25 18:42:17 +07:00
Phapoom Saksri
3c119d076a get_cred_rcu backport. (Thanks @backslashxx for introduction) 2025-01-25 18:37:46 +07:00
Phapoom Saksri
57cbd42092 MindPatched v2.5 2025-01-25 18:19:22 +07:00
Phapoom Saksri
b4cf681687 drunk 2025-01-14 01:20:57 +07:00
Phapoom Saksri
4311834fdc susfs4ksu 2025-01-14 00:49:01 +07:00
Phapoom Saksri
717afbe056 Repatch 2025-01-14 00:06:57 +07:00
Phapoom Saksri
157e80e925 KernelSU-Next + SuSFS 1.5.3 Patched 2024-12-28 20:42:59 +07:00
Mustafa Gökmen
1589ebebbd
drivers: gpu: mali: port r51p0 2024-09-27 17:22:11 +03:00
THEBOSS
cada736b53
treewide: Nuke Samsung's senseless loggings/debuggings deeply 2024-09-27 17:22:10 +03:00
xxmustafacooTR
66029da636
gaming_control: move custom freq sets to a workqueue
so the function can wait for custom voltage initialization without breaking app launches
2024-09-27 17:20:00 +03:00
xxmustafacooTR
c2850f28c2
gaming_control: add a full thermal bypass switch 2024-09-27 17:20:00 +03:00
xxmustafacooTR
248ac42496
gaming_control: add custom gpu/cpus freq and voltage selections 2024-09-27 17:20:00 +03:00
Michael
a576e6974d
usb: Add Drivedroid Support 2024-09-27 17:20:00 +03:00
S133PY
4284c95429
hid patch 2024-09-27 17:20:00 +03:00
morogoku
d8f12348a5
battery: add unstable power detection
Signed-off-by: morogoku <morogoku@hotmail.com>
2024-09-27 17:20:00 +03:00
morogoku
83dbc07113
battery: Add sec charger controls, updated 2024-09-27 17:20:00 +03:00
xxmustafacooTR
71e430b7c9
gaming_control, mali: more controls, optimizations 2024-09-27 17:20:00 +03:00
Diep Quynh
850830b157
gaming_control: Adapt PM QoS implementation
Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:20:00 +03:00
Diep Quynh
983930637f
gaming_control: Make arrays non-static
Static arrays keep their elements constant, which breaks the
apps' package-based control of the driver

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:20:00 +03:00
Diep Quynh
cd518f863c
gaming_control: Implement API to store running games
This serves two purposes:
- To track games' PIDs
- In case a game is killed, its stored PID is removed.
  When no games are left running, turn off gaming mode
  immediately to prevent an unwanted frequency lockup caused by
  a task being killed without the driver knowing it is dead

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:20:00 +03:00
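The PID-tracking idea described in this commit can be sketched in plain user-space C. All names here (`game_pids`, `gaming_mode`, the helpers) are illustrative stand-ins, not the driver's actual symbols: store each running game's PID, swap-remove one when its game dies, and drop gaming mode as soon as the list empties.

```c
/* Minimal sketch of the PID store: track running games, and turn
 * gaming mode off immediately once no tracked game remains. */
#include <stdbool.h>
#include <stddef.h>

#define MAX_GAMES 8

static int game_pids[MAX_GAMES];
static size_t nr_games;
static bool gaming_mode;

void game_started(int pid)
{
    if (nr_games < MAX_GAMES)
        game_pids[nr_games++] = pid;
    gaming_mode = true;
}

void game_exited(int pid)
{
    for (size_t i = 0; i < nr_games; i++) {
        if (game_pids[i] == pid) {
            /* swap-remove: overwrite with the last stored PID */
            game_pids[i] = game_pids[--nr_games];
            break;
        }
    }
    if (nr_games == 0)
        gaming_mode = false; /* no games left: release freq locks */
}
```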
Diep Quynh
75f9b626ee
gaming_control: Remove task kill check
If the task_struct is locked out, we can't check its cmdline.
If we do, we'll get a soft lockup.

The background and top-app checks are enough

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:20:00 +03:00
Diep Quynh
609163c324
drivers: Introduce brand new kernel gaming mode
How to trigger gaming mode? Just open a game that is in the supported games list
How to exit? Kill the game from recents, or simply go back to the homescreen

What does this gaming mode do?
- It limits the big cluster's maximum frequency to 2.0GHz, and the little
  cluster to values matching GPU frequencies as below:
+ 338MHz: 455MHz
+ 385MHz and above: 1053MHz
- These cluster frequency limits overcome the heating issue while playing
  heavy games, and also save battery

Big thanks to [kerneltoast] for the following commits on his wahoo kernel
5ac1e81d3d
e13e2c4554

Gaming control's idea was based on these

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:19:55 +03:00
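The GPU-to-little-cluster mapping quoted in the commit (338MHz GPU → 455MHz, 385MHz and above → 1053MHz) amounts to a small threshold table. A hypothetical helper, not the driver's real code:

```c
/* Sketch: map the current GPU frequency (MHz) to the little-cluster
 * frequency cap (kHz) per the table in the commit message. */
unsigned int little_limit_khz(unsigned int gpu_mhz)
{
    if (gpu_mhz >= 385)
        return 1053000;
    if (gpu_mhz >= 338)
        return 455000;
    return 0; /* below the gaming range: no cap applied */
}
```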
Slawek
3f3d5aeacd
cpufreq: schedutilX: Introduce initial bringup
Imported from the Linux 4.9 android-common tree, plus additional changes

Hardcode the up and down rates
2024-09-27 17:19:07 +03:00
THEBOSS619
366405377c
exynos9810: schedutil/KAIR: Sync schedutil to latest patches from Samsung
Taken from the Note 10 5G & S10 [Exynos 9825/9820] and fully merged with the KAIR feature, which Samsung describes as "AI based Resource Control"
2024-09-27 17:19:06 +03:00
Viresh Kumar
76f96b92f5
cpufreq: Return 0 from ->fast_switch() on errors
CPUFREQ_ENTRY_INVALID is a special symbol which is used to specify that
an entry in the cpufreq table is invalid. But using it outside of the
scope of the cpufreq table looks a bit incorrect.

We can represent an invalid frequency by writing it as 0 instead if we
need. Note that it is already done that way for the return value of the
->get() callback.

Let's do the same for ->fast_switch() and not use CPUFREQ_ENTRY_INVALID
outside the scope of the cpufreq table.

Also update the comment over cpufreq_driver_fast_switch() to clearly
mention what this returns.

None of the drivers return CPUFREQ_ENTRY_INVALID as of now from
->fast_switch() callback and so we don't need to update any of those.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2024-09-27 17:17:50 +03:00
THEBOSS619
3c068a19a3
cpufreq_schedutil: account for rt utilizations too 2024-09-27 17:17:50 +03:00
Chris Redpath
cfab345701
ANDROID: sched/rt: Add schedtune accounting to rt task enqueue/dequeue
rt tasks are currently not eligible for schedtune boosting. Make it so
by adding enqueue/dequeue hooks.

For rt tasks, schedtune only acts as a frequency boosting framework, it
has no impact on placement decisions and the prefer_idle attribute is
not used.

Also prepare schedutil use of boosted util for rt task boosting

With this change, schedtune accounting will include rt class tasks,
however boosting currently only applies to the utilization provided by
fair class tasks. Sum up the tracked CPU utilization applying boost to
the aggregate util instead - this includes RT task util in the boosting
if any tasks are runnable.

Scenario 1, considering one CPU:
1x rt task running, util 250, boost 0
1x cfs task runnable, util 250, boost 50
 previous util=250+(50pct_boosted_250) = 887
 new      util=50_pct_boosted_500      = 762

Scenario 2, considering one CPU:
1x rt task running, util 250, boost 50
1x cfs task runnable, util 250, boost 0
 previous util=250+250                 = 500
 new      util=50_pct_boosted_500      = 762

Scenario 3, considering one CPU:
1x rt task running, util 250, boost 50
1x cfs task runnable, util 250, boost 50
 previous util=250+(50pct_boosted_250) = 887
 new      util=50_pct_boosted_500      = 762

Scenario 4:
1x rt task running, util 250, boost 50
 previous util=250                 = 250
 new      util=50_pct_boosted_250  = 637

Change-Id: Ie287cbd0692468525095b5024db9faac8b2f4878
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2024-09-27 17:17:50 +03:00
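The scenario numbers above are consistent with schedtune's usual boost-margin formula: add to the utilization a boost-percentage fraction of the headroom up to `SCHED_CAPACITY_SCALE` (1024). For instance, 50%-boosting util 500 gives 500 + (1024 − 500)/2 = 762, matching "50_pct_boosted_500 = 762". A user-space sketch of that arithmetic (illustrative, not the kernel's exact code):

```c
/* Sketch of the schedtune boost margin: boosted = util + headroom * boost%. */
#define SCHED_CAPACITY_SCALE 1024UL

unsigned long boosted_util(unsigned long util, unsigned int boost_pct)
{
    unsigned long margin = (SCHED_CAPACITY_SCALE - util) * boost_pct / 100;
    return util + margin;
}
```

With this helper, Scenario 4's "50_pct_boosted_250 = 637" also checks out: 250 + (1024 − 250)/2 = 637.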
Diep Quynh
fbebe03e9d
sched: ems: Take current capacity into account when choosing prefer_idle CPUs
We want the CPUs to stay at the lowest possible OPP, and avoid excessive
task packing, as prefer_idle tasks are latency-sensitive tasks

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:50 +03:00
Miguel de Dios
10116bf572
sched: core: Disable double lock/unlock balance in move_queued_task()
CONFIG_LOCK_STAT shows warnings in move_queued_task() for releasing a
pinned lock. The warnings are due to the calls to
double_unlock_balance() added to snapshot WALT. Let's disable them if
not building with SCHED_WALT.

Bug: 123720375
Change-Id: I8bff8550c4f79ca535556f6ec626f17ff5fce637
Signed-off-by: Miguel de Dios <migueldedios@google.com>
2024-09-27 17:17:50 +03:00
Diep Quynh
5666ed2020
arm64: dts: universal9810: Adapt new EMS data tables
- Introduce new SSE energy data (energy required to run 32-bit applications)
- Adapt new schedutil and EMS PART tables

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:50 +03:00
Diep Quynh
665c080acf
sched: ems: Update EMS to latest beyond2lte source
Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:49 +03:00
Leo Yan
cffbc6b5ce
cpufreq: schedutil: clamp util to CPU maximum capacity
The code gets the CPU util by accumulating the different scheduling
classes, and when the total util value is larger than the CPU capacity
it clamps the util to the CPU's maximum capacity. So we get a correct
util value with the PELT signal, but with the WALT signal it misses
clamping the util value.

On the other hand, WALT doesn't accumulate the different classes'
utilization, but since a boost margin is applied to the WALT signal,
the CPU util value can end up larger than the CPU capacity; so this
patch always clamps util to the CPU's maximum capacity.

Change-Id: I05481ddbf20246bb9be15b6bd21b6ec039015ea8
Signed-off-by: Leo Yan <leo.yan@linaro.org>
2024-09-27 17:17:49 +03:00
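The fix above boils down to a final unconditional clamp: whichever signal (PELT or WALT) produced the aggregate, never report more than the CPU's maximum capacity. A minimal sketch with illustrative names:

```c
/* Sketch: clamp an aggregated utilization to the CPU's max capacity,
 * regardless of which signal (PELT or WALT) produced it. */
unsigned long clamp_util(unsigned long util, unsigned long capacity_max)
{
    return util < capacity_max ? util : capacity_max;
}
```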
Diep Quynh
d880eaa6e7
arm64: dts: universal9810: Adapt Samsung's freqvar boost vector table
Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:49 +03:00
Chris Redpath
082d262afe
ANDROID: Move schedtune en/dequeue before schedutil update triggers
CPU rq util updates happen when rq signals are updated as part of
enqueue and dequeue operations. Doing these updates triggers a call to
the registered util update handler, which takes schedtune boosting
into account. Enqueueing the task in the correct schedtune group after
this happens means that we will potentially not see the boost for an
entire throttle period.

Move the enqueue/dequeue operations for schedtune before the signal
updates which can trigger OPP changes.

Change-Id: I4236e6b194bc5daad32ff33067d4be1987996780
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2024-09-27 17:17:49 +03:00
Diep Quynh
506a283fc1
sched: ems: energy: Fix energy CPU selection algorithm
- Consider CPU idleness to avoid excessive task packing, which hurts
overall performance
- Use cpu_util_wake() instead of cpu_util(), fixing miscalculation
that causes the selection code thinking that the CPU is overutilized
- Mark CPU utilization with task util accounted as least utilized
CPU

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:49 +03:00
Diep Quynh
8a2aace408
sched: ems: service: Fix service CPU selection algorithm
1. Consider boosted task utilization when choosing/considering
maximum spare capacity
2. Take advantage of CPU cstates
3. Account for task utilization to avoid overutilizing a CPU

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:49 +03:00
Diep Quynh
18a9c42851
sched: ems: ontime: Consider same exit latency targets on selecting CPUs
Additional condition for CPUs in the same cluster

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:49 +03:00
Diep Quynh
e669bacd88
sched: ems: Consider idle states when choosing prefer_idle CPUs
Shallowest idle CPU has the lowest wake-up latency, therefore it should
have higher priority of being chosen

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:49 +03:00
Diep Quynh
1ca34dc26d
sched: ems: ontime: Honor sync only when target CPU is about to idle
Avoid excessive task packing, which leads to latency due to sync

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:49 +03:00
Diep Quynh
3752370f61
sched: ems: energy: Account boosted task util on CPU selection path
Boosted tasks should be placed on bigger cluster

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:49 +03:00
Vincent Guittot
65805a1524
sched/util_est: Fix util_est_dequeue() for throttled cfs_rq
When a cfs_rq is throttled, parent cfs_rq->nr_running is decreased and
everything happens at cfs_rq level. Currently util_est stays unchanged
in such case and it keeps accounting the utilization of throttled tasks.
This can somewhat make sense as we don't dequeue tasks but only throttled
cfs_rq.

If a task of another group is enqueued/dequeued and root cfs_rq becomes
idle during the dequeue, util_est will be cleared whereas it was
accounting util_est of throttled tasks before. So the behavior of util_est
is not always the same regarding throttled tasks and depends on side
activity. Furthermore, util_est will not be updated when the cfs_rq is
unthrottled, as everything happens at cfs_rq level. The main result is that
util_est will stay null whereas we now have running tasks. We have to wait
for the next dequeue/enqueue of the previously throttled tasks to get an
up-to-date util_est.

Remove the assumption that cfs_rq's estimated utilization of a CPU is 0
if there is no running task so the util_est of a task remains until the
latter is dequeued even if its cfs_rq has been throttled.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Patrick Bellasi <patrick.bellasi@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 7f65ea42eb00 ("sched/fair: Add util_est on top of PELT")
Link: http://lkml.kernel.org/r/1528972380-16268-1-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2024-09-27 17:17:49 +03:00
Diep Quynh
313724532a
hotplug_governor: Use SD_ASYM_CPUCAPACITY sched domain
The energy-aware sched domain is no longer usable with the EMS scheduler,
so we switch to the next lowest domain

Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
2024-09-27 17:17:49 +03:00
Quentin Perret
3493b7b66f
FROMLIST: sched/topology: Lowest CPU asymmetry sched_domain level pointer
Add another member to the family of per-cpu sched_domain shortcut
pointers. This one, sd_asym_cpucapacity, points to the lowest level
at which the SD_ASYM_CPUCAPACITY flag is set. While at it, rename the
sd_asym shortcut to sd_asym_packing to avoid confusions.

Generally speaking, the largest opportunity to save energy via
scheduling comes from a smarter exploitation of heterogeneous platforms
(i.e. big.LITTLE). Consequently, the sd_asym_cpucapacity shortcut will
be used at first as the lowest domain where Energy-Aware Scheduling
(EAS) should be applied. For example, it is possible to apply EAS within
a socket on a multi-socket system, as long as each socket has an
asymmetric topology. Energy-aware cross-sockets wake-up balancing can
only happen if this_cpu and prev_cpu are in different sockets.

Change-Id: Ie777a1733991d40ce063b318e915199ba3c5416a
cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
Message-Id: <20181016101513.26919-7-quentin.perret@arm.com>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>

[diepquynh]: Backport to 4.4 EAS
2024-09-27 17:17:49 +03:00
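The shortcut described above can be pictured as a bottom-up walk over the sched_domain hierarchy, caching the lowest level whose flags include SD_ASYM_CPUCAPACITY. A simplified user-space illustration; the struct fields and the flag value are stand-ins, not the kernel's real definitions:

```c
/* Sketch: find the lowest sched_domain level with the asymmetric-
 * capacity flag set, walking from the smallest domain upward. */
#include <stddef.h>

#define SD_ASYM_CPUCAPACITY 0x0040 /* illustrative flag value */

struct sched_domain {
    unsigned int flags;
    struct sched_domain *parent; /* next (larger) level, NULL at top */
};

struct sched_domain *lowest_asym_level(struct sched_domain *sd)
{
    for (; sd; sd = sd->parent)
        if (sd->flags & SD_ASYM_CPUCAPACITY)
            return sd;
    return NULL;
}
```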