FROMLIST: sched/pi: Reweight fair_policy() tasks when inheriting prio
For fair tasks, inheriting the priority (nice) without reweighting is a NOP, as the task's share won't change.

This is visible when running with PTHREAD_PRIO_INHERIT, where fair tasks with low priority values are susceptible to starvation, leading to a PI-like impact on lock contention. The logic in rt_mutex will reset these low-priority fair tasks to nice 0, but without the additional reweight operation to actually update the weights, it doesn't have the desired impact of boosting them to run sooner/longer and release the lock.

Apply the reweight for fair_policy() tasks to achieve the desired boost for those low-nice-value tasks. Note that boost here means resetting their nice to 0, as this is what the current logic does for fair tasks. We need to re-instate ordering fair tasks by their priority on the waiter tree to ensure we inherit the top_waiter properly.

Handling of idle_policy() requires more code refactoring and is not handled yet. idle_policy() tasks are treated specially: they only run when the CPU is idle and get a hardcoded low weight value, so changing weights won't be enough without first promoting them to SCHED_OTHER.

Tested with a test program that creates three threads:

1. A main thread that spawns the high-prio and low-prio threads and busy loops.
2. A low-priority thread that holds a pthread_mutex() with the PTHREAD_PRIO_INHERIT protocol. Runs at nice +10. Busy loops after taking the lock.
3. A high-priority thread that takes the same pthread_mutex() with PTHREAD_PRIO_INHERIT, but made to start after the low-priority thread. Runs at nice 0. Should remain blocked by the low-priority thread.

All threads are pinned to CPU0.

Without the patch, the low-priority thread runs for only ~10% of the time, which is what's expected without it being boosted. With the patch, the low-priority thread runs for ~50%, which is what's expected if it gets boosted to nice 0.
I modified the test program logic afterwards to ensure that after releasing the lock the low-priority thread goes back to running for 10% of the time, and it does.

Bug: 263876335
Link: https://lore.kernel.org/lkml/20240514160711.hpdg64grdwc43ux7@airbuntu/
Reported-by: Yabin Cui <yabinc@google.com>
Signed-off-by: Qais Yousef <qyousef@layalina.io>
[Fix trivial conflict with vendor hook]
Signed-off-by: Qais Yousef <qyousef@google.com>
Change-Id: Ia954ee528495b5cf5c3a2157c68b4a757cef1f83
(cherry picked from commit 23ac35ed8fc6220e4e498a21d22a9dbe67e7da9b)
Signed-off-by: Qais Yousef <qyousef@google.com>
This commit is contained in:
parent b1e11ffd90
commit 4ed706c20a
@@ -326,17 +326,13 @@ static __always_inline bool unlock_rt_mutex_safe(struct rt_mutex_base *lock,

 static __always_inline int __waiter_prio(struct task_struct *task)
 {
-	int prio = task->prio;
 	int waiter_prio = 0;

 	trace_android_vh_rtmutex_waiter_prio(task, &waiter_prio);
 	if (waiter_prio > 0)
 		return waiter_prio;

-	if (!rt_prio(prio))
-		return DEFAULT_PRIO;
-
-	return prio;
+	return task->prio;
 }

 static __always_inline void
@@ -7196,8 +7196,10 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
 	} else {
 		if (dl_prio(oldprio))
 			p->dl.pi_se = &p->dl;
-		if (rt_prio(oldprio))
+		else if (rt_prio(oldprio))
 			p->rt.timeout = 0;
+		else if (!task_has_idle_policy(p))
+			reweight_task(p, prio - MAX_RT_PRIO);
 	}

 	__setscheduler_prio(p, prio);