* changes:
ufs: qcom: Set clock gating delay to 5ms when resume
net: qrtr: Do not call wait while holding spinlock
usb: dwc3: dwc3-msm-core: Avoid updating pre/post reset if no actconfig
For targets with Auto-Hibernate disabled, set the UFS clock
gating delay timer to 5ms when coming out of resume, regardless
of the current gear setting. This prevents UFS from staying active
longer than necessary when the load may be low during resume.
At the next clock scaling event, the clock gating delay timer will
be set based on the UFS load.
With this change, there is some power saving for certain device
use cases.
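A rough sketch of the idea follows. The field and helper names are taken
from the upstream ufshcd core (hba->clk_gating.delay_ms, hba->ahit,
ufshcd_is_auto_hibern8_supported()) and may differ in the downstream
ufs-qcom driver; this is not the exact change.

  /* Illustrative only; not the exact downstream change. */
  static void ufs_qcom_set_resume_gating_delay(struct ufs_hba *hba)
  {
          /* Only applies when Auto-Hibernate is disabled. */
          if (ufshcd_is_auto_hibern8_supported(hba) && hba->ahit)
                  return;

          /*
           * Use 5 ms regardless of the current gear; the next clock
           * scaling event re-evaluates the delay based on UFS load.
           */
          hba->clk_gating.delay_ms = 5;
  }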
Change-Id: I169ecf8d2c864851df80942746014a4352f1acbf
Signed-off-by: Bao D. Nguyen <quic_nguyenb@quicinc.com>
Signed-off-by: Linux Image Build Automation <quic_ibautomat@quicinc.com>
Do not call wait_for_completion() while holding a spinlock inside
qrtr_local_enqueue(). Doing so disables interrupts for a long time
if the rx_queue_has_space completion is never signaled from
qrtr_recvmsg() for the control port.
The wait mechanism was added to avoid dropping control packets sent
via the control port: the sender waits for space in the control
socket using the wait APIs, and the completion is signaled once the
control socket has space again.
Remove the wait that is performed while the spinlock is held.
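A minimal sketch of the intended ordering, assuming the wait is kept but
performed before the lock is taken. Only rx_queue_has_space comes from the
description above; the function, struct and lock names below are
illustrative, not the actual qrtr code.

  static void qrtr_enqueue_ctrl(struct qrtr_node *node, struct qrtr_tx_flow *flow)
  {
          unsigned long flags;

          /*
           * Sleep for space on the control socket before taking the
           * spinlock; waiting under the lock keeps interrupts disabled
           * until qrtr_recvmsg() signals the completion.
           */
          wait_for_completion(&flow->rx_queue_has_space);

          spin_lock_irqsave(&node->qrtr_tx_lock, flags);
          /* ...enqueue the control packet... */
          spin_unlock_irqrestore(&node->qrtr_tx_lock, flags);
  }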
Change-Id: I469c860e1a016348235f11e1e21ed97743325773
Signed-off-by: Sivaji Boddupilli <quic_boddupil@quicinc.com>
Signed-off-by: Linux Image Build Automation <quic_ibautomat@quicinc.com>
During device enumeration failures, udev->actconfig can be NULL.
Skip overriding the USB SND driver's pre/post reset callbacks in
these cases.
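A hedged sketch of the guard; the surrounding function name is an
assumption, only the udev->actconfig check comes from the description
above.

  static void qc_usb_audio_offload_override_reset(struct usb_device *udev)
  {
          /* Enumeration failed; there is no active configuration to walk. */
          if (!udev->actconfig)
                  return;

          /* ...override the USB SND driver's pre/post reset callbacks... */
  }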
Change-Id: I6ed6758259373b4b890c95cc39d8e63a004b7956
Signed-off-by: Wesley Cheng <quic_wcheng@quicinc.com>
Signed-off-by: Linux Image Build Automation <quic_ibautomat@quicinc.com>
MGLRU has an LRU list for each zone, for each type (anon/file), in each
generation:
long nr_pages[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
The min_seq (oldest generation) can progress independently for each
type but the max_seq (youngest generation) is shared for both anon and
file. This is to maintain a common frame of reference.
In order for eviction to advance the min_seq of a type, all the per-zone
lists in the oldest generation of that type must be empty.
The eviction logic only considers pages from eligible zones for
eviction or promotion.
  scan_folios() {
        ...
        for (zone = sc->reclaim_idx; zone >= 0; zone--) {
                ...
                sort_folio();           // Promote
                ...
                isolate_folio();        // Evict
        }
        ...
  }
Consider a system that has the movable zone configured and the default 4
generations. The current state of the system is as shown below
(only illustrating one type for simplicity):

  Type: ANON

        Zone    DMA32     Normal    Movable    Device

        Gen 0       0          0        4GB         0
        Gen 1       0        1GB        1MB         0
        Gen 2     1MB        4GB        1MB         0
        Gen 3     1MB        1MB        1MB         0
Now consider a GFP_KERNEL allocation request (eligible zone
index <= Normal): evict_folios() will return without doing any work,
since there are no pages to scan in the eligible zones of the oldest
generation. Reclaim will not make progress until it is triggered from a
ZONE_MOVABLE allocation request, which may not happen soon if there is a
lot of free memory in the movable zone. This can lead to OOM kills, even
though there is 1GB of pages in the Normal zone of Gen 1 that has not yet
been tried for reclaim.
This issue is not seen in the conventional active/inactive LRU since
there are no per-zone lists.
If there are no (or not enough) folios to scan in the eligible zones, move
folios from the ineligible zones (zone_index > reclaim_index) to the next
generation. This allows min_seq to progress and reclaim to proceed from
the next generation (Gen 1), as sketched below.
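A rough sketch of the idea, not the exact upstream diff;
lru_gen_promote_folio() is a hypothetical helper standing in for the
actual re-linking logic.

  static bool sort_ineligible_folio(struct lruvec *lruvec, struct folio *folio,
                                    struct scan_control *sc)
  {
          int zone = folio_zonenum(folio);

          /*
           * The folio sits in a zone above sc->reclaim_idx, so it cannot be
           * reclaimed for this request. Instead of leaving it on the oldest
           * generation (which stalls min_seq), move it to a younger
           * generation so min_seq can advance.
           */
          if (zone > sc->reclaim_idx) {
                  lru_gen_promote_folio(lruvec, folio);   /* hypothetical */
                  return true;
          }

          return false;
  }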
Qualcomm, MediaTek and Raspberry Pi [1] discovered this issue independently.
[1] https://github.com/raspberrypi/linux/issues/5395
Link: https://lkml.kernel.org/r/20230802025606.346758-1-kaleshsingh@google.com
Fixes: ac35a49023 ("mm: multi-gen LRU: minimal implementation")
Change-Id: I5bbf44bd7ffe42f4347df4be59a75c1603c9b947
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reported-by: Charan Teja Kalla <quic_charante@quicinc.com>
Reported-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Tested-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> [mediatek]
Tested-by: Charan Teja Kalla <quic_charante@quicinc.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Brian Geffon <bgeffon@google.com>
Cc: Jan Alexander Steffens (heftig) <heftig@archlinux.org>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Oleksandr Natalenko <oleksandr@natalenko.name>
Cc: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Steven Barrett <steven@liquorix.net>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Aneesh Kumar K V <aneesh.kumar@linux.ibm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
(cherry picked from commit 1462260adc41c5974362cb54ff577c2a15b8c7b2 https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-unstable)
Bug: 288383787
Bug: 291719697
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Signed-off-by: Guru Das Srinagesh <quic_gurus@quicinc.com>
There is a single global variable used as a mask of which CPUs
to manipulate for single-big-thread (sbt) purposes. This mask is
passed into the halt and resume APIs.
This is incorrect, because the halt/resume APIs can modify the
passed mask to indicate which CPUs were actually halted or unhalted
in a given operation. Since the mask gets changed, the code forgets
which CPUs to use for sbt.
Update the main sbt check routine so that it makes a local copy of the
CPUs to be used for sbt and passes that local copy to the halt/resume
routines.
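A minimal sketch of that change; the global mask, check routine and
halt/resume entry points below are invented names, not the actual sbt
code.

  static cpumask_t sbt_cpus;              /* global: CPUs designated for sbt */

  static void sbt_check(void)
  {
          cpumask_t local_cpus;

          /*
           * The halt/resume APIs modify the mask they are given to report
           * which CPUs actually changed state, so hand them a copy and keep
           * the global selection intact.
           */
          cpumask_copy(&local_cpus, &sbt_cpus);

          if (sbt_should_halt())                  /* hypothetical predicate */
                  sbt_halt_cpus(&local_cpus);     /* hypothetical API */
          else
                  sbt_resume_cpus(&local_cpus);   /* hypothetical API */
  }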
Change-Id: I9280f1bf600565dc63f0a9d9f84536d50e31fbd6
Signed-off-by: Stephen Dickey <quic_dickey@quicinc.com>
Add a tracepoint to observe the current state of sbt, as well as the
critical decision-making criteria.
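As a hedged illustration of what such a tracepoint could look like; the
event name and fields are assumptions, not the actual definition.

  TRACE_EVENT(sched_sbt_state,

          TP_PROTO(bool halted, unsigned int nr_heavy_tasks),

          TP_ARGS(halted, nr_heavy_tasks),

          TP_STRUCT__entry(
                  __field(bool,           halted)
                  __field(unsigned int,   nr_heavy_tasks)
          ),

          TP_fast_assign(
                  __entry->halted         = halted;
                  __entry->nr_heavy_tasks = nr_heavy_tasks;
          ),

          TP_printk("halted=%d nr_heavy_tasks=%u",
                    __entry->halted, __entry->nr_heavy_tasks)
  );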
Change-Id: Id24a14714ec9e28469f78be48f12b71725cc8bba
Signed-off-by: Stephen Dickey <quic_dickey@quicinc.com>
For automatic (heavy) pipeline searching, the prime CPU will
not get used when there are only a few pipeline entries. This
leaves the prime CPU unoccupied even when one of the pipeline
tasks may be prime worthy.
To handle this, detect whether any heavy tasks are already
prime worthy, and promote the prime-worthy tasks to prime.
Since pipeline CPUs are reassigned at every window rollover,
and automatic detection assigns CPUs from lowest to highest,
it is unnecessary to demote a task from prime, as this will
happen regardless.
The same issue exists for pipeline (manual) searching and must
be handled.
For the manual case, when the number of pipeline tasks is small
but a prime_wts has been found, determine whether the task on prime
is prime worthy. If it is not, it must be demoted to non-prime.
Any remaining prime-worthy task must then be found.
Change-Id: I15c9417a14c5860bf48edc1c3443fdc0b1255f42
Signed-off-by: Stephen Dickey <quic_dickey@quicinc.com>
In preparation for finding and promoting (and demoting) prime-worthy
(and unworthy) tasks, create an API that can be reused to find the
pipeline task currently assigned to prime and the pipeline task with
the maximum demand.
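A sketch of what such a helper could look like. Apart from the prime_wts
naming used elsewhere in this series, the structures, fields, array and
bound below are assumptions.

  static void pipeline_find_prime_and_max(struct walt_task_struct **prime_wts,
                                          struct walt_task_struct **max_wts)
  {
          struct walt_task_struct *wts;
          int i;

          *prime_wts = NULL;
          *max_wts = NULL;

          for (i = 0; i < MAX_PIPELINE_TASKS; i++) {      /* assumed bound */
                  wts = pipeline_tasks[i];                /* assumed array */
                  if (!wts)
                          continue;

                  /* Task currently assigned to the prime CPU, if any. */
                  if (wts->pipeline_cpu == prime_cpu)     /* assumed fields */
                          *prime_wts = wts;

                  /* Task with the largest demand seen so far. */
                  if (!*max_wts || wts->coloc_demand > (*max_wts)->coloc_demand)
                          *max_wts = wts;
          }
  }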
Change-Id: I3b2334482b74c62598a2449e8938c920cfda85b2
Signed-off-by: Stephen Dickey <quic_dickey@quicinc.com>
Ensure that a pipeline task whose demand is greater than that of another
pipeline task running on prime only swaps onto prime after 4 windows.
Change-Id: I28b3e46f476f4f09682ae2caffc2cae04d76fae5
Signed-off-by: Shaleen Agrawal <quic_shalagra@quicinc.com>
Clean up the enum of pipeline types, in an effort to
simplify its usage in follow-up changes.
Change-Id: I451f04fc0b0f4d5f4cf98f071961e1ed451e94cc
Signed-off-by: Shaleen Agrawal <quic_shalagra@quicinc.com>
Since coloc demand represents demand averaged across the history of the
past few windows, the growth of pipeline task demands will be steadier,
theoretically resulting in less bouncing between prime and other CPUs.
Change-Id: I9a5c92fbc1c26591889e51be1e65273ebe2d27b4
Signed-off-by: Shaleen Agrawal <quic_shalagra@quicinc.com>
pipeline_set_boost() does not properly handle the case where manual
and auto pipeline hinting are enabled or disabled independently.
For example, consider this series of events:

  pipeline_set_boost(true, MANUAL_PIPELINE);
    - auto pipeline enabled
  pipeline_set_boost(true, AUTO_PIPELINE);
    - auto pipeline disabled
  pipeline_set_boost(false, MANUAL_PIPELINE);

With the above, pipeline boost remains enabled, even though manual
pipeline is no longer requesting that it be enabled and auto pipeline
is disabled.
Correct the code to track the state of the AUTO and MANUAL pipelines
independently, and use that information to decide whether boost is
currently requested.
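One way to express that, as a sketch: the request bookkeeping and the
apply_pipeline_boost() helper are assumptions, while AUTO_PIPELINE and
MANUAL_PIPELINE come from the example above.

  static unsigned int pipeline_boost_requests;    /* one bit per pipeline type */

  static void pipeline_set_boost(bool enable, int type)
  {
          if (enable)
                  pipeline_boost_requests |= BIT(type);
          else
                  pipeline_boost_requests &= ~BIT(type);

          /* Boost stays requested only while at least one type wants it. */
          apply_pipeline_boost(pipeline_boost_requests != 0);     /* assumed helper */
  }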
Change-Id: Ia31eb8cd45b417f55f1e6827953073f708820ce6
Signed-off-by: Stephen Dickey <quic_dickey@quicinc.com>