Update the UFS tracepoint symbol list for QCOM.
Bug: 191951106
Signed-off-by: Asutosh Das <asutoshd@codeaurora.org>
Change-Id: Ia95f3bc6d02775fb435e5fd854e355838e8500b1
For memory analysis, we need to know the total memory consumption of the
dma-buf heaps, but other modules currently cannot get the size of the
deferred-free list.
Export get_freelist_nr_pages() so that other modules can query the total
size of the deferred-free list.
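As a rough illustration (not part of this patch), a vendor module could then
read the backlog like this; the header location and return type of the
exported function are assumptions:

  /* Hypothetical consumer of the newly exported symbol. */
  #include <linux/module.h>
  #include <linux/mm.h>            /* PAGE_SHIFT */

  /* Declaration assumed to be provided by the dma-buf heap headers. */
  extern long get_freelist_nr_pages(void);

  static int __init freelist_report_init(void)
  {
          long nr_pages = get_freelist_nr_pages();

          pr_info("dma-buf deferred-free backlog: %ld pages (%ld kB)\n",
                  nr_pages, nr_pages << (PAGE_SHIFT - 10));
          return 0;
  }
  module_init(freelist_report_init);

  MODULE_LICENSE("GPL");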
Bug: 192041645
Change-Id: Icaa1b98e9ab7e330141a92ad147a4e2150c2534b
Signed-off-by: Guangming Cao <Guangming.Cao@mediatek.com>
This patch is based on 1699785. It extends the related interfaces to
support request-based operations. The extension fields in the parameters
of VIDIOC_SUBDEV_S_SELECTION, VIDIOC_SUBDEV_S_FMT and
VIDIOC_SUBDEV_S_FRAME_INTERVAL are used to carry the request fd.
The driver uses media_request_get_by_fd() to retrieve the media request and
saves the pending change in it, so that the pending change can then be
applied in the req_queue() callback.
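A minimal sketch of that pattern (not the actual driver code; the helper
name and the pending-format storage are assumptions):

  #include <linux/err.h>
  #include <media/media-device.h>
  #include <media/media-request.h>
  #include <media/v4l2-mediabus.h>

  /* Stash a pending format against the request named by 'request_fd';
   * the driver applies it later from its req_queue() callback. */
  static int my_stash_pending_fmt(struct media_device *mdev, int request_fd,
                                  const struct v4l2_mbus_framefmt *fmt,
                                  struct v4l2_mbus_framefmt *pending)
  {
          struct media_request *req;

          req = media_request_get_by_fd(mdev, request_fd);
          if (IS_ERR(req))
                  return PTR_ERR(req);

          *pending = *fmt;        /* remembered until req_queue() runs */
          media_request_put(req);
          return 0;
  }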
Bug: 191903073
CR-Id:
Signed-off-by: Louis Kuo <louis.kuo@mediatek.com>
Change-Id: Idb7921724cf8febc44b01880a4ad8b7c9272ba6a
When CONFIG_TRACEPOINTS or CONFIG_ANDROID_VENDOR_HOOKS is not set, there
is a build error after commit 01f2392e13 ("ANDROID: logbuf: Add new
logbuf vendor hook to support pr_cont()"):
kernel/printk/printk.c:1962:4: error: implicit declaration of function
'trace_android_vh_logbuf_pr_cont'
[-Werror,-Wimplicit-function-declaration]
trace_android_vh_logbuf_pr_cont(&r, text_len);
^
1 error generated.
Remove the #if directive so that this code always builds properly, which
is possible after commit ba75b92fef ("ANDROID: simplify vendor hooks
for non-GKI builds").
Change-Id: Icc7f55af1becab5a8833b0651402845559b6b56f
Fixes: 01f2392e13 ("ANDROID: logbuf: Add new logbuf vendor hook to support pr_cont()")
Link: https://github.com/ClangBuiltLinux/continuous-integration2/runs/2948254099
Suggested-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Allow io-coherent devices to use an inner writeback read/write allocate,
outer writeback read allocate, no-write allocate cache policy. The outer
cache policy affects the behavior of a system cache, at least on qcom
boards which have one.
The rationale follows that of IOMMU_SYS_CACHE_ONLY_NWA. Certain GPU
use cases perform better when using a no-write-allocate policy.
Rename the IOMMU_SYS_CACHE_* flags to better reflect that they are not
mutually exclusive with IOMMU_CACHE.
Bug: 191811876
Change-Id: Ic91616a148f39fead008a5b87a54ffd781fee734
Signed-off-by: Patrick Daly <pdaly@codeaurora.org>
This patch uploads the initial symbol list for the Exynosauto SoC.
To make it easy to see what has been added on top of the GKI symbols,
this list does not include the full set of symbols, so nothing has been
added to the GKI ABI symbols.
Bug: 192103187
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
Change-Id: Iae46da79e06d1081199a8db014b892c74887cbf8
There are debugging modules that monitor the memory usage of tasks
and report memory parameters in some situations (such as OOM).
Bug: 189595202
Change-Id: I6cc405b0f4cbe1706857fc3b2f8da83ea981818d
Signed-off-by: Georgi Djakov <quic_c_gdjako@quicinc.com>
The remoteproc coredump APIs are currently only part of the internal
remoteproc header. This prevents the remoteproc platform drivers from
using these APIs when needed. This change moves the rproc_coredump()
and rproc_coredump_cleanup() APIs to the linux header and marks them
as exported symbols.
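As a rough sketch of the intended use (the stop-path placement and naming
are assumptions, not part of this patch):

  #include <linux/remoteproc.h>

  /* A platform driver's crash/stop handler can now trigger a coredump
   * directly, using segments it registered earlier with
   * rproc_coredump_add_segment(). */
  static int my_rproc_stop(struct rproc *rproc)
  {
          rproc_coredump(rproc);

          /* vendor-specific shutdown of the remote core would follow here */
          return 0;
  }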
Signed-off-by: Siddharth Gupta <sidgup@codeaurora.org>
Bug: 188764827
Link: https://lore.kernel.org/linux-remoteproc/1623722930-29354-2-git-send-email-sidgup@codeaurora.org/
Change-Id: I8333774acb748fae10e0fd5146b747c4cf2ea6c7
Signed-off-by: Siddharth Gupta <quic_sidgup@quicinc.com>
select_fallback_rq() must return a cpu that is valid for the task.
However, when nid is not -1, it skips checking for
task_cpu_possible_mask().
This causes a problem when execve-ing 32-bit apps on an asymmetric
system where not all cpus are 32-bit capable. During execve the task is
marked as 32-bit long before its affinity mask is restricted.
If the cpu goes offline during this window, select_fallback_rq()
could return a 64-bit-only cpu, which __migrate_tasks()/is_cpu_allowed()
rejects.
migrate_tasks() will therefore continue to pick the same task
repeatedly, with __migrate_tasks() rejecting the cpu chosen
by select_fallback_rq() every time, leading to an infinite loop.
Correct the issue by updating select_fallback_rq() for the case
where nid is not -1, ensuring that the returned cpu is always
valid for this task.
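The shape of the intended check, sketched from the description above (not
necessarily the literal hunk; the surrounding variables are those already
used in select_fallback_rq()):

  /* kernel/sched/core.c, select_fallback_rq() -- illustrative only */
  if (nid != -1) {
          nodemask = cpumask_of_node(nid);

          /* Look for an allowed, online CPU in the same node. */
          for_each_cpu(dest_cpu, nodemask) {
                  if (!cpu_active(dest_cpu))
                          continue;
                  /* never hand a 64-bit-only CPU to a 32-bit task */
                  if (!cpumask_test_cpu(dest_cpu, task_cpu_possible_mask(p)))
                          continue;
                  if (cpumask_test_cpu(dest_cpu, p->cpus_ptr))
                          return dest_cpu;
          }
  }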
Bug: 192050156
Change-Id: Ia073a8395a02485f6d1c1daa0f3ce9e2029cb1f4
Signed-off-by: Stephen Dickey <dickey@codeaurora.org>
In order to update cpufreq, vendor modules invoke cpufreq_update_util(),
but when we build our modules, the following error is reported:
ERROR: modpost: "cpufreq_update_util_data" [xxx.ko] undefined!
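For illustration, a vendor module could then reach the governor callback
along these lines; this is a sketch under stated assumptions (the flag
value and the calling context are not dictated by this change):

  #include <linux/percpu.h>
  #include <linux/rcupdate.h>
  #include <linux/sched/cpufreq.h>

  /* Mirrors the scheduler-internal declaration that the export makes usable. */
  DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);

  static void my_kick_cpufreq(int cpu, u64 time)
  {
          struct update_util_data *data;

          rcu_read_lock();
          data = rcu_dereference(*per_cpu_ptr(&cpufreq_update_util_data, cpu));
          if (data)
                  data->func(data, time, 0);      /* no special flags */
          rcu_read_unlock();
  }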
Bug: 192218676
Signed-off-by: Liujie Xie <xieliujie@oppo.com>
Change-Id: Ib1da70229f04b08d8d812d065021dec0bf891e0e
This is technically a backwards incompatible change in behaviour, but I'm
going to argue that it is very unlikely to break things, and likely to fix
*far* more than it breaks.
In no particular order, various reasons follow:
(a) I've long had a bug assigned to myself to debug a super rare kernel crash
on Android Pixel phones which can (per stacktrace) be traced back to BPF clat
IPv6 to IPv4 protocol conversion causing some sort of ugly failure much later
on during transmit deep in the GSO engine, AFAICT precisely because of this
change to gso_size, though I've never been able to manually reproduce it. I
believe it may be related to the particular network offload support of attached
USB ethernet dongle being used for tethering off of an IPv6-only cellular
connection. The reason might be we end up with more segments than max permitted,
or with a GSO packet with only one segment... (either way we break some
assumption and hit a BUG_ON)
(b) There is no check that the gso_size is > 20 when reducing it by 20, so we
might end up with a negative (or underflowing) gso_size or a gso_size of 0.
This can't possibly be good. Indeed this is probably somehow exploitable (or
at least can result in a kernel crash) by delivering crafted packets and perhaps
triggering an infinite loop or a divide by zero... As a reminder: gso_size (MSS)
is related to MTU, but not directly derived from it: gso_size/MSS may be
significantly smaller than one would get by deriving from local MTU. And on
some NICs (which do loose MTU checking on receive, it may even potentially be
larger, for example my work pc with 1500 MTU can receive 1520 byte frames [and
sometimes does due to bugs in a vendor plat46 implementation]). Indeed even just
going from 21 to 1 is potentially problematic because it increases the number
of segments by a factor of 21 (think DoS, or some other crash due to too many
segments).
(c) It's always safe to not increase the gso_size, because it doesn't result in
the max packet size increasing. So the skb_increase_gso_size() call was always
unnecessary for correctness (and outright undesirable, see later). As such the
only part which is potentially dangerous (ie. could cause backwards compatibility
issues) is the removal of the skb_decrease_gso_size() call.
(d) If the packets are ultimately destined to the local device, then there is
absolutely no benefit to playing around with gso_size. It only matters if the
packets will egress the device. ie. we're either forwarding, or transmitting
from the device.
(e) This logic only triggers for packets which are GSO. It does not trigger for
skbs which are not GSO. It will not convert a non-GSO MTU sized packet into a
GSO packet (and you don't even know what the MTU is, so you can't even fix it).
As such your transmit path must *already* be able to handle an MTU 20 bytes
larger than your receive path (for IPv4 to IPv6 translation) - and indeed 28
bytes larger due to IPv4 fragments. Thus removing the skb_decrease_gso_size()
call doesn't actually increase the size of the packets your transmit side must
be able to handle. ie. to handle non-GSO max-MTU packets, the IPv4/IPv6 device/
route MTUs must already be set correctly. Since for example with an IPv4 egress
MTU of 1500, IPv4 to IPv6 translation will already build 1520 byte IPv6 frames,
so you need a 1520 byte device MTU. This means if your IPv6 device's egress
MTU is 1280, your IPv4 route must be 1260 (and actually 1252, because of the
need to handle fragments). This is to handle normal non-GSO packets. Thus the
reduction is simply not needed for GSO packets, because when they're correctly
built, they will already be the right size.
(f) TSO/GSO should be able to exactly undo GRO: the number of packets (TCP
segments) should not be modified, so that TCP's MSS counting works correctly
(this matters for congestion control). If protocol conversion changes the
gso_size, then the number of TCP segments may increase or decrease. Packet loss
after protocol conversion can result in partial loss of MSS segments that the
sender sent. How's the sending TCP stack going to react to receiving ACKs/SACKs
in the middle of the segments it sent?
(g) skb_{decrease,increase}_gso_size() are already no-ops for GSO_BY_FRAGS
case (besides triggering WARN_ON_ONCE). This means you already cannot guarantee
that gso_size (and thus resulting packet MTU) is changed. ie. you must assume
it won't be changed.
(h) changing gso_size is outright buggy for UDP GSO packets, where framing
matters (I believe that's also the case for SCTP, but it's already excluded
by [g]). So the only remaining case is TCP, which also doesn't want it
(see [f]).
(i) see also the reasoning on the previous attempt at fixing this
(commit fa7b83bf3b156c767f3e4a25bbf3817b08f3ff8e) which shows that the current
behaviour causes TCP packet loss:
In the forwarding path GRO -> BPF 6 to 4 -> GSO for TCP traffic, the
coalesced packet payload can be > MSS, but < MSS + 20.
bpf_skb_proto_6_to_4() will upgrade the MSS and it can be > the payload
length. After then tcp_gso_segment checks for the payload length if it
is <= MSS. The condition is causing the packet to be dropped.
  tcp_gso_segment():
          [...]
          mss = skb_shinfo(skb)->gso_size;
          if (unlikely(skb->len <= mss))
                  goto out;
          [...]
Thus changing the gso_size is simply a very bad idea. Increasing is unnecessary
and buggy, and decreasing can go negative.
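Concretely, the resulting behaviour in the 6-to-4 direction looks roughly
like this (a sketch of the post-fix shape, not the literal diff): keep the
gso_type fixup and force resegmentation, but leave gso_size alone:

  if (skb_is_gso(skb)) {
          struct skb_shared_info *shinfo = skb_shinfo(skb);

          /* SKB_GSO_TCPV6 needs to be changed into SKB_GSO_TCPV4. */
          if (shinfo->gso_type & SKB_GSO_TCPV6) {
                  shinfo->gso_type &= ~SKB_GSO_TCPV6;
                  shinfo->gso_type |= SKB_GSO_TCPV4;
          }

          /* gso_size is intentionally left untouched. */

          /* Header must be checked, and gso_segs recomputed. */
          shinfo->gso_type |= SKB_GSO_DODGY;
          shinfo->gso_segs = 0;
  }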
Fixes: 6578171a7f ("bpf: add bpf_skb_change_proto helper")
Signed-off-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Dongseok Yi <dseok.yi@samsung.com>
Cc: Willem de Bruijn <willemb@google.com>
Link: https://lore.kernel.org/bpf/CANP3RGfjLikQ6dg=YpBU0OeHvyv7JOki7CyOUS9modaXAi-9vQ@mail.gmail.com
Link: https://lore.kernel.org/bpf/20210617000953.2787453-2-zenczykowski@gmail.com
(cherry picked from commit 364745fbe981a4370f50274475da4675661104df https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/commit/?id=364745fbe981a4370f50274475da4675661104df )
Test: builds, TreeHugger
Bug: 188690383
Signed-off-by: Maciej Żenczykowski <maze@google.com>
Change-Id: I0ef3174cbd3caaa42d5779334a9c0bfdc9ab81f5
This patch avoids a KMI change, so it should later be reverted to sync
with upstream in the next KMI update.
Bug: 192095860
Signed-off-by: Jaegeuk Kim <jaegeuk@google.com>
Change-Id: If861ecde381f23b5d5f18005063ec55356673fdb
This reverts commit ae618c699c, which causes the device to get stuck
early in boot.
Bug: 192088222
Signed-off-by: Jaegeuk Kim <jaegeuk@google.com>
Change-Id: I5117e7f8aaa9433b74f1531619cdc1b687d02b41
Logging the callback's symbolic name generates too many different
messages, making Abort analysis miss the big trends.
Stick to the console-reported message, which provides sufficient information.
Bug: 120445600
Signed-off-by: Thierry Strudel <tstrudel@google.com>
Change-Id: Ic0ea662a60919454060e3a085aeabd8a4099e0b4
Add the rwsem list add vendor hook symbol which is needed for
vendor modules.
Bug: 192041655
Signed-off-by: Huang Yiwei <hyiwei@codeaurora.org>
Change-Id: I838fbb9d067d940e962eff94e8c875c30e153ee1
When a watchdog timeout or ANR occurs, we need to read the
dev/binderfs/binder_logs/proc/pid or dev/binderfs/binder_logs/state
node to learn how much time binder calls are taking.
Add this elapsed-time information for binder transactions.
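As an illustration of the kind of information added (the helper and the
recorded start time are hypothetical, not the actual patch):

  #include <linux/ktime.h>
  #include <linux/seq_file.h>

  /* 'start_time' would be captured when the transaction is sent, and the
   * elapsed time printed when dumping binder_logs state. */
  static void my_print_elapsed(struct seq_file *m, ktime_t start_time)
  {
          s64 elapsed_ms = ktime_to_ms(ktime_sub(ktime_get(), start_time));

          seq_printf(m, " elapsed:%lldms", elapsed_ms);
  }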
Bug: 190413570
Signed-off-by: zhang chuang <zhangchuang3@xiaomi.com>
Change-Id: I0423d4e821d5cd725a848584133dc7245cbc233a
* aosp/upstream-f2fs-stable-linux-5.10.y:
Revert "f2fs: avoid attaching SB_ACTIVE flag during mount/remount"
f2fs: remove false alarm on iget failure during GC
f2fs: enable extent cache for compression files in read-only
f2fs: fix to avoid adding tab before doc section
f2fs: introduce f2fs_casefolded_name slab cache
f2fs: swap: support migrating swapfile in aligned write mode
f2fs: swap: remove dead codes
f2fs: compress: add compress_inode to cache compressed blocks
f2fs: clean up /sys/fs/f2fs/<disk>/features
f2fs: add pin_file in feature list
f2fs: Advertise encrypted casefolding in sysfs
f2fs: Show casefolding support only when supported
f2fs: support RO feature
f2fs: logging neatening
Bug: 186107892
Bug: 190759634
Bug: 190517210
Signed-off-by: Jaegeuk Kim <jaegeuk@google.com>
Change-Id: I8eb93d8a43304b98166676da52a9c2434b15b942
An iova is identified in the alloc vendor hook by its pfn (iova->pfn_lo)
and in the free vendor hook by its address (iova->pfn_lo << iova_shift(iovad)).
Change the alloc vendor hook to use the address as well, for consistency.
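For reference, the identifier both hooks now pass (context illustrative):

  /* pfn_lo converted to the iova's starting address, as the free hook
   * already does */
  dma_addr_t addr = (dma_addr_t)iova->pfn_lo << iova_shift(iovad);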
Bug: 187861158
Change-Id: I8255f3e5899008b80a9f8ed960e2ba948ba13cc2
Signed-off-by: Guangming Cao <Guangming.Cao@mediatek.com>
Pre and post tracepoints in force_compatible_cpus_allowed_ptr() need
to be restricted hooks so that they can sleep.
The old non-restricted versions need to stay in place temporarily for
KMI stability. They will be removed by aosp/1742588.
Bug: 187917024
Change-Id: If630554b1c8fa2e8ccb79c89945c55e17756e6a8
Signed-off-by: Shaleen Agrawal <shalagra@codeaurora.org>
Update the ETE yaml file to bring it in line with upstream.
The change is minor and updates the ports property.
Bug: 174685394
Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Change-Id: Ia5bb2ddcbdfa171e998f4f023a0a164ac14f5dd3
Mark the buffer as truncated in the IRQ handler, plus minor cleanups
picked up along the way.
Bug: 174685394
Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Change-Id: I36111a14ef066b579a28c39e265f3d601a82e18a
Make the comment in line with the upstream version.
Bug: 174685394
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Change-Id: I8c239d37c96c0c8de1261df6d2a5a2defd43ccfb
For an nVHE host, EL2 must allow the EL1&0 translation
regime for the TraceBuffer (MDCR_EL2.E2TB == 0b11). This must
be saved/restored over a trip to the guest. Also, before
entering the guest, we must flush any trace data if the
TRBE was enabled, and we must prohibit the generation
of trace while we are in EL1 by clearing TRFCR_EL1.
For VHE, EL2 must prevent EL1 access to the Trace
Buffer.
The MDCR_EL2 bit definitions for TRBE are available here:
https://developer.arm.com/documentation/ddi0601/2020-12/AArch64-Registers/
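A simplified sketch of the nVHE save path described above (register and
macro spellings may differ slightly from the actual hunk):

  static void __debug_save_trace(u64 *trfcr_el1)
  {
          *trfcr_el1 = 0;

          /* Nothing to do if the trace buffer is not enabled. */
          if (!(read_sysreg_s(SYS_TRBLIMITR_EL1) & TRBLIMITR_ENABLE))
                  return;

          /* Prohibit trace generation while we are in the guest. */
          *trfcr_el1 = read_sysreg_s(SYS_TRFCR_EL1);
          write_sysreg_s(0, SYS_TRFCR_EL1);
          isb();

          /* Drain the trace buffer to memory before entering the guest. */
          tsb_csync();
          dsb(nsh);
  }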
Bug: 174685394
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210405164307.1720226-8-suzuki.poulose@arm.com
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
(cherry picked from commit a1319260bf62951e279ea228f682bf4b8834a3c2)
[Conflict:
- arch/arm64/kvm/debug.c
- arch/arm64/kvm/hyp/nvhe/debug-sr.c
Trivial conflicts except for one which tried to replace
kvm_arm_setup_mdcr_el2(vcpu);
we rejected that hunk.]
Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Change-Id: I88316a7b2d7b7b23a01914a7de0a2898c4a66afa
At the moment, we check the availability of SPE on the given
CPU (i.e, SPE is implemented and is allowed at the host) during
every guest entry. This can be optimized a bit by moving the
check to vcpu_load time and recording the availability of the
feature on the current CPU via a new flag. This will also be useful
for adding the TRBE support.
Bug: 174685394
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Alexandru Elisei <Alexandru.Elisei@arm.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210405164307.1720226-7-suzuki.poulose@arm.com
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
(cherry picked from commit d2602bb4f5a450642b96d467e27e6d5d3ef7fa54)
[Conflict in: arch/arm64/kvm/hyp/nvhe/debug-sr.c
Trivial conflict because of an additional __debug_save_trace() call
since a previous version of the series was backported already.]
Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Change-Id: I4ca2bb14f7b647a1e118ea4a8c4313154a482685
Rather than falling back to an "unhandled access", inject an explicit
"undefined access" for TRFCR_EL1 accesses from the guest.
Bug: 174685394
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210405164307.1720226-6-suzuki.poulose@arm.com
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
(cherry picked from commit cc427cbb15375f1229e78908064cdff98138b8b1)
Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Change-Id: If0efe01af21894ed29cb85338f802ac1f6a194fc
Add SPRD_TIMER config into GKI for the same reason as other vendors have
described before.
Bug: 188373235
Change-Id: I70f1296f11759d9010252a52a88f9329744172f6
Signed-off-by: Orson Zhai <orson.zhai@unisoc.com>
Since LLVM commit 3787ee4, the '-stack-alignment' flag has been dropped
[1], leading to the following error message when building a LTO kernel
with Clang-13 and LLD-13:
ld.lld: error: -plugin-opt=-: ld.lld: Unknown command line argument
'-stack-alignment=8'. Try 'ld.lld --help'
ld.lld: Did you mean '--stackrealign=8'?
It also appears that the '-code-model' flag is not necessary anymore
starting with LLVM-9 [2].
Drop '-code-model' and make '-stack-alignment' conditional on LLD < 13.0.0.
These flags were necessary because they were not encoded in the
IR properly, so the link would restart optimizations without them. Now
they are properly encoded in the IR, and these flags, which expose
implementation details, are no longer necessary.
[1] https://reviews.llvm.org/D103048
[2] https://reviews.llvm.org/D52322
Cc: stable@vger.kernel.org
Link: https://github.com/ClangBuiltLinux/linux/issues/1377
Signed-off-by: Tor Vic <torvic9@mailbox.org>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/f2c018ee-5999-741e-58d4-e482d5246067@mailbox.org
(cherry picked from commit 2398ce80152aae33b9501ef54452e09e8e8d4262)
Change-Id: Icebcd5669e851e2c26e9f807b9d1aeb86e95dcea
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
longterm_pinner/alloc_contig_failed should be 444
so that everyone can read them, and threshold/failure_tracking
should be 644 to allow read/write for root.
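In terms of the debugfs API, that amounts to something like the following
(the parent dentry and fops names are placeholders):

  debugfs_create_file("longterm_pinner", 0444, pp_debugfs_root, NULL,
                      &longterm_pinner_fops);
  debugfs_create_file("alloc_contig_failed", 0444, pp_debugfs_root, NULL,
                      &alloc_contig_failed_fops);
  debugfs_create_file("threshold", 0644, pp_debugfs_root, NULL,
                      &threshold_fops);
  debugfs_create_file("failure_tracking", 0644, pp_debugfs_root, NULL,
                      &failure_tracking_fops);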
Bug: 191827366
Signed-off-by: Minchan Kim <minchan@google.com>
Change-Id: I5493c6ae6d39b692282e5428416da80a7d555aa0
PSI accounts stalls for each cgroup separately and aggregates it at each
level of the hierarchy. This causes additional overhead with psi_avgs_work
being called for each cgroup in the hierarchy. psi_avgs_work has been
highly optimized; however, on systems with a large number of cgroups the
overhead becomes noticeable.
Systems which use PSI only at the system level could avoid this overhead
if PSI can be configured to skip per-cgroup stall accounting.
Add "cgroup_disable=pressure" kernel command-line option to allow
requesting system-wide only pressure stall accounting. When set, it
keeps system-wide accounting under /proc/pressure/ but skips accounting
for individual cgroups and does not expose PSI nodes in cgroup hierarchy.
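For example, booting with:

  cgroup_disable=pressure

keeps the system-wide files under /proc/pressure/ working, while the
per-cgroup PSI nodes (cpu.pressure, memory.pressure, io.pressure) are not
exposed in the cgroup hierarchy.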
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Tejun Heo <tj@kernel.org>
Link: https://lore.kernel.org/patchwork/patch/1435705
(cherry picked from commit 3958e2d0c34e18c41b60dc01832bd670a59ef70f
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git tj)
Bug: 178872719
Bug: 191734423
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Change-Id: Ifc8fbc52f9a1131d7c2668edbb44c525c76c3360
Flag the rescheduling IPI as 'raw', making sure such interrupts
skip both tick management and irqtime accounting.
Bug: 191808738
Link: https://lore.kernel.org/lkml/20201124141449.572446-5-maz@kernel.org/
Change-Id: Ia9c2b989621eef6f49a1c30a08302a448ae286e6
Signed-off-by: Marc Zyngier <maz@kernel.org>
[minor port to 5.10]
Signed-off-by: Stephen Dickey <dickey@codeaurora.org>
Flag the rescheduling IPI as 'raw', making sure such interrupts
skip both tick management and irqtime accounting.
Bug: 191808738
Link: https://lore.kernel.org/lkml/20201124141449.572446-4-maz@kernel.org/
Change-Id: Ibeda817de7618a98d457d09a6fa7e54f867b72f0
Signed-off-by: Marc Zyngier <maz@kernel.org>
[minor port to 5.10]
Signed-off-by: Stephen Dickey <dickey@codeaurora.org>
Some interrupts (such as the rescheduling IPI) rely on not going through
the irq_enter()/irq_exit() calls. To distinguish such interrupts, add
a new IRQ flag that allows the low-level handling code to sidestep the
enter()/exit() calls.
Only the architecture code is expected to use this. It will do the wrong
thing on normal interrupts. Note that this is a band-aid until we can
move to some more correct infrastructure (such as kernel/entry/common.c).
Bug: 191808738
Link: https://lore.kernel.org/lkml/20201124141449.572446-3-maz@kernel.org/
Change-Id: I0609a8b689219ba9e769c8b9f7fcf1e77a0ff1ca
Signed-off-by: Marc Zyngier <maz@kernel.org>
[minor port to 5.10]
Signed-off-by: Stephen Dickey <dickey@codeaurora.org>
Some arch-specific flags need to be set/cleared, but not exposed to
random device drivers. Introduce a new helper (__irq_modify_status())
that takes an arbitrary mask, and rewrite irq_modify_status() to use
this new helper.
No functional change.
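Sketched, the relationship between the two helpers becomes (simplified
from the description above, not the full implementation):

  /* Driver-facing helper: restricted to the documented modify mask. */
  void irq_modify_status(unsigned int irq, unsigned long clr, unsigned long set)
  {
          __irq_modify_status(irq, clr, set, IRQF_MODIFY_MASK);
  }

  /* __irq_modify_status(irq, clr, set, mask) applies only the bits covered
   * by 'mask' and is meant for architecture code only. */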
Bug: 191808738
Link: https://lore.kernel.org/lkml/20201124141449.572446-5-maz@kernel.org/
Change-Id: I2c2c0d6599d0ab39fad22462bf4c87694362fba8
Signed-off-by: Marc Zyngier <maz@kernel.org>
[minor port to 5.10]
Signed-off-by: Stephen Dickey <dickey@codeaurora.org>