This reverts commit 4d1ac6a160 as it
causes merge conflicts and build failures in 6.1-rc1 due to upstream
cpuset changes. If this is needed in 6.1, it will have to be reworked
and forward-ported in a different way, as the structure information has
changed in 6.1.
Bug: 174125747
Cc: Satya Durga Srinivasu Prabhala <satyap@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Iedc328c351830b8b338efb4642aac656b00c5a2a
This reverts commit c8dc4422c6. It causes
merge conflicts with 6.1-rc1. If this is still needed in 6.1, it can
come back after 6.1-rc1 is merged.
Bug: 174125747
Bug: 120444281
Cc: Dmitry Shmidt <dimitrysh@google.com>
Cc: Amit Pundir <amit.pundir@linaro.org>
Cc: Satya Durga Srinivasu Prabhala <satyap@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I8b040cf6ba360aad710eb462aabf76d87e803b23
This reverts commit e2db2cceeb.
It causes a build error in 6.1-rc1 as the function being called is no
longer in the kernel tree at all. If this is still needed for 6.1, it
can come back in a workable form.
Bug: 77139736
Bug: 120440023
Cc: Colin Cross <ccross@android.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I58cee0c364a54e2662efa73e084261a2490f13ba
The merge of the upstream scheduler changes in 6.1-rc1 caused some .h
files to be moved around, which broke the sched/vendor_hooks.c file. Fix
this up by adding the needed .h file to vendor_hooks.c.
Fixes: c0ccaa13ae ("Merge 30c999937f ("Merge tag 'sched-core-2022-10-07' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip") into android-mainline")
Cc: Todd Kjos <tkjos@google.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I7f52882f3285eec41443bebea373159e08af2929
This reverts commit 6f8adeb751 as it
causes a lot of merge conflicts in the 6.1-rc1 scheduler merge. If this
is still needed, it can be brought back after 6.1-rc1 is merged.
Bug: 200103201
Bug: 243793188
Cc: Ashay Jaiswal <quic_ashayj@quicinc.com>
Cc: Sai Harshini Nimmala <quic_snimmala@quicinc.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I11391151c662920771bb8d4d605bfef728a7a6e8
This reverts commit 1eea1cbdd3 as it
causes a lot of merge conflicts with the scheduler changes in 6.1-rc1.
If this is still needed, it can be added back after 6.1-rc1 is merged.
Bug: 184219858
Cc: Choonghoon Park <choong.park@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I5b92fefb21e605a1edb36f4fb89e2334ac489b4e
This reverts commit efd8dbe42d as it
conflicts heavily with the scheduler changes in 6.1-rc1. If this
is still needed, it can be added back after the 6.1-rc1 merge is
complete.
Bug: 175808144
Cc: Choonghoon Park <choong.park@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I2beea9e233ae46289b40162c8e26dc6db919e7a9
This reverts commit fd84c02872 as it
conflicts extensively with the upstream scheduler changes in 6.1-rc1.
If it is still needed here, it must be reworked and added back
again.
Bug: 188684133
Cc: Huang Yiwei <hyiwei@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Iad842fe0f22204570343b94c870589430e46b7cf
Steps on the way to 6.1-rc1
Resolves merge conflicts in:
arch/arm64/Makefile
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I86d958d92b2441938f4dde17a0f330b8491fec7e
Steps on the way to 6.1-rc1
Resolves conflicts in:
drivers/misc/Makefile
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I94b3e85406106c9746ae026972d855f7910a0df2
Steps on the way to 6.1-rc1
Resolves conflicts in:
drivers/ufs/core/ufshcd.c
include/ufs/ufshcd.h
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Ic4a9c1e9a22a8aae818bf726e5fd8433f3a80dfa
Effectively a revert of
commit ad312f95d4 ("fs/select: avoid clang stack usage warning")
Various configs can still push the stack usage of core_sys_select()
over the CONFIG_FRAME_WARN threshold (1024B on 32b targets).
fs/select.c:619:5: error: stack frame size of 1048 bytes in function
'core_sys_select' [-Werror,-Wframe-larger-than=]
core_sys_select() has a large stack allocation for `stack_fds` where it
tries to do something equivalent to "small string optimization" to
potentially avoid a kmalloc.
core_sys_select() calls do_select() which has another potentially large
stack allocation, `table`. Both of these values depend on
FRONTEND_STACK_ALLOC.
Mix those two large allocations with register spills, which are
exacerbated by various configs and compiler versions, and we can just
barely exceed the 1024B limit.
Rather than keep trying to find the right value of MAX_STACK_ALLOC or
FRONTEND_STACK_ALLOC, mark do_select() as noinline_for_stack.
The intent of FRONTEND_STACK_ALLOC is to help potentially avoid a
dynamic memory allocation. In that spirit, restore the previous
threshold but separate the stack frames.
Many tests of various configs for different architectures and various
versions of GCC were performed; do_select() was never inlined into
core_sys_select() or compat_core_sys_select(). The kernel is built with
the GCC-specific flag `-fconserve-stack`, which can limit inlining
depending on per-target thresholds of callee stack size; this helps
avoid the issue when using GCC. Clang is more aggressive and does not
consider the stack size when deciding whether to inline or not. We may
consider using the clang-16+ flag `-finline-max-stacksize=` in the
future.
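As a hedged illustration of the mechanics (a minimal user-space sketch,
not the actual fs/select.c code; all names and sizes here are made up),
forcing the helper out of line keeps the two large allocations in
separate stack frames, so neither frame alone trips -Wframe-larger-than:
#include <stdio.h>
#include <string.h>
/* In the kernel this macro comes from <linux/compiler_types.h>. */
#define noinline_for_stack __attribute__((__noinline__))
static noinline_for_stack int do_select_like(void)
{
	char table[700];	/* stands in for do_select()'s poll table */
	memset(table, 1, sizeof(table));
	return table[0];
}
static int core_sys_select_like(void)
{
	long stack_fds[64];	/* stands in for core_sys_select()'s stack_fds */
	memset(stack_fds, 0, sizeof(stack_fds));
	/* Without the attribute, an aggressive inliner could merge both
	 * allocations into one oversized frame and exceed the 1024B limit. */
	return do_select_like() + (int)stack_fds[0];
}
int main(void)
{
	printf("%d\n", core_sys_select_like());
	return 0;
}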
Link: https://lore.kernel.org/lkml/20221006222124.aabaemy7ofop7ccz@google.com/
Fixes: ad312f95d4 ("fs/select: avoid clang stack usage warning")
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Bug: 250930332
Link: https://lore.kernel.org/llvm/20221011205547.14553-1-ndesaulniers@google.com/
Change-Id: I97d68bd05ec3ae5f6c09fc29055b9f88735c8cd6
clang-15's ability to elide loops completely became more aggressive when
it can deduce how a variable is being updated in a loop. Counting down
one variable by an increment of another can be replaced by a modulo
operation.
For 64b variables on 32b ARM EABI targets, this can result in the
compiler generating calls to __aeabi_uldivmod, which it does for a
do-while loop in float64_rem().
For the kernel, we'd generally prefer that developers not open code 64b
division via binary / operators and instead use the more explicit
helpers from div64.h. On arm-linux-gnueabi targets, failure to do so can
result in linkage failures due to undefined references to
__aeabi_uldivmod().
While developers can avoid open coding divisions on 64b variables, the
compiler doesn't know that the Linux kernel has a partial implementation
of a compiler runtime (--rtlib) to enforce this convention.
It's also undecidable for the compiler whether the code in question
would be faster executing the loop or eliding it in favor of the 64b
division.
While I actively avoid using the internal -mllvm command line flags, I
think we get better code than using barrier() here, which will force
reloads+spills in the loop for all toolchains.
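As a hedged sketch of the convention (kernel-style code written for
illustration, not taken from this patch):
#include <linux/math64.h>
#include <linux/types.h>
/* On 32b ARM EABI the open-coded `/` lowers to a call to
 * __aeabi_uldivmod, which the kernel's partial compiler runtime does
 * not provide, so the link fails. */
static u64 scale_open_coded(u64 bytes, u32 chunk)
{
	return bytes / chunk;
}
/* The explicit helper from div64.h links fine and documents the intent. */
static u64 scale_with_helper(u64 bytes, u32 chunk)
{
	return div_u64(bytes, chunk);
}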
Link: https://github.com/ClangBuiltLinux/linux/issues/1666
Reported-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Bug: 250930332
Link: https://lore.kernel.org/lkml/20221010225342.3903590-1-ndesaulniers@google.com/
Change-Id: I038f06e43420e72927956133a8fe39a3db21a9db
(without this, net tests fail; CONFIG_GKI_HACKS_TO_FIX
doesn't work, as it causes compilation failures due
to enabling tons of other things)
Bug: 252915518
Test: TreeHugger, manually with uml net nests
Signed-off-by: Maciej Żenczykowski <maze@google.com>
Change-Id: I9dae7f6be3828a1bdb71560dd9126ebed5cda9e5
Steps on the way to 6.1-rc1
Resolves merge conflicts in:
kernel/power/suspend.c
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I1a8e29f82573bc9927adb04a9de5ef9162062653
Let's try this to avoid lock contention, until we find a better solution.
Bug: 216636351
Bug: 248993649
Signed-off-by: Jaegeuk Kim <jaegeuk@google.com>
(cherry picked from commit ed0dec098e2c5f92d2578dd25cee57c1cddb499d)
Change-Id: If5bac90af38e7151952c8b842b89449fa584a537
Merge tag 'random-6.1-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random
Pull random number generator updates from Jason Donenfeld:
- Huawei reported that when they updated their kernel from 4.4 to
something much newer, some userspace code they had broke, the culprit
being the accidental removal of O_NONBLOCK from /dev/random way back
in 5.6. It's been gone for over 2 years now and this is the first
we've heard of it, but userspace breakage is userspace breakage, so
O_NONBLOCK is now back.
- Use randomness from hardware RNGs much more often during early boot,
at the same interval that crng reseeds are done, from Dominik.
- A semantic change in hardware RNG throttling, so that the hwrng
framework can properly feed random.c with randomness from hardware
RNGs that aren't specifically marked as creditable.
A related patch coming to you via Herbert's hwrng tree depends on
this one, not to compile, but just to function properly, so you may
want to merge this PULL before that one.
- A fix to clamp credited bits from the interrupts pool to the size of
the pool sample. This is mainly just a theoretical fix, as it'd be
pretty hard to exceed it in practice.
- Oracle reported that InfiniBand TCP latency regressed by around
10-15% after a change a few cycles ago made at the request of the RT
folks, in which we hoisted a somewhat rare operation (1 in 1024
times) out of the hard IRQ handler and into a workqueue, a pretty
common and boring pattern.
It turns out, though, that scheduling a worker from there has
overhead of its own, whereas scheduling a timer on that same CPU for
the next jiffy amortizes better and doesn't incur the same overhead.
I also eliminated a cache miss by moving the work_struct (and
subsequently, the timer_list) to below a critical cache line, so that
the more critical members that are accessed on every hard IRQ aren't
split between two cache lines.
- The boot-time initialization of the RNG has been split into two
approximate phases: what we can accomplish before timekeeping is
possible and what we can accomplish after.
This winds up being useful so that we can use RDRAND to seed the RNG
before CONFIG_SLAB_FREELIST_RANDOM=y systems initialize slabs, in
addition to other early uses of randomness. The effect is that
systems with RDRAND (or a bootloader seed) will never see any
warnings at all when setting CONFIG_WARN_ALL_UNSEEDED_RANDOM=y. And
kfence benefits from getting a better seed of its own.
- Small systems without much entropy sometimes wind up putting some
truncated serial number read from flash into hostname, so contribute
utsname changes to the RNG, without crediting.
- Add smaller batches to serve requests for smaller integers, and make
use of them when people ask for random numbers bounded by a given
compile-time constant. This has positive effects all over the tree,
most notably in networking and kfence (a sketch follows this list).
- The original jitter algorithm intended (I believe) to schedule the
timer for the next jiffy, not the next-next jiffy, yet it used
mod_timer(jiffies + 1), which will fire on the next-next jiffy,
instead of what I believe was intended, mod_timer(jiffies), which
will fire on the next jiffy. So fix that.
- Fix a comment typo, from William.
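As a hedged sketch of the bounded-random idea behind the smaller-batches
item above (illustrative code only; prandom_u32_max() is the real
interface): multiply-shift maps a full random word into [0, ceil), and a
narrow compile-time bound can draw from an 8-bit batch instead:
#include <linux/random.h>
#include <linux/types.h>
/* Multiply-shift: maps a uniform 32-bit word into [0, ceil) without a
 * division. */
static inline u32 bounded_random32(u32 ceil)
{
	return (u32)(((u64)get_random_u32() * (u64)ceil) >> 32);
}
/* With 8-bit batches available, a small constant bound only consumes
 * one byte of batched entropy (illustrative dispatch, not the real code). */
static inline u8 bounded_random8(u8 ceil)
{
	return (u8)(((u16)get_random_u8() * (u16)ceil) >> 8);
}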
* tag 'random-6.1-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random:
random: clear new batches when bringing new CPUs online
random: fix typos in get_random_bytes() comment
random: schedule jitter credit for next jiffy, not in two jiffies
prandom: make use of smaller types in prandom_u32_max
random: add 8-bit and 16-bit batches
utsname: contribute changes to RNG
random: use init_utsname() instead of utsname()
kfence: use better stack hash seed
random: split initialization into early step and later step
random: use expired timer rather than wq for mixing fast pool
random: avoid reading two cache lines on irq randomness
random: clamp credited irq bits to maximum mixed
random: throttle hwrng writes if no entropy is credited
random: use hwgenerator randomness more frequently at early boot
random: restore O_NONBLOCK support
Merge tag 'slab-for-6.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab
Pull slab fixes from Vlastimil Babka:
- The "common kmalloc v4" series [1] by Hyeonggon Yoo.
While the plan after LPC is to try again if it's possible to get rid
of SLOB and SLAB (and if any critical aspect of those is not possible
to achieve with SLUB today, modify it accordingly), it will take a
while even if there are no objections.
Meanwhile this is a nice cleanup and some parts (e.g. to the
tracepoints) will be useful even if we end up with a single slab
implementation in the future:
- Improves the mm/slab_common.c wrappers to allow deleting
duplicated code between SLAB and SLUB.
- Large kmalloc() allocations in SLAB are passed to page allocator
like in SLUB, reducing number of kmalloc caches.
- Removes the {kmem_cache_alloc,kmalloc}_node variants of
tracepoints; the node id parameter is added to the non-_node variants.
- Addition of kmalloc_size_roundup()
The first two patches from a series by Kees Cook [2] introduce
kmalloc_size_roundup(). This will allow merging of per-subsystem
patches using the new function and ultimately stop (ab)using ksize()
in a way that causes ongoing trouble for debugging functionality and
static checkers; a usage sketch follows below.
- Wasted kmalloc() memory tracking in debugfs alloc_traces
A patch from Feng Tang that enhances the existing debugfs
alloc_traces file for kmalloc caches with information about how much
space is wasted by allocations that need less space than the
particular kmalloc cache provides.
- My series [3] to fix validation races for caches with enabled
debugging:
- By decoupling the debug cache operation more from non-debug
fastpaths, extra locking simplifications were possible and thus
done afterwards.
- Additional cleanup of PREEMPT_RT specific code on top, by Thomas
Gleixner.
- A late fix for slab page leaks caused by the series, by Feng
Tang.
- Smaller fixes and cleanups:
- Unneeded variable removals, by ye xingchen
- A cleanup removing a BUG_ON() in create_unique_id(), by Chao Yu
Link: https://lore.kernel.org/all/20220817101826.236819-1-42.hyeyoo@gmail.com/ [1]
Link: https://lore.kernel.org/all/20220923202822.2667581-1-keescook@chromium.org/ [2]
Link: https://lore.kernel.org/all/20220823170400.26546-1-vbabka@suse.cz/ [3]
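As a hedged usage sketch of kmalloc_size_roundup() (an illustrative
caller, not from the series): ask up front how much kmalloc() will
actually hand back, instead of allocating and then probing with ksize():
#include <linux/slab.h>
/* grow_buf() is a hypothetical helper: allocate at least `want` bytes
 * and report the full usable size. */
static void *grow_buf(size_t want, size_t *got)
{
	size_t alloc = kmalloc_size_roundup(want);	/* e.g. 1000 -> 1024 */
	void *buf = kmalloc(alloc, GFP_KERNEL);

	if (buf)
		*got = alloc;	/* the whole rounded size is safe to use */
	return buf;
}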
* tag 'slab-for-6.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab: (30 commits)
mm/slub: fix a slab missed to be freed problem
slab: Introduce kmalloc_size_roundup()
slab: Remove __malloc attribute from realloc functions
mm/slub: clean up create_unique_id()
mm/slub: enable debugging memory wasting of kmalloc
slub: Make PREEMPT_RT support less convoluted
mm/slub: simplify __cmpxchg_double_slab() and slab_[un]lock()
mm/slub: convert object_map_lock to non-raw spinlock
mm/slub: remove slab_lock() usage for debug operations
mm/slub: restrict sysfs validation to debug caches and make it safe
mm/sl[au]b: check if large object is valid in __ksize()
mm/slab_common: move declaration of __ksize() to mm/slab.h
mm/slab_common: drop kmem_alloc & avoid dereferencing fields when not using
mm/slab_common: unify NUMA and UMA version of tracepoints
mm/sl[au]b: cleanup kmem_cache_alloc[_node]_trace()
mm/sl[au]b: generalize kmalloc subsystem
mm/slub: move free_debug_processing() further
mm/sl[au]b: introduce common alloc/free functions without tracepoint
mm/slab: kmalloc: pass requests larger than order-1 page to page allocator
mm/slab_common: cleanup kmalloc_large()
...
Merge tag 'timers-core-2022-10-05' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull timer updates from Thomas Gleixner:
"A boring time, timekeeping, timers update:
- No core code changes
- No new clocksource/event driver
- Cleanup of the TI DM clocksource/event driver
- The usual set of device tree binding updates
- Small improvement, fixes and cleanups all over the place"
* tag 'timers-core-2022-10-05' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (22 commits)
clocksource/drivers/arm_arch_timer: Fix CNTPCT_LO and CNTVCT_LO value
clocksource/drivers/imx-sysctr: handle nxp,no-divider property
dt-bindings: timer: nxp,sysctr-timer: add nxp,no-divider property
clocksource/drivers/timer-ti-dm: Get clock in probe with devm_clk_get()
clocksource/drivers/timer-ti-dm: Add flag to detect omap1
clocksource/drivers/timer-ti-dm: Move struct omap_dm_timer fields to driver
clocksource/drivers/timer-ti-dm: Use runtime PM directly and check errors
clocksource/drivers/timer-ti-dm: Move private defines to the driver
clocksource/drivers/timer-ti-dm: Simplify register access further
clocksource/drivers/timer-ti-dm: Simplify register writes with dmtimer_write()
clocksource/drivers/timer-ti-dm: Simplify register reads with dmtimer_read()
clocksource/drivers/timer-ti-dm: Drop unused functions
clocksource/drivers/timer-gxp: Add missing error handling in gxp_timer_probe
clocksource/drivers/arm_arch_timer: Fix handling of ARM erratum 858921
clocksource/drivers/exynos_mct: Enable building on ARTPEC
clocksource/drivers/exynos_mct: Support local-timers property
clocksource/drivers/exynos_mct: Support frc-shared property
dt-bindings: timer: exynos4210-mct: Add ARTPEC-8 MCT support
clocksource/drivers/sun4i: Add definition of clear interrupt
clocksource/drivers/renesas-ostm: Add support for RZ/V2L SoC
...
Merge tag 'sched-rt-2022-10-05' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull preempt RT updates from Thomas Gleixner:
"Introduce preempt_[dis|enable_nested() and use it to clean up various
places which have open coded PREEMPT_RT conditionals.
On PREEMPT_RT enabled kernels, spinlocks and rwlocks are neither
disabling preemption nor interrupts. Though there are a few places
which depend on the implicit preemption/interrupt disable of those
locks, e.g. seqcount write sections, per CPU statistics updates etc.
PREEMPT_RT added open coded CONFIG_PREEMPT_RT conditionals to
disable/enable preemption in the related code parts all over the
place. That's hard to read and does not really explain why this is
necessary.
Linus suggested to use helper functions (preempt_disable_nested() and
preempt_enable_nested()) and use those in the affected places. On !RT
enabled kernels these functions are NOPs, but contain a lockdep assert
to validate that preemption is actually disabled to catch call sites
which do not have preemption disabled.
Clean up the affected code paths in mm, dentry and lib"
* tag 'sched-rt-2022-10-05' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
u64_stats: Streamline the implementation
flex_proportions: Disable preemption entering the write section.
mm/compaction: Get rid of RT ifdeffery
mm/memcontrol: Replace the PREEMPT_RT conditionals
mm/debug: Provide VM_WARN_ON_IRQS_ENABLED()
mm/vmstat: Use preempt_[dis|en]able_nested()
dentry: Use preempt_[dis|en]able_nested()
preempt: Provide preempt_[dis|en]able_nested()
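A hedged sketch of the semantics of the helpers described in this pull
(a re-derivation for illustration; the real definitions live in
include/linux/preempt.h):
#include <linux/preempt.h>
#include <linux/lockdep.h>
/* sketch_* names are used to avoid clashing with the real helpers. */
static __always_inline void sketch_preempt_disable_nested(void)
{
	if (IS_ENABLED(CONFIG_PREEMPT_RT))
		preempt_disable();	/* RT: the lock did not disable preemption */
	else
		lockdep_assert_preemption_disabled();	/* !RT: must already be off */
}
static __always_inline void sketch_preempt_enable_nested(void)
{
	if (IS_ENABLED(CONFIG_PREEMPT_RT))
		preempt_enable();
}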
- Remove the "ANNOTATE_NOENDBR on ENDBR" warning: it's not
really useful and only found a non-bug false positive so far.
- Properly decode LOOP/LOOPE/LOOPNE, which were missing from
the x86 decoder. Because these instructions are rather
ineffective, they never showed up in compiler output,
but they are simple enough to support, so add them for
completeness.
- A bit more cross-arch preparatory work.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-----BEGIN PGP SIGNATURE-----
iQJFBAABCgAvFiEEBpT5eoXrXCwVQwEKEnMQ0APhK1gFAmM/4C4RHG1pbmdvQGtl
cm5lbC5vcmcACgkQEnMQ0APhK1golg//cWCQqnPy1JDwQIF5zm1oNXAZlcyDDfgp
066NP8IlxajWgkPLunRSzId1ZQW4dppk7ccqI/J6mvNgKJe+nKGO1GXJ94aElxmr
XJdJm8P3R+sYggfVtxMWqZr5cy9JTa7P/4bKDEC7hyP63FXZkOFA7Fk7PZTU/lKm
gQ7IW8CKt+kSqygk8xBTS1gW7EfQWo0hA1mZN75ewwlJP0MJVlECKCfFEjQEVTJb
z4M/iCybJolhiAGktGBM+bj8YijN2/ZvZ9zIC90neMaTLNFY6zMlUbV9WmVT5X+F
1UFlKczNpOxwk3/IGVlwn3IzDToxYk1LMkWc7W7bfcu15aPvJUSR/QJjsRoGsLtX
R4Cpa8dKkvDkbnMEUV9Y93AUehF6WO5ZEzPieCxEQLLieZD+XWQdSIiCugoB3+Kr
BD1n9iE/Gm7IMHAp1yVkNGcMclu4Co9Cc36v71SzfuJznqf+Nv6gm/qy4KCofrZR
C1CCnP/y1Yez4Z0avsbsTqQtCr/TCE7Nf8GmV3fL4rMtxG5eTap7X1GswiGRuL/G
uTe9oKBHuSYBwrVK14/cqtAaPkSy+tIXb2GZUMfdYwx6baLTnz9MGhhXaOyDEOMw
uwt0qd4cXmUqmAhyi+SGRsP4v9HVdmyQFonm8v2gymUltBrb5hWteFOKVPXgt9E+
4wo/pldrr+A=
=D5iK
-----END PGP SIGNATURE-----
Merge tag 'objtool-core-2022-10-07' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull objtool updates from Ingo Molnar:
- Remove the "ANNOTATE_NOENDBR on ENDBR" warning: it's not really
useful and only found a non-bug false positive so far.
- Properly decode LOOP/LOOPE/LOOPNE, which were missing from the x86
decoder. Because these instructions are rather inefficient, they
never showed up in compiler output, but they are simple enough to
support, so add them for completeness.
- A bit more cross-arch preparatory work.
* tag 'objtool-core-2022-10-07' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
objtool,x86: Teach decode about LOOP* instructions
objtool: Remove "ANNOTATE_NOENDBR on ENDBR" warning
objtool: Use arch_jump_destination() in read_intra_function_calls()
Merge tag 'locking-core-2022-10-07' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking updates from Ingo Molnar:
- Disable preemption in rwsem_write_trylock()'s attempt to take the
rwsem, to avoid RT tasks hogging the CPU, which managed to preempt
this function after the owner has been cleared but before a new owner
is set. Also add debug checks to enforce this.
- Add __lockfunc to more slow path functions and add __sched to
semaphore functions.
- Mark spinlock APIs noinline when the respective CONFIG_INLINE_SPIN_*
toggles are disabled, to reduce LTO text size.
- Print more debug information when lockdep gets confused in
look_up_lock_class().
- Improve header file abuse checks.
- Misc cleanups
* tag 'locking-core-2022-10-07' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
locking/lockdep: Print more debug information - report name and key when look_up_lock_class() got confused
locking: Add __sched to semaphore functions
locking/rwsem: Disable preemption while trying for rwsem lock
locking: Detect includes rwlock.h outside of spinlock.h
locking: Add __lockfunc to slow path functions
locking/spinlocks: Mark spinlocks noinline when inline spinlocks are disabled
selftests: futex: Fix 'the the' typo in comment
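A hedged sketch of the shape of the rwsem change described above
(illustrative only; rwsem_try_write_lock_fastpath() is a hypothetical
stand-in for the real cmpxchg on sem->count, and rwsem_set_owner() is
internal to kernel/locking/rwsem.c):
#include <linux/preempt.h>
#include <linux/rwsem.h>
static inline bool write_trylock_sketch(struct rw_semaphore *sem)
{
	bool locked;

	/* Keep the window between taking the count and publishing the
	 * owner non-preemptible, so an RT task cannot park us in between. */
	preempt_disable();
	locked = rwsem_try_write_lock_fastpath(sem);	/* hypothetical */
	if (locked)
		rwsem_set_owner(sem);
	preempt_enable();
	return locked;
}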
Merge tag 'perf-core-2022-10-07' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf events updates from Ingo Molnar:
"PMU driver updates:
- Add AMD Last Branch Record Extension Version 2 (LbrExtV2) feature
support for Zen 4 processors.
- Extend the perf ABI to provide branch speculation information, if
available, and use this on CPUs that have it (e.g. LbrExtV2).
- Improve Intel PEBS TSC timestamp handling & integration.
- Add Intel Raptor Lake S CPU support.
- Add 'perf mem' and 'perf c2c' memory profiling support on AMD CPUs
by utilizing IBS tagged load/store samples.
- Clean up & optimize various x86 PMU details.
HW breakpoints:
- Big rework to optimize the code for systems with hundreds of CPUs
and thousands of breakpoints:
- Replace the nr_bp_mutex global mutex with the bp_cpuinfo_sem
per-CPU rwsem that is read-locked during most of the key
operations.
- Improve the O(#cpus * #tasks) logic in toggle_bp_slot() and
fetch_bp_busy_slots().
- Apply micro-optimizations & cleanups.
- Misc cleanups & enhancements"
* tag 'perf-core-2022-10-07' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (75 commits)
perf/hw_breakpoint: Annotate tsk->perf_event_mutex vs ctx->mutex
perf: Fix pmu_filter_match()
perf: Fix lockdep_assert_event_ctx()
perf/x86/amd/lbr: Adjust LBR regardless of filtering
perf/x86/utils: Fix uninitialized var in get_branch_type()
perf/uapi: Define PERF_MEM_SNOOPX_PEER in kernel header file
perf/x86/amd: Support PERF_SAMPLE_PHY_ADDR
perf/x86/amd: Support PERF_SAMPLE_ADDR
perf/x86/amd: Support PERF_SAMPLE_{WEIGHT|WEIGHT_STRUCT}
perf/x86/amd: Support PERF_SAMPLE_DATA_SRC
perf/x86/amd: Add IBS OP_DATA2 DataSrc bit definitions
perf/mem: Introduce PERF_MEM_LVLNUM_{EXTN_MEM|IO}
perf/x86/uncore: Add new Raptor Lake S support
perf/x86/cstate: Add new Raptor Lake S support
perf/x86/msr: Add new Raptor Lake S support
perf/x86: Add new Raptor Lake S support
bpf: Check flags for branch stack in bpf_read_branch_records helper
perf, hw_breakpoint: Fix use-after-free if perf_event_open() fails
perf: Use sample_flags for raw_data
perf: Use sample_flags for addr
...
Merge tag 'sched-core-2022-10-07' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler updates from Ingo Molnar:
"Debuggability:
- Change most occurrences of BUG_ON() to WARN_ON_ONCE()
- Reorganize & fix TASK_ state comparisons, turn it into a bitmap
- Update/fix misc scheduler debugging facilities
Load-balancing & regular scheduling:
- Improve the behavior of the scheduler in the presence of lots of
SCHED_IDLE tasks - in particular they should not impact other
scheduling classes.
- Optimize task load tracking, cleanups & fixes
- Clean up & simplify misc load-balancing code
Freezer:
- Rewrite the core freezer to behave better wrt thawing and be
simpler in general, by replacing PF_FROZEN with TASK_FROZEN &
fixing/adjusting all the fallout.
Deadline scheduler:
- Fix the DL capacity-aware code
- Factor out dl_task_is_earliest_deadline() &
replenish_dl_new_period()
- Relax/optimize locking in task_non_contending()
Cleanups:
- Factor out the update_current_exec_runtime() helper
- Various cleanups, simplifications"
* tag 'sched-core-2022-10-07' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (41 commits)
sched: Fix more TASK_state comparisons
sched: Fix TASK_state comparisons
sched/fair: Move call to list_last_entry() in detach_tasks
sched/fair: Cleanup loop_max and loop_break
sched/fair: Make sure to try to detach at least one movable task
sched: Show PF_flag holes
freezer,sched: Rewrite core freezer logic
sched: Widen TAKS_state literals
sched/wait: Add wait_event_state()
sched/completion: Add wait_for_completion_state()
sched: Add TASK_ANY for wait_task_inactive()
sched: Change wait_task_inactive()s match_state
freezer,umh: Clean up freezer/initrd interaction
freezer: Have {,un}lock_system_sleep() save/restore flags
sched: Rename task_running() to task_on_cpu()
sched/fair: Cleanup for SIS_PROP
sched/fair: Default to false in test_idle_cores()
sched/fair: Remove useless check in select_idle_core()
sched/fair: Avoid double search on same cpu
sched/fair: Remove redundant check in select_idle_smt()
...
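A hedged sketch of the bitmap-style TASK_ state checks referenced in the
pull above (illustrative bit values, not the kernel's; the real masks
live in include/linux/sched.h):
#define ST_INTERRUPTIBLE	0x01	/* illustrative state bits */
#define ST_UNINTERRUPTIBLE	0x02
#define ST_FROZEN		0x04
static inline int state_matches(unsigned int state, unsigned int mask)
{
	/* A task in (ST_UNINTERRUPTIBLE | ST_FROZEN) still matches an
	 * ST_UNINTERRUPTIBLE query; an equality test would miss it. */
	return (state & mask) != 0;
}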
Add one CONFIG to control whether or not the macros are removed. On
some platforms, configuring out the macros removes the associated
members from the structs; this reduces the object size of the slabs
related to those structs and therefore reduces the total slab memory
consumption of the system. Besides, this also reduces the vmlinux size
a bit, so the total kernel memory size decreases a bit.
The macros are ANDROID_KABI_RESERVE, ANDROID_VENDOR_DATA,
ANDROID_VENDOR_DATA_ARRAY, ANDROID_OEM_DATA, ANDROID_OEM_DATA_ARRAY.
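A hedged sketch of the pattern (modeled on android_kabi.h; the CONFIG
name here is illustrative): with the option off, the macro expands to
nothing and the padding members vanish from every struct that uses them:
#include <linux/types.h>
#ifdef CONFIG_ANDROID_KABI_RESERVE		/* illustrative option name */
#define ANDROID_KABI_RESERVE(n)	u64 android_kabi_reserved##n
#else
#define ANDROID_KABI_RESERVE(n)
#endif
struct example {				/* hypothetical struct */
	int in_use;
	ANDROID_KABI_RESERVE(1);	/* 8 bytes with the option, 0 without */
	ANDROID_KABI_RESERVE(2);
};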
Bug: 206561931
Signed-off-by: Qingqing Zhou <quic_qqzhou@quicinc.com>
Change-Id: Iea4b962dff386a17c9bef20ae048be4e17bf43ab
(cherry picked from commit b7a6c15a6f06cc7e324c8d63b87cdd06b5597851)
Signed-off-by: Jaskaran Singh <quic_jasksing@quicinc.com>
(cherry picked from commit 3c06a5ce5e5857a1dff88a784e58e32bbddaf759)
The __UNIQUE_ID() macro causes problems as it turns out not to be
deterministic across different compiler runs, since it relies on the
__COUNTER__ macro, which could already have been used by other .h files
included before this one.
This shows up specifically when building with "LTO=thin" vs. "LTO=full",
as different build paths seem to be triggered.
As the structure name isn't really needed at all here (we were only
including it for older compilers that could not handle anonymous
structures in a union), just drop the whole thing, which resolves the
ABI naming issue.
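A hedged illustration of the non-determinism (the paste macros
paraphrase include/linux/compiler_types.h): __COUNTER__ counts per
translation unit, so any earlier header that consumes it shifts every
later __UNIQUE_ID() name:
#define ___PASTE(a, b)		a##b
#define __PASTE(a, b)		___PASTE(a, b)
#define __UNIQUE_ID(prefix)	__PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
int __UNIQUE_ID(foo);	/* __UNIQUE_ID_foo0 in one build... */
int __UNIQUE_ID(foo);	/* ...or foo1/foo2 when an earlier header already used __COUNTER__ */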
Bug: 210255585
Reported-by: Giuliano Procida <gprocida@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I6b9449fa9d26ffc5d66b2f0f3b41e2d5f3003f68
(cherry picked from commit f8b361d17da5fa125a9e788d3088599216f33019)
Signed-off-by: Jaskaran Singh <quic_jasksing@quicinc.com>
This header file is to be used for various macros that help keep the
kernel ABI "stable" during an "ABI Freeze" period.
They are to be used both before the freeze (to anticipate places where
there will be changes) and after the freeze (to keep the ABI stable for
structures that changed due to LTS or other changes to the kernel
tree).
Strongly based on rh_kabi.h from Red Hat's RHEL kernel tree.
This adds support for "real" padding and the ability to replace fields
with other fields.
But note that ABI changes will still be caught by libabigail at this
point in time; work on that is still ongoing. When that is completed,
all that will be needed is to modify the _ANDROID_KABI_RESERVE() macro
in this file. No other file changes should be needed.
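A hedged sketch of the field-replacement idea (shaped like rh_kabi.h's
macro, simplified; the real macro also size-checks old vs. new): a union
overlays the new field on the old field's storage, so the struct layout,
and hence the ABI, is unchanged:
#define _ANDROID_KABI_REPLACE(_orig, _new)	\
	union {					\
		_new;				\
		struct {			\
			_orig;			\
		};				\
	}
struct sample {					/* hypothetical struct */
	_ANDROID_KABI_REPLACE(void *old_ptr, unsigned long new_cookie);
};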
Bug: 151154716
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I77038cc251c819c3ed22a9cb8843b185416b6727
(cherry picked from commit d2eb8b0028ecf947ba2fe64bb7a46b6284e0fa99)
Signed-off-by: Jaskaran Singh <quic_jasksing@quicinc.com>
After the ucount rlimit code was merged a bunch of small but
significant bugs were found and fixed. At the time it was realized
that part of the problem was that while the ucount rlimits were very
similar to the ordinary ucounts (in being nested counts with limits)
the semantics were slightly different and the code would be less error
prone if there was less sharing. This is the long-awaited cleanup
that should hopefully keep things more comprehensible and less error
prone for whoever needs to touch that code next.
Alexey Gladkov (1):
ucounts: Split rlimit and ucount values and max values
fs/exec.c | 2 +-
fs/proc/array.c | 2 +-
include/linux/user_namespace.h | 35 ++++++++++++++++++++++-------------
kernel/fork.c | 12 ++++++------
kernel/sys.c | 2 +-
kernel/ucount.c | 34 +++++++++++++++-------------------
kernel/user_namespace.c | 10 +++++-----
7 files changed, 51 insertions(+), 46 deletions(-)
Merge tag 'ucount-rlimits-cleanups-for-v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace
Pull ucounts update from Eric Biederman:
"Split rlimit and ucount values and max values
After the ucount rlimit code was merged a bunch of small but
significant bugs were found and fixed. At the time it was realized
that part of the problem was that while the ucount rlimits were very
similar to the ordinary ucounts (in being nested counts with limits)
the semantics were slightly different and the code would be less error
prone if there was less sharing.
This is the long-awaited cleanup that should hopefully keep things
more comprehensible and less error prone for whoever needs to touch
that code next"
* tag 'ucount-rlimits-cleanups-for-v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace:
ucounts: Split rlimit and ucount values and max values
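A hedged sketch of the resulting layout (field names approximate
include/linux/user_namespace.h after the change): rlimit-style counters
get their own array and max values, separate from the ordinary ucounts:
#include <linux/user_namespace.h>	/* UCOUNT_COUNTS, UCOUNT_RLIMIT_COUNTS */
struct ucounts_sketch {			/* approximates struct ucounts */
	atomic_t count;					/* reference count */
	atomic_long_t ucount[UCOUNT_COUNTS];		/* plain ucounts */
	atomic_long_t rlimit[UCOUNT_RLIMIT_COUNTS];	/* split-out rlimits */
};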
Recently I had a conversation where it was pointed out to me that
SIGKILL sent to a tracee stopped in PTRACE_EVENT_EXIT is quite
difficult for a tracer to handle.
Keeping SIGKILL working after the process has been killed is a pain
from an implementation point of view.
So since the debuggers don't want this behavior, let's see if we can
remove this wart from the userspace API.
If a regression is detected, it should only need to be the last change
that is reverted. The other two are just general cleanups that
make the last patch simpler.
Eric W. Biederman (3):
signal: Ensure SIGNAL_GROUP_EXIT gets set in do_group_exit
signal: Guarantee that SIGNAL_GROUP_EXIT is set on process exit
signal: Drop signals received after a fatal signal has been processed
fs/coredump.c | 2 +-
include/linux/sched/signal.h | 1 +
kernel/exit.c | 20 +++++++++++++++++++-
kernel/fork.c | 2 ++
kernel/signal.c | 3 ++-
5 files changed, 25 insertions(+), 3 deletions(-)
Merge tag 'signal-for-v5.20' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace
Pull ptrace update from Eric Biederman:
"ptrace: Stop supporting SIGKILL for PTRACE_EVENT_EXIT
Recently I had a conversation where it was pointed out to me that
SIGKILL sent to a tracee stopped in PTRACE_EVENT_EXIT is quite
difficult for a tracer to handle.
Keeping SIGKILL working after the process has been killed is a pain
from an implementation point of view.
So since the debuggers don't want this behavior, let's see if we can
remove this wart from the userspace API.
If a regression is detected, it should only need to be the last change
that is reverted. The other two are just general cleanups that
make the last patch simpler"
* tag 'signal-for-v5.20' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace:
signal: Drop signals received after a fatal signal has been processed
signal: Guarantee that SIGNAL_GROUP_EXIT is set on process exit
signal: Ensure SIGNAL_GROUP_EXIT gets set in do_group_exit
A fix for an unlikely but possible memory leak.
Hangyu Hua (1):
ipc: mqueue: fix possible memory leak in init_mqueue_fs()
ipc/mqueue.c | 1 +
1 file changed, 1 insertion(+)
Merge tag 'retire_mq_sysctls-for-v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace
Pull mqueue fix from Eric Biederman:
"A fix for an unlikely but possible memory leak"
* tag 'retire_mq_sysctls-for-v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace:
ipc: mqueue: fix possible memory leak in init_mqueue_fs()