Merge tag 'core-core-2022-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull irqpoll update from Thomas Gleixner:
 "A single update for irqpoll:

  Ensure that a raised soft interrupt is handled after pulling the
  blk_cpu_iopoll backlog from an unplugged CPU. This prevents the CPU
  which runs that code from reaching idle with soft interrupts pending"

* tag 'core-core-2022-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  lib/irq_poll: Prevent softirq pending leak in irq_poll_cpu_dead()
Linus Torvalds 2022-05-23 16:37:35 -07:00
commit 4b57dccc42

lib/irq_poll.c

@@ -188,14 +188,18 @@ EXPORT_SYMBOL(irq_poll_init);
 static int irq_poll_cpu_dead(unsigned int cpu)
 {
 	/*
-	 * If a CPU goes away, splice its entries to the current CPU
-	 * and trigger a run of the softirq
+	 * If a CPU goes away, splice its entries to the current CPU and
+	 * set the POLL softirq bit. The local_bh_disable()/enable() pair
+	 * ensures that it is handled. Otherwise the current CPU could
+	 * reach idle with the POLL softirq pending.
 	 */
+	local_bh_disable();
 	local_irq_disable();
 	list_splice_init(&per_cpu(blk_cpu_iopoll, cpu),
 			 this_cpu_ptr(&blk_cpu_iopoll));
 	__raise_softirq_irqoff(IRQ_POLL_SOFTIRQ);
 	local_irq_enable();
+	local_bh_enable();
 
 	return 0;
 }
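
Why the local_bh_disable()/local_bh_enable() pair is sufficient: __raise_softirq_irqoff() only
marks IRQ_POLL_SOFTIRQ pending, while the final local_bh_enable() processes any softirqs that
became pending inside the BH-disabled section before the caller continues. The snippet below is
an illustrative sketch only, not the kernel's actual local_bh_enable() implementation (it ignores
preempt_count accounting); local_softirq_pending(), in_interrupt() and do_softirq() are existing
kernel interfaces.

#include <linux/interrupt.h>
#include <linux/preempt.h>

/*
 * Sketch of the effect the fix relies on: conceptually, the last
 * local_bh_enable() does something like this, so a softirq raised
 * while BHs were disabled runs before irq_poll_cpu_dead() returns.
 */
static void bh_enable_effect_sketch(void)
{
	if (local_softirq_pending() && !in_interrupt())
		do_softirq();	/* IRQ_POLL_SOFTIRQ is handled here */
}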
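
For context, irq_poll_cpu_dead() runs as a CPU-hotplug "dead" callback. The setup code below is
an approximate reconstruction of the registration in lib/irq_poll.c (quoted from memory, so minor
details may differ); it shows where the callback is hooked up via cpuhp_setup_state_nocalls().

static __init int irq_poll_setup(void)
{
	int i;

	/* each CPU gets its own iopoll backlog list */
	for_each_possible_cpu(i)
		INIT_LIST_HEAD(&per_cpu(blk_cpu_iopoll, i));

	open_softirq(IRQ_POLL_SOFTIRQ, irq_poll_softirq);
	/* invoke irq_poll_cpu_dead() after a CPU has gone offline */
	cpuhp_setup_state_nocalls(CPUHP_IRQ_POLL_DEAD, "irq_poll:dead",
				  NULL, irq_poll_cpu_dead);
	return 0;
}
subsys_initcall(irq_poll_setup);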